This only got announced at the end of another thread in the Analytics and reporting forum (See https://moodle.org/mod/forum/discuss.php?d=209008#p957858.) and no-one seems to have responded. I only saw the spec through Recent Changes in the developer docs, and had to Google to find the forum thread. Anyway, it seemed worth starting a thread here, on behalf of Moodlerooms.
The spec is here: http://docs.moodle.org/dev/Outcomes_Specification and 7 other linked pages.
Really, someone should add the Project template to that page. (See http://docs.moodle.org/dev/Template:Infobox_Project.) Also, some consistent navigation between the 8 pages of that spec would make it easier to cope with. (See, for example, http://docs.moodle.org/dev/Question_Engine_2.)
OK, having got the administrivia out of the way, I will give my actual comments on the spec in another post.
1. I see lots of UI mockups (good), but I don't see a database schema (bad). A DB schema would help make the data model clear.
2. Also, I don't see a list of how this functionality breaks down into Moodle plugins. These are developer-focussed bits of information, but it is important to get it right, so please put it somewhere in the spec. Of course, some of it will be a new core component, but some things, like the reports, should clearly be plugins.
3. While most of the spec looks very sensible, some bits strike me as highly speculative. The first one I came to is the bit about 'streaks' and 'risk metrics'. This is clearly an area where research and experimentation is needed, so these crazy algorithms need to be consigned to some sort of plug-in, not hard-coded in the core of the system.
(Apologies if there is a statistically rigorous theory behind what you have specified, but if there is, please can you give some proper references.) (Thinking about this is what led me to make the preceding Point 2.)
4. http://docs.moodle.org/dev/Outcomes_Instructor_Specification seems to be incomplete as yet. Naturally, as quiz/question maintainer, that is the bit I am most interested in ("Instructor Maps Quiz Questions Against Standards".) If you would like to discuss this with me, please ping me and we can arrange a time to talk on Skype, Google hangout, or something.
5. But can I ask for clarification about how you intend to implement that outcome mapping. I can see two possibilities:
- a. An outcome is a property of the question in isolation. That is, it is metadata stored alongside the question in the question bank.
- b. An outcome is a property of the question in a particular quiz. That is, it is stored alongside the information about which questions are in this quiz (the quiz_question_instances table).
6. Regarding "Outcomes Student Use Cases", particularly keeping a permanent record. One way you are supposed to be able to export a permanent record of what you have done from Moodle is using the 'portfolio' API. Did you consider that? (That would give on-demand, rather than automatic, transfer of the information somewhere else, so it is probably not what is wanted, but considering the portfolio API / badges / outcomes together is probably worthwhile in terms of keeping Moodle's functionality coherent. Probably outside the scope of this spec, and one for MD.)
7. Re: "Course Outcomes - Student View". Presumably everywhere the name of an activity appears, it will be a link.
8. Have you thought about what to do where an outcome was achieved in an activity that has now been hidden?
9. Outcomes currently get displayed in the gradebook. How will the gradebook change as a result of this spec?
10. Removing dependency on scales seems like a win to me, but I am probably not qualified to comment on that. I can see it being a useful option, but only an option.
So, in summary:
- Good set of requirements.
- Mostly good UI mock-ups.
- Before you start writing code, please share the DB schema changes and breakdown of code into plugins with us for further comment.
- I hope others will join in the discussion of the details of the functionality, so they get properly validated before we move on.
Thanks for reviewing. I appreciate the comments, and I'll try to respond to some of them. First of all, I just posted this spec over the last couple of weeks, and I still have some cleanup, and some additional screens that I need to write requirements around. Secondly, this is just the functional spec. At Moodlerooms, we generally write full functional specifications, review them with our clients and end users, and then move on to technical specifications. I'll be doing a formal review of the functional spec with ten or twelve end users, probably in about two weeks. I expect to have a few changes, but I've already been socializing this with many of them, and the feedback has been positive. Kris Stokking and Mark Nielsen will be doing the tech specification. I believe that they've started thinking about it, but I don't think that they have posted it. I'll follow up with them, and we'll find a place to host it.
Okay, so, let me hit your questions or comments in order:
#1,#2 - Per my note, this is still pre-tech design. I'll leave that to people who are smarter than I am. (I'm a developer, but I haven't been developing for several years.)
#3 - I have some research on this, which I'll dig up tomorrow and include in the spec. However, I have a bit more work to do here. A couple of broad concepts I am working under: 1) Plugin System - By making this a pluggable system, we allow people to have different methods of completion for different types of programs. 2) We will probably only build the simplest ones by default. I'm thinking of just one based on % correct and maybe one based on streaks. 3) All of these are suggestive, rather than conclusive. One screen which I am currently mocking up is the Instructor Marking screen. Instructors can choose one of the completion metrics, which will suggest which Outcomes have been achieved, but they can manually mark them or unmark them as well.
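To make the two default completion metrics concrete, here is a rough sketch of the "% correct" and "streaks" ideas. This is Python purely for illustration (the real thing would be a Moodle plugin type in PHP), and all names are hypothetical; note that both functions only *suggest* completion, matching the point above that instructors can override the recommendation.

```python
# Illustrative sketch only: the real implementation would be a pluggable
# Moodle achievement metric, and these function names are hypothetical.

def percent_correct_met(attempts, threshold=0.8):
    """Suggest 'met' when the overall proportion correct reaches a threshold."""
    if not attempts:
        return False
    return sum(attempts) / len(attempts) >= threshold

def streak_met(attempts, streak_length=3):
    """Suggest 'met' after streak_length correct answers in a row."""
    run = 0
    for correct in attempts:
        run = run + 1 if correct else 0
        if run >= streak_length:
            return True
    return False

# attempts: True = a correct response on a question mapped to this outcome
history = [True, False, True, True, True]
suggestion = percent_correct_met(history) or streak_met(history)
```

Either metric produces a yes/no recommendation from the same attempt history, which is what makes swapping them behind a common plugin interface plausible.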
#4 - There's still a bit of work here (see #3). On the quiz stuff, we should clearly chat sometime in the next couple of weeks.
#5 - In answer to your specific question, I'd personally like to see the outcome as a property of the question itself, and independent of the quiz on which it is attempted. Mapping questions is a time consuming process, and including it on the question will encourage re-use of the question. In my research on how outcomes/standards are used in most professional schools and primary/secondary institutions, a well-written question is typically very granular, and maps against one outcome, and likely couldn't be mapped against a second outcome. However, we won't do anything to prohibit mapping against multiple outcomes.
#6 - Martin and I have talked about this at a high level, and I agree, this needs to tie into badges. I will defer technical implementation details to the tech team and MD. I think we all agree on the goals, however.
#7 - Yes. If that is not clear, I will update on my next revision.
#8 - No. Haven't thought about it, but I will think about it this week, and update, if necessary.
#9 - Gradebook - More work needed here for sure. I can share that very few clients are using outcomes as they are implemented today. We did some data mining on client usage. Of our approximately 900 clients, only about 25 are actually using outcomes. So, while I want data to migrate for those clients, the current implementation is not meeting the needs. In my spec review focus group in a couple of weeks, that seems like a good topic to probe with those few that are using the functionality.
#10 - Scales - So far, of the 20 or 30 people who have reviewed this specification (including our training and implementation team, who have been training people on how to use gradebook, scales, and outcomes), not one has asked that we maintain the dependency on scales. I'll wait for more people to chime in, but I don't know how we can leave the dependency on scales, unless we were to make "Scales" one of the achievement plugins (mentioned in the spec and above).
Thanks again, and thanks for starting the discussion. I'll keep working, and we'll hope that other people start getting involved as well!
Thanks for your detailed answers.
#5 - I see that I forgot to give my opinion, which I am afraid is the reverse of yours. I think question outcomes are context-specific. That is, it is a property of the question in a particular quiz. To justify that, consider these use-cases:
- Teachers in different states collaborating on a question bank of maths questions, where they all have to map them to their own State standards.
- Similarly, text-book publishers wanting to sell question banks.
- Or, a question that could be seen to assess two different outcomes A and B in one standard. However, in the quiz we are building now, we are only interested in assessing A, so we want to ignore the contribution to B, since it is irrelevant noise.
I can see the scope for meta-data on the questions, to help teachers set things up (e.g. "if you are adding to a quiz using this standard, the likely outcomes are A and/or B"), but I think teachers need the flexibility at quiz building time.
I do have a vested interest here. A long-standing feature request, that applies even if outcomes are not in use, is to have one quiz report multiple scores to the gradebook. (The gradebook can cope with one activity mapping to several grade-items, like the Workshop module does with the score for submission and score for grading.) I would like to implement that one day, and I don't think it should only be available when outcomes are in use.
Of course, if the quiz had that functionality, then it would help set up the outcomes thing. The quiz would set up one grade-item for each outcome, and then the outcomes code would only have to look at that grade-item. That would give good decoupling between the quiz code and the outcomes code.
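The decoupling idea above can be sketched with a toy data shape: the quiz turns its per-outcome scores into one grade-item each, and the outcomes code never needs to know anything about questions. This is Python for brevity and the shapes are invented for illustration, not the real gradebook API.

```python
# Hypothetical sketch: one grade item per mapped outcome, so the
# outcomes code only ever sees grade items, never quiz internals.

def quiz_grade_items(quizname, outcome_scores):
    """Turn per-outcome quiz scores into one grade-item record each."""
    return [
        {"itemname": f"{quizname}: {outcome}", "outcome": outcome, "grade": score}
        for outcome, score in outcome_scores.items()
    ]

# The quiz scored outcome A at 90% and outcome B at 40%.
items = quiz_grade_items("Quiz 1", {"A": 0.9, "B": 0.4})
```

The outcomes side would then read `items` (or their stored equivalents) without any reference to the quiz module at all, which is the decoupling being argued for.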
A similar bit of design might also resolve the point David makes about different advanced grading methods.
So, in those use cases, I think that it would be okay to have the question mapped against multiple standards, even if it existed in the question bank, not just in the context of the quiz. The instructor can filter all of their reports and coverage/performance criteria to only the standards that they care about.
The use cases if we map them to an individual quiz have pretty poor usability. Every time I copy a question to a new quiz, I would have to remap it. In the U.S., the publisher would have to create 50 quizzes (actually 49, as one state hasn't adopted standards). Generally speaking, the publishers in the U.S. use a company called Academic Benchmarks for aligning against state standards, and they have a meta-ID layer, which allows them to map the question against all of the state standards by mapping them against just one "meta standard". These are usually distributed through software bundled with the book (the most common is called Exam Soft), but they are localized to a particular state (as is the teacher's edition of the text book). This does remind me that I should add a use case for the import of quizzes from ExamSoft/Respondus/etc, that are mapped against outcomes. It would be good if the API for the questions also brought in the standards (we could make this an option, I suppose). I'd love to hear your thoughts on how to best do this.
I did have another concept that people are asking for on the quiz that is related to this. Several of the people who have been reviewing would like to be able to tag a quiz as a practice quiz or a drill quiz, rather than an actual assessment. Then they would like to be able to partition the performance data based on the type of quiz. (They would use some of the same questions in each, but the drills would be done earlier in the term).
I can't help thinking that your last paragraph rather makes my case for me.
Another point: Are you going to get Microsoft to change the format of .doc files, so that when a teacher uploads a .doc to the course, it automatically gets assigned to the right outcome? I didn't think so. Quiz questions are similar.
Outcome mapping is not inherently part of a learning object, although it is metadata associated with it, and associating metadata with things is still a weakness in Moodle. Moodlerooms may not want to completely solve arbitrary metadata on things as part of this development, but we could try to store the data in a way that assumes that is coming eventually.
All this should be entirely compatible with getting the Outcomes UI you want. I agree that your use case (teacher imports Examview files, builds a quiz from the questions, and has outcomes automatically set up, or not, at their choice) is well worth having.
On the subject of metadata, does IMS LOM (http://www.imsglobal.org/metadata/index.html) have an encoding for outcomes mappings? As usual IMS standards are sufficiently incomprehensible that I was not able to tell quickly.
Not sure I understand the point on the Microsoft document.
I guess that I need to add a new use case (or three) about sharing of questions that are mapped to outcomes. I've been working under the assumption that we would want users to be able to share and collaborate on questions without collaborating on the whole quiz. So, in the simplest use case (which I can add to my use case document), two secondary math teachers are wanting to give Algebra quizzes. One wants to give a quiz of 100 questions (he's a mean, mean teacher), and the other wants to give a quiz of 10 questions. They should be able to use a shared question (if they both were to have access to it).
This is the way that other LMS systems have implemented it (I know that ANGEL and Blackboard do it this way, because I wrote the spec and implemented it for ANGEL, and Blackboard basically did it the same way a couple years later). Not saying that we should do it because of that, but the workflow has been ironed out over the past 5 years in those systems.
I guess in the end, I'll do in the spec whatever the users that we focus group with the spec next week tell us to do. From a technical perspective, I don't necessarily mind how it gets implemented. In the end, you own Quiz, so I'll defer to your judgment. I just want to make sure that the end users get to weigh in before we make any conclusions.
re: IMS - I know that IMS LOM is not really being used anywhere that I know of. The key standards would be IMS Common Cartridge. The newest version of IMS CC includes an Outcome ID, which is mapped at the question level in the assessment and in the "Question Bank". I will have to look at how we implemented Common Cartridge in Moodle to see what we are exporting to the "Question Bank" item in a Common Cartridge. We may not be exporting anything at this point into that particular field. I know that we have not implemented the most recent version of the CC spec which has outcomes, so I included a use case for that. If we are exporting into the Question Bank element of a CC, we should include the Identifier there.
Unfortunately, today, the CC ONLY has an Outcome ID, so unless you have a shared set of outcomes with matching IDs, this is not terribly helpful. Of course, you could export it and import it into the same moodle, but you can just use Moodle Backups to do that. The reason that it was implemented that way is that the publishers do all agree on the same set of outcomes (in this case, it's the Academic Benchmarks outcomes, or in a few cases, it is their own internal set of outcomes), so it works for the publishers. It doesn't really work for the individual institution that wants to implement outcomes, however. I'll be at the IMS meeting in May, and I can try to dive deeper into them.
Thanks for spending time on this. I'm really enjoying the dialog. Getting this stuff hashed out before we start coding is key, and Kris and Mark are itching to get started. Glad to see that there's been a lot of interaction on this spec already!
Tim, thanks for all of your feedback on the proposed Outcomes system! As Phill has stated, it is clearly a work that is still in progress. We have lots of ideas on how we could implement certain requirements, but nothing has been fully defined enough to share yet. We won't be breaking ground on development before the technical architecture has been drafted and generally agreed upon.
Our primary objective is to create a system that can be powerful enough to meet the demands of the most requested features, while remaining flexible for developers to extend and implement in their own creative ways. It is not our intention to define a white-list of plugins that can support Outcomes, and the way(s) in which they do so. While the specification may identify a specific use case for associating outcomes to quizzes and rubrics, our technical team will need to abstract that as a way to allow outcome association at a more granular level per module and advanced grading method, using quizzes and rubrics as a specific implementation example.
Let me also try to address the questions you've raised:
1. The DB schema will be part of the technical documentation. It makes more sense to me to hold off on it until we've addressed the requirements further, but we can craft something if that would be helpful for the conversation.
2. There's a balance that must be struck between creating new plugin types vs. the overhead of maintaining and configuring them. The 2 primary areas for plugins would be the reports (piggybacking off of the existing system) and the algorithm for marking whether a student has met the outcome (aka Streaks). There may be other areas that would be useful, and they will likely come out in discussion.
3. I would agree that Streaks should be a pluggable system - for those who attended, this would be the same system that we discussed during the Outcomes session at Hackfest. We need a little more definition around Risk Metrics to determine whether it should be a pluggable system or can be addressed through configuration.
5. Obviously we'll defer to your judgement here, but couldn't we technically have both? From the feedback we have received, it's pretty critical that the outcome is associated with a question in isolation, but that could simply be used to facilitate the outcomes associated with the question in a particular quiz. I could see it being useful to allow the instructor to override that in some cases, preferably based on a capability to do so.
9. The new Outcomes system would operate independently from the Gradebook. I personally don't think it makes sense to mix Outcomes and Grades on the same report, especially since the Outcomes could be organized very differently in the new system. It's possible that the Outcome Marking plugin could create a bridge in some cases, but that still needs to be defined.
1. Yes, I get the point that DB schema comes later. But ...
I still think there is an argument for starting to develop the high-level logical data model now. What are the major entities, and how do they link to each other? For example, we know we have standards and individual outcomes, with a one-to-many relationship. Also, how do they link to existing Moodle concepts like activity modules? That is many-to-many.
Another way to put it is that some people might say that UI mockups should come later in the design process, but you have included them in the spec already, and that is really helpful for understanding what is going on. I think that some high-level entity-relationship diagrams would similarly illuminate the spec, and more importantly help validate early on that what you are specifying will be possible to implement in the back-end.
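To illustrate the kind of high-level model being asked for, here is a minimal sketch using SQLite, purely to show the one-to-many and many-to-many relationships described above. Every table and column name here is hypothetical; this is not a proposed Moodle schema.

```python
# Hypothetical entity-relationship sketch (not a real Moodle schema):
# one outcome set (standard) -> many outcomes; outcomes <-> activities
# is many-to-many via a link table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE outcome_set (           -- a 'standard'
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE outcome (               -- one standard has many outcomes
    id INTEGER PRIMARY KEY,
    setid INTEGER NOT NULL REFERENCES outcome_set(id),
    shortname TEXT NOT NULL
);
CREATE TABLE outcome_used (          -- many-to-many link to activities
    outcomeid INTEGER NOT NULL REFERENCES outcome(id),
    coursemoduleid INTEGER NOT NULL, -- an existing Moodle activity
    PRIMARY KEY (outcomeid, coursemoduleid)
);
""")

conn.execute("INSERT INTO outcome_set VALUES (1, 'Some State Standard')")
conn.executemany("INSERT INTO outcome VALUES (?, ?, ?)",
                 [(1, 1, 'A'), (2, 1, 'B')])       # one set, many outcomes
conn.executemany("INSERT INTO outcome_used VALUES (?, ?)",
                 [(1, 101), (1, 102), (2, 101)])   # many-to-many
used = conn.execute(
    "SELECT o.shortname, u.coursemoduleid FROM outcome o "
    "JOIN outcome_used u ON u.outcomeid = o.id ORDER BY 1, 2").fetchall()
```

Even a diagram at this level of detail would answer the "what are the major entities" question without prejudging the eventual technical design.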
2. First, creating new plugin types is not a big deal. pluginlib.php etc. do a lot of the work. Of course it should still only be done when really necessary, and for this spec most of the functionality fits into existing plugin types:
- Any reporting uses either report_ or gradereport_.
- 'Manage outcomes sets' could be a tool_.
- I think we have agreed that algorithms for computing whether an outcome has been met from other data should be a new plugin type. (Good luck naming that!)
- In theory, different import/export formats for outcomes could be a new plugin type, but in the first instance, we probably don't need to formalise that. If we ever find it necessary, we could move the code from inside tool_manageoutcomes to separate plugins.
- For bits of the UI, particularly student-related display, blocks might be a natural fit.
The core of the system will, of course, have to be added to Moodle core, as will the API that activities use to communicate with core.
5. Yes, looks like we are talking ourselves into a 'both' solution.
9. I am not suggesting that we mix grades and outcomes in the same UI. I am asking about the back-end. There are lots of similarities:
- There is a central store of information about what students achieved on different activities.
- For each activity, there can be multiple 'grade_items'. Those might be one or more different numerical grades, or they might be the evidence that a student has met one or more different outcomes from an outcome set.
- There is an API for activities to push the information about which 'grade_items' correspond to it, and the value for each student for each grade item, once the student has completed the activity.
- There is a pluggable system of gradereport_s so that the data in the back-end can be displayed in different ways. There are also import and export plugins for getting the data in and out.
- There is already code in the back-end to calculate new grade_items, and overall course summaries, from other grade_items.
- The gradebook and advanced grading plugins already play nicely together.
I think there is enough there to justify doing some serious thought about how much of the existing back-end can be re-used.
That does make things harder initially, certainly for you doing the design. It is much easier to design with a blank sheet of paper than to start from someone else's design where the only accurate and up-to-date documentation is the existing code. Similarly, it is harder to build on someone else's code than to write your own. Also, there are known issues with the gradebook API, but why can't we take this opportunity to fix them?
Even with all that, there may be good reasons why it is better to not build inside the gradebook back-end, but if so, I think you should be able to articulate why.
You were the one who raised "overhead of maintaining and configuring them" (quite rightly!).
Let me remind that Rubrics is just one type of advanced grading methods. While the use case is clear, the tech analyst should keep in mind that other advanced grading method plugins may (and probably will) want to interact with the Outcomes API, too.
I freely admit that much of the Moodle-speak in what I have looked at here goes straight over my head and that might be why others tend not to comment. However here are some comments from a purely educational practitioner viewpoint:
- it would be good to see a less USA-centric description to encourage development to be flexible. I realise it is easier to speak of what you know but please do not neglect the non-US market as there are many of us!
- a big part of my concerns regarding outcomes (as they are now) is the viewing of those and their grading and a better gradebook view of outcomes would help a lot (but see also 3)
- I use outcomes all the time and my use of them may reflect that of others. I do not care whether students pass an assignment/quiz/activity at all! What matters is if they hit the outcomes. Admittedly the one normally implies the other but it is a very different emphasis. It makes no difference to students (in terms of qualification) how good their work is, just whether they hit the outcomes. This viewpoint should be reflected in an ability to track outcomes rather than grades. Activities might be just marked complete/not complete (or percentage) as currently but the outcomes can be seen clearly as met or not met.
- results should be clearly visible at least course-wide so outcomes in the whole unit/module can be seen in one place. Ideally it would also be possible for a student to see outcomes across Moodle courses as well (for those of us who use a separate course for each unit/module in a qualification). I am also aware of some units/modules which are taught by more than one person and I guess some might use separate Moodle courses and yet the outcomes are only relevant as a whole. I suspect you would not want to include this in this particular project as it would presumably be best done as a separate plug-in.
- Have you considered what will happen if one outcome is met across multiple Moodle activities? Often an outcome might say something like "Show examples of dealing with clients in ..." or it might say "Design, create and test ..." and again that naturally exists as three activities or more. This is not hypothetical as I have already had to split outcomes including one into 8 separate outcomes (even though it is one official criterion). For me the answer is simple - each outcome should have a configurable number of "mets" and for it to be completed it would have to be met in that many activities. Students complete an activity and the outcome can be shown as 1/3 met or a percentage. This simplistic approach may well not work for others so a one-to-many use of sub-outcomes might be unavoidable
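The "configurable number of mets" idea in the last point could be sketched very simply: an outcome requires N demonstrations across activities, and is shown as a fraction (e.g. "1/3 met") until complete. Python here is purely illustrative, and the function name is made up.

```python
# Hypothetical sketch of an outcome requiring N 'mets' across activities.

def outcome_progress(mets_recorded, mets_required):
    """Return (fraction_complete, is_met) for an outcome needing N mets."""
    done = min(mets_recorded, mets_required)
    return done / mets_required, done >= mets_required

frac, met = outcome_progress(1, 3)   # shown to the student as '1/3 met'
full = outcome_progress(3, 3)        # outcome fully met
```

As the post notes, this simplistic counting approach may not suit everyone, which is why a one-to-many sub-outcomes model might still be needed.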
Overall this seems to me to be a potential monster so well done for tackling it!
I am aware of PHP (but not OO PHP) and databases so if I can be of any use in opinion or testing please let me know. The way Moodle works behind the scenes is something I have not looked at though so I may not be much use! Hope this helps.
Just one more point to throw in to this discussion about outcomes.
When I was still teaching, I tried a few times to use outcomes to record marks for BTEC qualifications. These work by having a number of Pass (P1, P2, P3, etc.), Merit (M1, M2, etc.) and Distinction (D1, D2, etc.) marks across as many assignments as the college wants to provide for each unit. Each mark can appear in more than one assignment (theoretically anyway; it rarely happens in practice), although each student can only gain each mark (P1, P2, M1, etc.) once for a unit.
To gain a 'pass' for a unit, the student needs all the pass marks for that unit. All the pass and merit marks are needed for a 'merit'. All the marks are required for a distinction.
I found that using outcomes for this was very unwieldy, as you had a large number of 'outcomes' in each course (especially if each course contained multiple units, in which case they ended up as U5P1, U5P2, etc. for Unit 5 P1). Each outcome also only needed a grade of 'Yes' or 'No'. There was also no sensible way to gather these grades into an overall pass/merit/distinction for a unit.
It may well be that this use case is completely outside of anything that outcomes will ever sensibly address (and maybe there needs to be some different mechanism to support such 'exotic' marking schemes - some sort of 'yes / no' criteria, rather than graded outcomes).
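The BTEC aggregation rule described above (all Pass marks gives a Pass, all Pass plus all Merit marks gives a Merit, everything gives a Distinction) is simple enough to sketch, which suggests it could live in a pluggable achievement/aggregation algorithm rather than core. Python and the mark names are purely illustrative.

```python
# Illustrative sketch of the BTEC unit-grade rule; not real Moodle code.

def btec_unit_grade(achieved, passes, merits, distinctions):
    """Return the unit grade, or None if not all pass marks are met."""
    achieved = set(achieved)
    if not set(passes) <= achieved:
        return None                      # unit not yet passed
    if not set(merits) <= achieved:
        return "Pass"
    if not set(distinctions) <= achieved:
        return "Merit"
    return "Distinction"

# Unit 5 criteria, using the U5P1-style naming from the post above.
unit5 = (["U5P1", "U5P2"], ["U5M1"], ["U5D1"])
grade = btec_unit_grade(["U5P1", "U5P2", "U5M1"], *unit5)
```

A rule like this is exactly the sort of 'yes/no criteria plus aggregation' logic that graded outcomes currently cannot express.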
That's exactly how I use it Davo except that we only have one BTEC unit per Moodle course so it is more manageable. However, it is still difficult to spot a final grade in a unit so your point is a good one.
The point you made in passing about potentially a criterion appearing in more than one assignment is also a good one and might arise in at least two possible situations and I only thought of one originally:
- when a criterion cannot be met in one assignment as I mentioned in my earlier post
- where a student might be given more than one opportunity at a criterion
The second may not normally occur but in the UK I see it in the near future. We are moving towards each student having an individual learning plan which meets their particular needs. This, in theory, would mean that a student could choose between a range of tasks to meet a criterion. Any one of those outcomes met would mean they had also met the criterion.
Hope that made sense; my current group is a bit loud!
Martin/Davo - I wanted to follow up on the conversation around BTEC support. Marcus Green created a BTEC advanced grading method (discussed here) that could be used in combination with BTEC Outcomes. In this case, the relevant criteria for BTEC would be created within the advanced grading form and mapped to the corresponding Outcome. The mapping could occur in more than one activity, which would handle when a student is given more than one attempt to satisfy a criterion. A custom BTEC recommendation plugin could assist instructors in marking the student's achievement of outcomes in the Instructor Student Completion Marking. Once met, the student would carry the outcome achievement site-wide, and the data could be used for custom BTEC reporting if the out-of-box reports didn't satisfy your requirements. The approach would only work for modules that support Advanced Grading Methods (currently only assignment), but certainly could be more useful as adoption for advanced grading increases.
Davo, the Outcomes system allows for a simple identification of Met or Not Met (yes/no) for a student against an outcome. We've had overwhelming feedback that this simplification is needed instead of the current implementation that is tied to the gradebook and based on scales. In addition, the algorithm for calculation is left completely open ended - we'll develop our own implementation using a streaks algorithm, but that's just an example of one use case. I'm not sure how you're using a Unit - is that equivalent to a course section (i.e. topic)?
If I understand you correctly, I would use a Rubric that contains a criterion associated for each of the relevant unit marks. Each criterion would be associated with a P, M and D outcome. The instructor would mark off whether the student received a P, M or D, and the algorithm would mark off each outcome as Met or Unmet depending on the algorithm. It could do this even if the Outcome was used in different assignments or even courses (depending on how Units apply).
This might give you extra information in an Outcomes report as it would show each Pass, Merit and Distinction outcome as either Met or Unmet instead of showing each outcome as Pass, Merit or Distinction. However, you could probably customize the report in some way to get the information you need.
Martin, in regards to #5, our intention is to house the algorithm for determining whether an Outcome should be marked as Met or Not Met within a plugin system, giving developers complete control over how that should be computed. The algorithm would have access to everywhere the Outcome was used, including different activities or courses.
Martin "it would be good to see a less USA-centric description to encourage development to be flexible. I realise it is easier to speak of what you know but please do not neglect the non-US market as there are many of us!"
+1. Except I do not like the word "market" and would rather use "users".
Great comment. I'll update the spec to make it more general. In our research, the rubric is by far the most commonly used Advanced Grading Method, although we also have Checklist, etc. I'll see about adding these into the spec. I suppose the best approach is to build outcome support into the Advanced Grading framework, and then builders of Advanced Grading methods could use outcomes in their methods. Does that sound right?
I've just realized that I didn't post the rest of the Instructor Specification. I will get it up by Tuesday. I was working locally on wireframes, etc. I think that many of these questions will be answered when I get the rest of it posted. Sorry for the confusion.
I've updated the Instructor Specification to finish the rest of it out. Sorry for the confusion. I've added a use case around additional Advanced Grading methods. I've done a quick mockup of rubrics, but I would need to do that for other Advanced Grading methods. Additionally, I've added an Instructor Outcomes Completion area, which allows instructors to see all of a student's interactions with that Outcome, and provides a recommendation based on the Achievement plugin. The instructor can then choose whether or not it is complete based on that data.
Good in-depth discussion. I will sit and watch from the sidelines -- I am very interested in this track.
/me back to lurking
Just a quick update, and then I'm also going to use this forum post as a TODO list for myself. We've been meeting with clients and end users about this over the past several weeks, and they've made some good additions from a functional perspective. They've also overwhelmingly confirmed the vision of removing the dependence on scales. Their suggestions will appear in my TODO list for updating the specification below. On our side, our technology team, which for this project will be headed by Kris Stokking and Mark Nielsen, has now reviewed the spec and provided me with a lot of feedback (much of which will also appear in the TODO list below).
TODO List and Key Decisions:
- Course Mapping to Outcome Sets - We had a lot of discussion, both functionally and technically, about the importance of the mapping of a course to Outcome Set(s). From a functional perspective, this mapping has two purposes. First, it simplifies the mapping of activities, rubrics, and questions by narrowing down the number of Outcome Sets that I can select from. Second, the course-level mapping defines which outcomes I want to report against. So, from a functional perspective, this is important. However, as I discussed with the tech guys, the mapping that matters most is the mapping of the outcome against the content itself. Let me give an example to clarify: let's say there is a quiz question that is used in multiple courses through the question bank. That individual question might be mapped against two outcomes. However, for my course, the only outcome that I care about is the outcome that my course is mapped against. Why does this matter?
- Backup/Restore - If I restore a course that has items mapped against outcomes, the outcomes won't really appear in my reports, etc. unless we also map the course against that outcome.
- Shared Questions (see above)
- Accidental Deletion of Mapping - If I have mapped 1000 questions against my outcomes and then accidentally remove the Outcome Mapping at the course level, it should NOT delete all of my work in mapping the individual items.
- Report for "Unmapped Activities and Questions": Great customer suggestion here. If I am in a course that is using outcomes extensively, we should build a report that shows items that are not mapped against any of my associated outcomes. This would be like the coverage report, but would be content-centric rather than outcome-centric. The assumption is that if I am tying my course to outcomes, nearly every piece of content in the course should be mapped against an outcome, so if something is not, it probably should be.
- Detail on the Reporting - We need to go one level deeper on the reporting pages and define what happens when I click on the links in the summary reporting. One of the particular questions that came up was how easy it would be to gather the artifacts of student submissions.
- Use Case Add: Export Outcomes data through API to Portfolio System (such as efolio or Mahara)
- Outcomes Summary/Workflow Block for Teachers - An idea from a client is to create a block for the course home page that gives updates on outcomes workflow. More definition needed, but wanted to document the idea.
- Define XPath options on import as only non-complex elements.
- Clarify how "Average Grade" works for quizzes where questions can have different weight.
- Define Capabilities (Create Outcomes, Import Outcomes, Map Course, Unmap Course, Map Activities, Map Rubrics, Map Questions)
- Backup Restore Specification (include Common Cartridge)
- More work on Recommendation Engine - Users loved this concept BTW. One of the biggest problems was the sheer volume of work created by outcomes. Anything we can do to make that easier helps.
- Outcome In Use - Warn before editing
- Versioning of Outcomes - We know that this is a major use case, but we can't bite off everything this time. Let's make sure not to design ourselves into a corner on this from a technical perspective.
- Can instructors see how students did on the same outcome in a different course? Is this a setting?
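On the "Average Grade" TODO item above: one common convention is to weight each question's grade by the maximum marks that question is worth in the quiz. A minimal sketch in Python, with all names hypothetical and not taken from the spec:

```python
def weighted_average(attempts):
    """Average a set of question grades, weighting each grade by the
    maximum marks the question is worth in the quiz.

    `attempts` is a list of (rawgrade, maxgrade) pairs, where each
    rawgrade is expressed out of its question's maxgrade.
    """
    total_max = sum(maxgrade for _, maxgrade in attempts)
    if total_max == 0:
        return 0.0
    return sum(rawgrade for rawgrade, _ in attempts) / total_max

# A 1-mark question answered perfectly and a 3-mark question half right:
# (1 + 1.5) / (1 + 3) = 0.625
print(weighted_average([(1, 1), (1.5, 3)]))
```

Whatever convention the spec settles on, the point is that a straight mean of per-question fractions would give the 1-mark question the same influence as the 3-mark one.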
As a Moodle Admin and ComSci teacher at an international school that is exploring a move to Standards Based Instruction and Reporting (I'm using the US parlance because I am most familiar with it - apologies to those who must translate), I am delighted beyond words that all you good people are taking on this project. I had been considering taking a stab at it myself, but I can see from the above and the specs that it is in far better hands.
I do have a couple of thoughts/questions, and I apologize ahead of time if I missed these being addressed already in the spec or above. These are primarily reporting (module/plug-in) specific, but I thought I would mention them because they may have some implications for data capture and storage design.
I saw in the specs that the intent is to provide for hierarchical Outcome structures. If I understand you correctly, this would be wonderful. One of the major limitations of the current Outcomes schema is that it is difficult to know which "level" of the standards hierarchy to track. Outcomes can be used to track the 4-6 "Strands" (in my terminology, a Strand aggregates several Standards, which themselves aggregate many Benchmarks) that would go on a report card, but from a feedback perspective (assessment for learning and not simply assessment of learning), using Outcomes to track Strands or Standards provides essentially no helpful data. To be helpful for learning, the Outcomes need to track Benchmarks (specific learning/performance targets), and at the high school level in some courses (e.g. ComSci) these are legion. In the specs it is clear that you intend to have Outcomes tied to Benchmarks, but will tying an Outcome to a Benchmark automatically tie it to the Standard and Strand of which it is a component? I am assuming so, but do want to raise the issue. Doing so would certainly help teachers in schools that continue to give grades (whether simple A, B, C..., or 1, 2, 3... with descriptors, or some other system) decide to what degree a student has met the overall Strand/Standard requirements of the course.
In the same vein, will there be a way to assign a "weight" to each instance where a question, rubric evaluation, etc. is tied to a Benchmark, each Benchmark is tied to a Standard, and each Standard is tied to an ultimate report card Strand? If a student has met a Benchmark 10 times over the course of a semester, will each success be weighted equally even though they arise from different assessment modalities (e.g. multiple choice questions, essay questions, performance assessment)? Will there be a way to weight the reporting data in terms of where an assessment item falls in the learning process (most recent performance vs early performance)? Will it be possible to aggregate performance data for Benchmarks by mean, mode, median?
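To make the aggregation question concrete, here is a rough sketch (entirely illustrative; none of these names come from the spec) of aggregating a student's benchmark scores by mean, median, or mode, plus a simple recency weighting that favors the most recent attempts:

```python
from statistics import mean, median, mode

def aggregate(scores, method="mean"):
    """Aggregate a chronological list of benchmark scores.

    'recency' weights later attempts more heavily (weights 1, 2, 3, ...),
    roughly approximating a 'last, best data' policy; the other methods
    are the plain statistics. All of this is illustrative, not from the
    spec.
    """
    if method == "mean":
        return mean(scores)
    if method == "median":
        return median(scores)
    if method == "mode":
        return mode(scores)
    if method == "recency":
        weights = range(1, len(scores) + 1)
        return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    raise ValueError(f"unknown method: {method}")

scores = [1, 2, 2, 4]  # early struggles, eventual mastery on a 1-4 scale
print(aggregate(scores, "mean"))     # 2.25
print(aggregate(scores, "recency"))  # pulled toward the final 4
```

The same score history gives quite different pictures under each policy, which is exactly why leaving the choice to a report plugin (rather than hard-coding one policy) matters.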
Again, I realize these are primarily report plug-in related questions. But, if the right data is captured, I can envision some wonderful graphical reporting tools that would make Standards Based Instruction and Reporting both productive and a pleasure.
Finally, any remote sense of what your target for release would be on this?
Cheering you all the way,
Thanks for taking the time to make such a thoughtful reply to my questions.
On the issue of aggregate reporting usefulness, I suppose the major thing would be to make sure the fields used for data capture would permit a report module to query for any or all hierarchy levels. That way report modules could be written to address different user needs. I am so pleased that you are considering these issues!
In thinking about the Recommendation System you describe, it strikes me that there may well be many situations in which streaks-based or probability-based recommendations are not appropriate for analyzing the data set. It seems to me that those analysis tools fit a situation in which computer-assisted assessment presents multiple opportunities to achieve mastery of an issue within a single testing event, or a tight series of them, using the same testing modality. They may not be as applicable to a set of responses gathered via different question modalities over the course of an entire semester or year. Just a thought.
As to the information bits to be made available to the recommendation plugin, I hope you will include date of assessment. A primary premise of standards based assessment (at least as it is described in the US) is that students are like popcorn (each kernel pops when it is ready), and so the primary question becomes what did they ultimately learn, not what is the average of their learning over the breadth of the course - often referred to as the last, best data. Again, getting the data into the properly granular fields is the issue (in terms of ultimate data mining), and I'm hoping the design will err in the direction of particularity rather than exclusion.
Thanks again for the info. No need to respond. Keep up the great work!
The Streaks and Probability recommendations are just examples that would be available for implementation. They won't be useful in every situation, and I could see plenty of reasons why an institution wouldn't want to use them at all. The main point is that we will be capturing data about the attempt to support recommendations based on streaks, probability, grading thresholds, and more should the administrator choose to configure those plugins. We'll also capture the date the attempt was made - my previous list of data points was not exhaustive.
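As an illustration of the kind of algorithm such a plugin might run, here is a minimal streak-based recommendation sketch; the function name and threshold are invented, and a real plugin would also see grades, dates, and context:

```python
def recommend_by_streak(attempts, streak_length=3):
    """Recommend 'Met' if the most recent attempts form an unbroken
    run of successes of at least `streak_length`.

    `attempts` is a chronological list of booleans (True = success).
    This is only a sketch of one possible recommendation algorithm;
    the instructor would still make the final call.
    """
    streak = 0
    for success in reversed(attempts):
        if not success:
            break
        streak += 1
    return "Met" if streak >= streak_length else "Not Met"

print(recommend_by_streak([False, True, True, True]))  # Met
print(recommend_by_streak([True, True, False, True]))  # Not Met
```

An institution that disagrees with this policy could simply install a different recommendation plugin, which is the point of keeping the algorithm out of core.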
It may even be possible for the plugin to query for additional information based on the mapped content, but admittedly we have not gotten that far. We're aligned with you in wanting to give as much useful data as possible to query against, but we need to do so in a way that is both elegant and highly-performant.
As you can see in the thread, our technical guys have made huge progress on the tech spec side, and we are getting closer to really starting the coding. During the tech review, we uncovered some possible issues, and so we have refined the functional specification with a number of changes. Some of these were due to technical constraints (reporting against Completion, rather than against activity, so as not to have performance problems on the logs), and others were due to actually working through some use cases technically (after a review of a lot of the various state standards, their XML, and the nesting requirements, we made some changes to the format of an outcome set and how they are nested). I still have a few more changes, which I'm hoping to get in before the Developer meeting tonight, but I thought I would post now, so that anyone who is going to the developer meeting tonight could have time to review before then, if they are interested.
Good to see you folks moving ahead with speed, but I must also say I am concerned about two things I see in the Outcomes Technical Specification: 1) Outcomes no longer appearing in the gradebook, and 2) Outcomes no longer supporting scales. Perhaps I don't understand where this is going, but it seems likely to me that these two changes will make the Outcomes module virtually unusable for K-12 schools in the U.S.
The move is on in a very, very big way toward Standards Based Reporting and Assessment in the U.S., but the usual approach is to assess and report student achievement on a scale - typically something like: 1 Not Present; 2 Area of Concern; 3 Progressing; 4 Mastered. I don't see how a pass/fail approach to data collection can support this model, nor how pass/fail can be used to meaningfully report student performance on essays or other performance based assessments. Also, in the end the teacher's assessment of a student's performance on such a scale (perhaps by looking at the mean, mode, median, most recent or a combination of these) IS the grade in the gradebook. I'm not seeing how outcome tracking can be separated from the gradebook.
I am attaching a short article by Thomas Guskey (one of the leaders in SBG in the U.S.) which provides a basic example of how Standards Based Reporting and Assessment is being implemented in U.S. schools.
I realize that standards or criteria based grading is being implemented differently around the world, but would hope the new Outcomes module will not exclude K-12 teachers and schools in the U.S.
All the best,
Then we are in agreement.
I think you are confusing outcome marking with progress. We are not trying to answer at which point the student ultimately knows the criteria, just that they have. The overwhelming feedback we have received (including US K-12 institutions) is to remove the dependency on scales and the display of duplicated outcomes in the gradebook in order to reduce the confusion they cause. We are not going to force the instructor to assess each student for each graded activity for each related outcome (for each attempt!) as is currently being done with Outcomes. That simply inundates the gradebook with a ton of data that is only marginally useful to monitor progress, when that data could be collected separately.
The instructor will know whether the student has achieved an outcome, so we give them the ability to mark that achievement and give them the applicable data to make the assessment. Using the definitions from your article, the "Outcome Attempt" would translate to the Product criteria, which the instructor could access from the marking screens. They would also have access to resource views against supplementary materials or attempts against ungraded activities, which could be used as Process criteria. Finally, we have recommendation plugins that can analyze the student's attempts and make an informed recommendation about his or her progress towards the outcome. And at the end of the course, each student is marked as having Met or Not Met an outcome, which they will carry with them across the site.
So, I think that we are all shooting for the same goal. I'll add to Kris's comment to describe a little bit more of the background of this, and ways to accomplish what you are trying to accomplish.
First of all, the reason that we went the way that we did was that, while you could do grading as you describe in the old outcomes system, it had some real limitations, specifically around tying outcomes to quiz questions. There are many times where a quiz question ties tightly to one outcome, and whether or not the student gets that question (or a group of like-mapped questions) correct is a very solid indicator of competence. We have to balance this with the more subjective nature of other types of grading, such as assignments, where a paper could cover more than one outcome, and the grading is more subjective. For this, we are doing tight integration with rubrics (advanced grading). So, in the example that you mentioned, you could create a rubric that was mapped against outcomes and had the scale that you mentioned (1 Not Present; 2 Area of Concern; 3 Progressing; 4 Mastered). Upon grading that activity, you would have the data that you need. This is different from a quiz question, where it is a much more binary condition. We want to make sure to support both paths.
Additionally, the ability to do coverage reporting, as well as "Completion" reporting, where you can see whether or not your students have read or participated in the background materials tied to specific Outcomes, should be a huge win for remediation of possible issues.
As to whether or not to display them in the gradebook, I think there are arguments both ways, and I'd love to hear more dialog on it. Currently, the Outcomes link in the Settings area takes you into the Gradebook, and you can also reach Outcomes through the gradebook itself. We could potentially leave both paths in place. I know that Kris and Mark were looking at the technical hooks needed and whether we should keep both.
By the way, I've made a bunch of updates to the spec over the weekend. Take a look. I'm going to make a separate post with notes about the changes and a link to the change log. Thanks for being so active in this thread.
If you'd like to have a more one-on-one discussion with me about some of these issues, let me know.
We've made a couple of notable improvements to the technical design which bear mention.
1. Instead of mapping the outcome/content association to a course, we map it directly to an activity. We realized we were making it harder on ourselves by giving the system the flexibility to map to anything in the course. While that might be nice in theory, in actuality a user's performance will be tied to Activity Completion and Grades, concepts that are fundamental to activities but not other plugins/objects. This works out quite nicely since questions and advanced grading criteria both relate to an activity (when implemented), and we get a major bonus of identifying the content item's relevant context within a course - it's no longer Question A in Course B. It's Question A, part of the Final Exam (or Practice Exam) in Course B. The Functional Specs are currently being updated to reflect this.
2. Outcome attempts now capture additional data points for mingrade, maxgrade, and rawgrade. This isn't a major change, we are simply updating the spec to capture what we said we would to make it more useful for instructors and recommendation plugins.
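With mingrade, maxgrade, and rawgrade captured per attempt, a recommendation plugin could normalize grades to a comparable 0-1 fraction regardless of the grading scale in use. A small sketch (the three field names are from the spec; the function itself is hypothetical):

```python
def grade_fraction(rawgrade, mingrade, maxgrade):
    """Normalize a raw grade to a 0..1 fraction of its grade range,
    so attempts graded on different scales can be compared."""
    if maxgrade == mingrade:
        raise ValueError("degenerate grade range")
    return (rawgrade - mingrade) / (maxgrade - mingrade)

print(grade_fraction(7.5, 0, 10))  # 0.75
print(grade_fraction(3, 1, 5))     # 0.5
```

Storing all three values per attempt (rather than just a pre-computed fraction) keeps the raw data available for plugins that want to do something smarter.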
You can see the specific changes made by clicking on the following link: http://docs.moodle.org/dev/index.php?title=Outcomes_Technical_Specification&action=historysubmit&diff=37983&oldid=37930
Hi Kris and Phil,
Sorry about the tardiness of my response. I do want to thank you for taking time to explain the philosophical and actual approaches being pursued in this new Outcomes module. I continue to be very excited about what a tremendous contribution it will make to student learning.
I've spent a lot of time considering the proposed data structure and didn't want to reply until I'd had a chance to really think through its implications. Although, in the end, I still think many schools will want to report student progress and final standard mastery on a scale rather than a simple yes or no basis, the granularity with which you are capturing performance data is what is really important. If the data is there (and God bless you for making it so), then those who want to scale it for reporting can do so via a plugin. Great work. This is really exciting.
This granularity of data capture will also allow for the creation of plugins that report student progress with a degree of specificity that will truly provide assessment for learning (what teachers and students need to guide student progress in real time) as well as final assessment of learning (what parents and universities care about). I can't wait to start using it (and probably writing a plugin or two).
Keep up the great work!
Kris asked my thoughts about web services.
During the dev meeting I initially misunderstood that people would want to distribute/publish outcomes (either with MNet or with the Moodle hub repository). That's why I suggested implementing web services. From the specs it's an import/export feature for admins, so no web services are needed for that.
However it still would be nice to have some web service functions for two reasons:
* it shows when your lib.php API went wrong. When you think as a developer of a teacher/student app or admin tool, you sometimes find out that your API functions do either too much or not enough. Issues related to parameters, capabilities, return values, overuse of Moodle forms, etc.: the web service functions will expose the problems of your API. I believe you'll save quite some maintenance time and some future development time if you write web services from the start.
* currently we are pushing for web services to make Moodle more open.
If you have some time to create some web services, I wrote documentation on how to contribute a web service to core. I'll be happy to answer any specific questions you have regarding web services.
The technical documentation has been updated with some clarifications and refinements. The main highlights though are that the tables have been renamed for clarity and an ERD like diagram has been added to show the relations between the new outcome tables and core tables.
I went through the wiki and this thread again, with a focus on the technical specification. I have a couple of comments and questions, in random order (as they came to my mind).
Firstly, please rename pages like "Capabilities and Roles", "Migration and Technical Issues", "Specifically Excluded Use Cases" and similar in a way that makes it clear they are related to this project. Using the "Outcomes Specification/" prefix (including the trailing slash) is what I would do. Without it, they just pollute the dev wiki and may confuse developers looking for documentation (imagine a developer looking for dev docs on the capabilities and roles system in Moodle, for example).
I don't like the userid column in the outcome_sets table. If it is meant to be a foreign key to the user table, it can't have 'default 0' imho. If the value is to be optional (as I understood the description of the field), just keep it declared as a foreign key with the NULL value allowed. I can't see any reason for 'default 0' (especially for foreign keys). The same may apply to other columns.
I would like to hear more about your concept of the outcome_areas table. 'Areas' are used in several subsystems in Moodle core, such as the Files API or advanced grading methods. They represent a sort of unified system for locating and addressing places in Moodle that other components can hook into (as in: a file is attached to a post, an image is embedded in a text field, an advanced grading form is used for assessing submissions in the Assignment module, etc.). Experience has proved that what works is what I call "the holy four" (I wanted to call it the "fantastic four" but some puny film studios trademarked it).
In short, it is good to use the combination of contextid + componentname + areaname + optional itemid to address hookable places in Moodle. So if there is a file attached to the student's submission in the Workshop module, we can easily associate the file with the context (contextid of the workshop module instance), componentname ("mod_workshop"), area ("submission_attachment") and itemid (the submissionid). Similarly, the rubric form used to assess submissions in the assignment module is hooked to an area identified by context (context of the assignment module instance), component ("mod_assign") and area ("submissions").
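To picture how these four values key hookable places, here is a toy registry in Python (all ids and payloads invented for illustration):

```python
# A toy registry keyed by the "holy four": contextid, component,
# area name, and optional itemid. All ids are made up.
areas = {}

def add_item(contextid, component, area, itemid, payload):
    areas[(contextid, component, area, itemid)] = payload

# File attached to submission 42 in a Workshop module instance:
add_item(contextid=130, component="mod_workshop",
         area="submission_attachment", itemid=42, payload="report.pdf")

# Rubric form hooked to the submissions area of an Assignment instance
# (no per-item id needed, so itemid is None):
add_item(contextid=131, component="mod_assign",
         area="submissions", itemid=None, payload="rubric definition")

print(areas[(130, "mod_workshop", "submission_attachment", 42)])
```

The four-part key uniquely locates the hooked item without the owning component and the hooking component needing to know anything about each other's tables.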
Looking at your ERD, your tables outcome_areas and outcome_used_areas do not fit my mental model well. For one thing, I'm missing the contextid there. Could you please provide an example/illustration of these tables filled with some data and explain what the data mean? Perhaps some pseudo-SQL queries?
WRT Phill's answer above, I have not found any detailed information on how you plan to integrate Outcomes with advanced grading methods. The spec just claims that there will be an API for that. But what will that API look like, at least roughly?
What I would expect is that every advanced grading form is able to be associated with zero, one or many outcomes. When a form is reused in other activity or shared as a template, these mappings are preserved by default unless they are explicitly removed (note that advanced grading forms are copied for each individual activity, that is different from how Questions are implemented).
How would such association be stored in the database?
First, thank you so much for taking the time to read over the wiki docs. I know it takes some time to digest it all, so it is very much appreciated.
DM1 - OK, we can do that.
DM2 - Sounds good.
DM3 - That's fine, null will be used instead of zero. Please see change here. Let me know if I missed any.
DM4 - I was also trying to put a contextid into the outcome_areas table, but it wasn't making sense for the associations that we are making. E.g. what contextid would you use for a question (remember, questions exist at the site or course level)? Same question for a grading form. If you can think of a way to include it that would be beneficial, I'm very much interested.
Example data for a grading form would be component=gradingform_rubric, area=criteria, itemid=gradingform_rubric_criteria.id, and this would represent an outcome (or outcomes) being associated with a rubric criterion. Once the grading form was associated with an activity, a record would be added to outcome_used_areas with the cmid of the activity in question and the id of our record from outcome_areas. The outcome_used_areas table is important for coverage reports, and because some content (like questions) can be used in multiple activities and we want to be able to distinguish between those uses.
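To illustrate that DM4 answer with concrete rows (all ids invented, and the columns heavily simplified compared to whatever the real schema ends up being), here is a toy SQLite version of the two tables plus the join a coverage report might run:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Simplified sketches of the proposed tables; real columns may differ.
    CREATE TABLE outcome_areas (
        id INTEGER PRIMARY KEY,
        component TEXT,    -- e.g. gradingform_rubric
        area TEXT,         -- e.g. criteria
        itemid INTEGER     -- e.g. gradingform_rubric_criteria.id
    );
    CREATE TABLE outcome_used_areas (
        id INTEGER PRIMARY KEY,
        outcomeareaid INTEGER REFERENCES outcome_areas(id),
        cmid INTEGER       -- the course module using the content
    );
""")

# One rubric criterion mapped to an outcome...
con.execute("INSERT INTO outcome_areas VALUES (1, 'gradingform_rubric', 'criteria', 7)")
# ...used in two different activities (course modules 55 and 89):
con.execute("INSERT INTO outcome_used_areas VALUES (1, 1, 55)")
con.execute("INSERT INTO outcome_used_areas VALUES (2, 1, 89)")

# Which activities use this mapped criterion?
rows = con.execute("""
    SELECT ua.cmid
      FROM outcome_areas a
      JOIN outcome_used_areas ua ON ua.outcomeareaid = a.id
     WHERE a.component = 'gradingform_rubric' AND a.itemid = 7
     ORDER BY ua.cmid
""").fetchall()
print(rows)  # [(55,), (89,)]
```

The split between the two tables is what lets a single mapped item (a question, a rubric criterion) show up once per activity that uses it.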
DM5 - I don't yet know how the API will look, but the idea is, it'll be designed to accomplish the goals I have laid out and more. I assure you it will be flexible and re-usable and it will be developed out in the open for all to review and critique.
What you are expecting is exactly what I was thinking. Since grading forms are loosely defined, meaning the plugin can do whatever it likes, the integration of outcomes would not be automatic. So, it is up to the grading form to determine how it would like to use outcomes. For example, what we would like to do for rubric is to be able to associate outcomes at a per-criterion level. Another grading form may want to do it via another mechanism, or just allow a single association to the grading form as a whole. Please see my response to DM4 for example data. If we are using a grading form template, then the outcome associations would be copied with the rest of the grading form.
DM4 - every question clearly belongs to a context: question.category -> question_categories.id; question_categories.contextid -> context.id.
So, there is an obvious value to add to outcome_areas.contextid for questions. That is useful metadata. It is not necessary for anything else in your spec, but I still think it should be added. (It may be useful for future reporting or maintenance things that you have not thought about yet and won't be implemented in the first version.)
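The question context chain described above can be shown with a toy query (tables stripped down to just the relevant columns, all ids invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Minimal slices of the real Moodle tables; only the columns
    -- involved in the question -> context chain are kept.
    CREATE TABLE context (id INTEGER PRIMARY KEY, path TEXT);
    CREATE TABLE question_categories (
        id INTEGER PRIMARY KEY,
        contextid INTEGER REFERENCES context(id)
    );
    CREATE TABLE question (
        id INTEGER PRIMARY KEY,
        category INTEGER REFERENCES question_categories(id)
    );
""")
con.execute("INSERT INTO context VALUES (131, '/1/20/131')")
con.execute("INSERT INTO question_categories VALUES (9, 131)")
con.execute("INSERT INTO question VALUES (501, 9)")

# question.category -> question_categories.id,
# question_categories.contextid -> context.id
contextid = con.execute("""
    SELECT qc.contextid
      FROM question q
      JOIN question_categories qc ON qc.id = q.category
     WHERE q.id = 501
""").fetchone()[0]
print(contextid)  # 131
```

So even though outcome_areas itself has no contextid column, a question's context is always recoverable through its category.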
We haven't been using Moodle Outcomes, but we've been trying to implement the Remote Learner version of Learning Outcomes in ELIS, and not been able to meet our needs that way, so I started looking for alternatives this week.
I've just come across this discussion, so I apologize for the late remarks. Hopefully it's not too late to give some input. At this point, after reviewing the Outcomes 2 specs and conversations, I'm generally optimistic, but I do have a couple of questions/comments/suggestions:
- Will it be possible to unify Grades, Outcome results, and Badge status somehow? I.e., I design my activities so that success (a passing grade) on the activity indicates that the learner has met the Outcome. When I grade an activity, I don't want to have to go in and manually set an Outcome as well; and if I ask other faculty at our institution to do it (e.g. using the current field on the Rubric form), it will never get done. I recognize that some users need to keep these two values separate, but will we have the option of unifying them? If so, I think we will be able to tie Badges in by setting their criteria based on the Grade of an activity... or maybe the Badge criteria should be based on the status of an Outcome?
- Will it be possible to tie multiple activities into a single Outcome with aggregation options? Or will it be necessary to create Sub-Outcomes and have the status roll up from there?
- This might be an Advanced Grading plugin question, but what about aggregating Grade input from more than one grader? These could be co-teachers or student peers.
- Tying low-level Outcomes to specific quiz questions makes sense, though our outcomes (at a college) are usually less granular. It might be helpful to allow the instructor or administrator to set the Outcome by Category of questions, for large test banks where the questions have already been sorted into different Categories based on the Outcome.
- By tying Quiz Questions to individual Outcomes, a single Activity can contribute to more than one Outcome. What about Activities other than Quizzes? Can a Rubric Criterion be tied to a specific Outcome?
- Currently Ratings can feed into Grades of some activities. This is helpful as a possible method of peer review, though not very specific. Will Ratings tie into Outcomes at all? The specific example I am thinking of is the Forum. We emphasize discussion in most of our online courses. A single Forum might span multiple Outcomes. Currently the only way to grade within the Forum (without creating a separate Grade Item in the Gradebook) is to use Ratings. (Sadly, this means we can't also use Ratings to let students recognize helpful posts.) Can we configure a Forum so that we can indicate student completion of an Outcome while reading posts? What about partial completion?
Thanks for any info!
Wow, lots of great questions in here. First, let me say that I appreciate the feedback. Development is progressing quite nicely, but we still have a ways to go. Let me hit some of the points that you make, and try to answer questions. Some of these questions are already addressed in the spec, so if you haven't given it a read yet, please check it out.
1. Outcomes are new, as are badges, so we haven't integrated them yet, but there is a clear connection here. On the grade side, this is actually possible with the design that we are pursuing, depending on how you grade. For example, if you have an assignment with a rubric tied to it, that rubric could be tied to outcomes (as criteria). So, when you grade it via the rubric, both outcome achievement and grades themselves are actually updated. Right now, there is no design for how the aggregate of outcomes could translate directly to a grade without going through an activity, although this is an interesting idea.
2. So, multiple activities can be tied to the same Outcome, and the results are aggregated into reports, both at the course level, and (eventually) at the Administrator or Assessment Coordinator level.
3. No plans right now for multiple graders, although we are looking at the grading workflow with some of the core developers right now. Stay tuned.
4. Interesting idea. We haven't gone down this path. I agree with your comment about University criteria being less granular. However, the Outcome Sets are a hierarchy, which means that there could be high level objectives, and then you could create more granular objectives that lead up to the higher level objectives.
5. Rubrics can include Outcomes, and the measurement rolls up into the reports.
6. Great idea. We don't have this now, but we are looking at how you grade forums with Core, and there might be progress here soon.
Hope that this all makes sense!
Thanks for the quick response, Phill!
Having Grades based on Outcomes is certainly one way to go... having Outcome completion based on the Grades of certain activities would be another, don't you think? But the overall point is to only have to grade an assignment once. Whether an automatically scored quiz, an essay or project graded with a rubric, or a discussion grade based on ratings, it will be important to us to have a tight binding between Grades and Outcomes so our faculty and learners (and accreditors) can easily see that Grades are a reflection of meeting Outcomes.
Is there a discussion about grading workflow and the grading of forums that I should be following?
(Edited to add: is the forum grading work you are referring to the same as what is described here? https://moodle.org/plugins/view.php?plugin=mod_hsuforum)
The workflow is designed such that grades flow into outcomes, which could theoretically then flow into badges at a later date. We are currently developing the screens to allow instructors to manually mark off outcomes based on grade and completion data for the related outcome "attempts" (e.g. a mark in a rubric, a grade for a quiz question, whether the student has viewed the supplementary material, etc.) We'll be progressively enhancing on this approach by building in support for "recommendations" which will allow plugins to make recommendations to the instructor on whether the student has met an Outcome based on some algorithm (e.g. streaks, probability, average grade, a custom algorithm for your institution's grading workflow, etc.) We think the instructor should have the final say in determining whether the student has met the outcome, but it would definitely be possible to automate the recommendation in a later phase or as a separate plugin altogether to make it easier on instructors that are only looking to grade once.
I've been asked to look into the situation with Outcomes again, both for faculty professional development and for student reporting and program accreditation. How is the Outcomes 2 development progressing? Now that Badges are in 2.5, hopefully there has been more thought about integrating Outcomes with Badges.
Based on what has been written so far, I look forward to support for many-to-many relationships between Activities and Outcomes, though I'm still a little worried about how this will work for Activities that don't currently support Advanced Grading options.
Here are some other thoughts:
1 - Could there be a way to automatically display the Outcomes (or summary Outcome Sets) addressed by all Activities within a Course Section? If not, could we at least have the option to display the Outcome within the course section under each Activity?
2 - Suppose we have a list of Outcomes and a list of Activities, and we have not specifically defined the relationship between them, and we'd like individual students to suggest this relationship. For example, suppose I have an Outcome of "consistently uses correct writing mechanics such as spelling, capitalization, and punctuation." The course has a dozen essay assignments. How about letting students identify up to three (or another defined number) of the Activities that they feel meet the Outcome? Granted, a more sophisticated way of solving this would be to use an e-portfolio system with integrated assignments, but that would probably also be more difficult for students to use.
3 - Bulk course create and remove tools are now in core as of Moodle 2.6 (MDL-13114). Will specifying an Outcome Set for each course in the CSV be supported when Outcomes 2 is integrated into Core? Or will we need to rely on making a copy of some canonical template course that has the Outcomes defined?
4 - The current Outcomes Import system is very difficult to use. Although the documentation says it wants a "comma separated values" file, in fact the separator must be a semicolon, and a single non-ASCII character will halt processing of the file. There is no way to manipulate Outcomes in bulk once they are created (e.g. modifying the Scale defined), and there is no report that I can find that displays the Description of an Outcome, which makes me wonder why the field is even present. I hope the new system will be easier to use.
5 - It seems that Advanced Forums will not be replacing Forums in Moodle Core any time soon. Hopefully there will be revisions to Moodle Core Forums that will provide better grading features (see MDL-1626).
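As a footnote to point 4 above: until the importer is fixed, the safest workaround is to generate the file with the semicolon separator it actually requires rather than hand-editing it. A small sketch (the column names are illustrative placeholders, not the exact header the Moodle importer expects):

```python
import csv
import io

# Build a semicolon-separated outcomes file, keeping values ASCII-safe,
# to match the importer quirks described in point 4. Column names here
# are made up for the example.
rows = [
    {"outcome_name": "Writing mechanics",
     "outcome_shortname": "writing-mech",
     "scale_name": "VALUE levels"},
    {"outcome_name": "Perspective taking",
     "outcome_shortname": "perspective",
     "scale_name": "VALUE levels"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]), delimiter=";")
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Writing the file programmatically also makes it easy to regenerate everything when a Scale changes, which partly works around the lack of bulk editing.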
I've just finished a MOOC on badges. My project involved creating a badge system for Liberal Arts college outcomes. I based my outcomes list on the "VALUE Rubrics" developed by the AAC&U (the Association of American Colleges and Universities). Sorry for the US-specific focus-- if someone would like to suggest a more universal document, I'm all ears, but I think this will work well enough as an example for the discussion at hand.
In the AAC&U VALUE Rubrics, there are 15 topic areas, each with 5-6 criteria. Example:
Topic Area: Global Learning
- Global Self-Awareness
- Perspective Taking
- Cultural Diversity
- Personal and Social Responsibility
- Understanding Global Systems
- Applying Knowledge to Contemporary Global Cultures
For each criterion, there are four levels: "Benchmark (1)", "Milestone (2)", "Milestone (3)", and "Capstone (4)".
My badge system implementation awards a badge for the Capstone level of each of the criteria. I have "Constellation" badges defined for each of the categories, meaning that if a student completes all the outcome criteria in a category, they get a larger badge for the whole category.
I also created cross-category "cognitive competency" badges. For example, many of the competencies defined in the various categories involved learning to incorporate multiple perspectives. If a student achieves several of these competencies, we would award a "Perspective" badge. In many ways, this is the real value of the system, showing a prospective employer (for example) the cognitive skills gained by liberal arts students.
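The three badge tiers described above can be sketched as simple rules over a student's achieved levels. This is purely illustrative: the topic areas, tags, and function names below are invented for the example, and a real implementation would live in Moodle's badge criteria system.

```python
# Illustrative model of the badge rules: a criterion badge is earned at
# Capstone (level 4); a "Constellation" badge when every criterion in a
# topic area is at Capstone; a cross-category badge when enough tagged
# criteria are met. All data here is a made-up subset of the VALUE rubrics.

CAPSTONE = 4

topic_areas = {
    "Global Learning": ["Global Self-Awareness", "Perspective Taking"],
    "Problem Solving": ["Propose Solutions/Hypotheses"],
}
# Criteria tagged as contributing to the cross-category "Perspective" badge.
perspective_tagged = {"Perspective Taking", "Propose Solutions/Hypotheses"}

def badges_for(levels, needed_for_perspective=2):
    """`levels` maps criterion name -> highest level achieved (1..4).
    Returns the set of badges the student has earned."""
    earned = {c for c, lvl in levels.items() if lvl >= CAPSTONE}
    badges = set(earned)  # one badge per Capstone-level criterion
    for area, criteria in topic_areas.items():
        if all(c in earned for c in criteria):
            badges.add(f"Constellation: {area}")
    if len(earned & perspective_tagged) >= needed_for_perspective:
        badges.add("Perspective")
    return badges
```

For example, a student at Capstone in all three criteria above would earn both Constellation badges plus the cross-category "Perspective" badge, which is the layering the post describes.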
To make this system work, we need a master list of all the competencies as Outcomes, and we need to be able to assign sets of these competencies (criteria) to all sections of a given catalog course. So, for example, every section of our World History class that is created in the system needs to automatically include the Outcomes expected to be achieved in that course. There are also some "secondary" or "opportunity" Outcomes that might be present in many courses, e.g. "propose Solutions/Hypotheses" from the "Problem Solving" category. We would need to be able to designate a source of a set of Outcomes for inclusion when each course is created, or even after course creation. So Outcomes and sets of Outcomes need to be addressed in the bulk course creation and editing features included in Moodle 2.6: http://docs.moodle.org/26/en/Upload_courses
We also need to be able to set criteria for what constitutes meeting an Outcome. As has been mentioned earlier, I don't think an all-or-nothing binary status is really sufficient. The Scales tools in Moodle are cumbersome and using them with the existing Outcomes is difficult, but that doesn't mean that some kind of Scales aren't needed to work with Outcomes. Otherwise, I end up having to create four times as many Outcomes and track them all.
As a use case, this illustrates what we will need to be able to do with Outcomes. Once the Outcomes are defined at the institutional level and attached to each instance of a given catalog course, the instructor attaches specific assignments to Outcomes (or Outcomes to Assignments, whichever), so reporting can show that all the Outcomes have been allowed for in the course designs before the start of the term. When an Outcome is connected with an Assignment, the rubric attached to that Outcome needs to also be attached to the assignment, so that when a faculty member grades the assignment, they do so in accordance with the definition of that Outcome.
For any given student, we (staff, faculty, the student) should be able to easily see which Outcomes have been expected of a student, and the level to which each has been met. For Outcomes achieved at the Capstone level, an Open Badge needs to be issued.
But it might be the case that a specific Outcome requires at least 3 examples of student work to reach the Capstone level. It could even be required that the work cited has to come from different courses or subject areas. So multiple instructors could easily be involved-- but it is important that they don't need to know what work a student has done in another context in order to be able to give credit in their own contexts.
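The aggregation rule above can be stated quite compactly: each grader records evidence locally, and a central count decides when the Capstone threshold is reached, optionally requiring distinct courses. A hedged sketch (names and thresholds are illustrative only):

```python
from collections import namedtuple

# Each instructor records evidence independently; no one needs to see
# what a student did in another course. The central check just counts.
Evidence = namedtuple("Evidence", "outcome course level")

def capstone_reached(evidence, outcome, min_items=3, distinct_courses=False):
    """True once at least `min_items` pieces of Capstone-level (4)
    evidence exist for `outcome`; if `distinct_courses` is set, they
    must come from that many different courses."""
    items = [e for e in evidence if e.outcome == outcome and e.level >= 4]
    if distinct_courses:
        return len({e.course for e in items}) >= min_items
    return len(items) >= min_items
```

For instance, three Capstone-level essays in one course would satisfy the default rule but not a "three different courses" variant, which captures the cross-subject requirement described above.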
It feels like this is a lot of text for an idea that might better be explained with a diagram. Should I link or upload a file that will show more of what we need to be able to do?
I was disappointed to note that New Outcomes didn't make it into Moodle 2.7. Based on MDL-40230, there hasn't been much development or discussion since January 2014. We would really like to be able to use this feature. The existing Outcomes don't integrate well with Rubrics or Badges.
But I wonder if it would help to change the name to Competencies to avoid confusion with the previous Outcomes?
I, too, am disappointed that there's not more to work with regarding Outcomes, Competencies, Badges, or whatever we end up calling the system of how we keep track of who's doing what. What can I do to help?
I am sad to get to the bottom of this thread and find that we don't have an implementation. I see that Moodlerooms is using a variant of this, and I was really, really hoping to find something I could send to my admin as a request. I need something that supports Standards-Based Grading, and every work-around I've tried in Moodle is coming up short. This looks like it would solve the issues I'm having.
I'm glad it's being discussed, but where's the progress?
Damyon Wiese from Moodle HQ gave me the following answer yesterday by email about whether or not this feature will be in Moodle 2.9:
The short (and not very helpful) answer is that we have not yet decided exactly what we will do with outcomes for 2.9. As soon as we do we will post about it in the dev forums and the future major features forums, so I would just watch one of those places for updates.
So no progress right now, but if it moves forward, this is the right place to hear about it.
If you need a tool right now in Moodle 2.7, 2.8 or 2.9, look at the Skills repository plugin here:
Skills repository ("referentiel") is a Moodle module for skill certification.
- Teachers specify a Competency Framework (skills repository) or import it
- Teachers create Skills repository activities in courses (it's a Moodle module)
- Students declare activities linked with competencies
- Teachers follow students' declarations, comment, and evaluate competencies with a binary (default) or a multivalued scale
- Teachers propose tasks (e.g. a mission, a deadline, a list of competencies to prove, documentation, etc.)
- Teachers export and print certificates
- If your site enables Outcomes (also known as Competencies, Goals, Standards or Criteria), you can now export a list of Outcomes from the referentiel module and then grade things using that scale (forum, database, assignments, etc.) throughout the site. These evaluations will then be integrated in the Referentiel instance of the course.