Outcomes/Competencies/Goals/Metadata

by Martin Dougiamas -
Number of replies: 20
Basically this is the plan: http://docs.moodle.org/en/Metadata

Almost everyone I've tried this idea out on has been very enthusiastic. I'd like to work on it ASAP. Some potential funding for it fell through, but I'm still following up on options.


In reply to Martin Dougiamas

Re: Moodle and assessment

by Peter Campbell -
Hi, Martin. I really like this plan. I'd love to help out on some of the instructional design and functionality requirements. Here are a couple of thoughts.

The Metadata doc says:

"When students complete a given course, the outcomes for that course are recorded against the user's name. This allows: reporting by student of which outcomes they have completed . . ."

In my mind, it would need to be more than a matter of saying that a student has met or completed an outcome. There would need to be a way to drill down into these reports to look at two things: (1) the actual work the student did that was tagged with the outcome tags and (2) the assessment that was used to determine how well the student met the outcome.

I think one of the greatest challenges is to demonstrate the extent to which the student did or did not meet the outcome. It's likely that there is evidence in the course that shows the student did or did not do so. So portfolios/e-portfolios fill this function nicely, i.e., they allow students to demonstrate competency or mastery of course outcomes by linking an outcome statement with an artifact from the portfolio. "Here's proof I have met this outcome."

But how does someone -- the teacher, a future employer, an accreditor -- evaluate the merits of this claim? Has the student met the outcome or not?

So underneath this Metadata structure would need to be both a portfolio tool as well as a rubric-building tool. In the end, administrators or evaluators could run these top-level reports and query the database by asking something like, "Show me all the students who have met Outcome X." Once the results list appeared, the user could drill down and see the portfolio samples that were tagged with this outcome. Next to each portfolio artifact would be a link to the rubric (or some other form of assessment) that was used to assess the artifact. The user could then see actual evidence that the outcome was met and how well it was met.
In reply to Peter Campbell

Re: Moodle and assessment

by Ger Tielemans -

An artefact in an educational portfolio has only meaning if it is in the company of:

  1. a classification of the competences which are met,
  2. a summary describing the context in which it was created, and
  3. a set of criteria (a rubrics scheme?) and how well these were met.

Metadata could deliver this classification, the summary and even the criteria set; the results against these criteria must come from the Moodle modules.

In reply to Peter Campbell

Re: Moodle and assessment

by Martin Dougiamas -
Thanks for the thoughts.

I see the metadata/outcomes system, the grades system, and the portfolio system as being separate subsystems, but thank you for bringing up the relationships, as that should be discussed.

Firstly, the actual student-submitted material and *grades* will still be there, attached to the activities, and that's what provides the information about what/how specific outcomes were addressed. The metadata/outcomes/competencies just 'tags' the activities, but of course you can use this relationship to access the underlying grades/submissions/actions (depending on what that activity supports).

eg, an assignment tagged with a typical outcome statement like "Use analytical thinking to draw reasonable conclusions from observations" will have a grade attached to it.

Right now activities only have a single grade, but it makes sense that the next step for grading should allow multiple grades per submission: one per outcome (assuming the teacher in question is USING outcomes, because not everyone will be).

I would really like to see some clear consensus on how all these separate "rubric" grades can be managed. Are they boiled down into a single grade per submission via a formula? Or should all these "component" grades end up in the gradebook for manipulation/combination at that level? Should the final grades for a course also be expressed in outcomes or not?

And yes, portfolios can use this information too.

Please feel free to help refine the specs in the wiki and flesh it out!

http://docs.moodle.org/en/Development:Outcomes

http://docs.moodle.org/en/Development:Grades

In reply to Martin Dougiamas

Re: Moodle and assessment

by Peter Campbell -
Martin D. wrote:
"Right now activities only have a single grade, but it makes sense that the next step for grading should allow multiple grades per submission: one per outcome (assuming the teacher in question is USING outcomes, because not everyone will be)."

I like this idea a lot. The challenge as I see it is to form a meaningful relationship between the grade given for each outcome and the student activity. For example, if I see the outcome "Use analytical thinking to draw reasonable conclusions from observations," and I see the number 87 next to it, I'm not really sure what that means. Of course, I can click on the activity -- an assignment, a journal entry, whatever -- and read it, but then I'm still left wondering, "What is the relationship between this activity, the number 87, and the outcome statement?" Presumably the number 87 means the student scored 87 out of a possible 100 points. But what makes it an 87 response and not a 77? Or a 37? In other words, the criteria for assessment and the means by which the score of 87 was arrived at are invisible. Thus the need for rubrics. The rubrics make the assessment criteria visible. They also make the range of performance on an activity visible. So, looking at the rubric, I can see why the student got an 87.

So that closes the loop between the number 87 and the activity. But what's still open is the relationship between the activity and the outcomes statement.

Here's what I'd like to see:

For each activity, the student writes a brief description of how and why the activity meets the outcome(s). The student is making an argument on his/her behalf, i.e., "This journal entry meets these outcomes in this way:" The teacher then reads this argument and accepts or rejects it. I imagine the teacher using something like a Likert scale for this, e.g., in response to the statement, "The student has met the learning outcome," the teacher would choose one of the following: strongly agree, agree, not applicable, disagree, strongly disagree.

This would then close the gap completely: I'd know why the student got an 87, what an 87 meant, whether or not the student had met the outcome, and how well the student had met the outcome.

Caveat: this is not perfect!!! But it's a heck of a lot better than simply stamping an A or 73 on an activity. Doing so tells us nothing. Attaching more information to a grade helps us understand what the grade actually means. It doesn't tell us everything, but it's better than nothing.

As for your other question:

I would really like to see some clear consensus on how all these separate "rubric" grades can be managed. Are they boiled down into a single grade per submission via a formula? Or should all these "component" grades end up in the gradebook for manipulation/combination at that level? Should the final grades for a course also be expressed in outcomes or not?

Boiling down is certainly easiest, i.e., add up all the numbers for each criterion and then come up with a total score. For example, if there are 5 criteria in the rubric, and each one is worth 5 points, then the maximum number of points for the activity is 25. So a student could get a 3 on one criterion, a 4 on another, and then a 2 and a 5 and a 3 and come up with a total score of 17 out of 25. That 17 out of 25 could be weighted more heavily than other grades in the gradebook, so that would allow you to adjust the way this grade affected the overall score. I like the idea of "component" grades being expressed as criteria and as outcomes for the course, but I think it would be difficult to make sense of them all.
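The boiling-down arithmetic above is easy to sketch in code. Here is a minimal illustration (the function name is mine; this is not actual Moodle code):

```python
# Boil per-criterion rubric scores down to a single activity total.
CRITERION_MAX = 5  # each rubric criterion is worth 5 points, as in the example


def rubric_total(scores):
    """Return (points earned, points possible) for one activity."""
    return sum(scores), CRITERION_MAX * len(scores)


# The example scores 3, 4, 2, 5 and 3 give 17 out of 25.
print(rubric_total([3, 4, 2, 5, 3]))  # (17, 25)
```

Weighting that 17/25 against other gradebook items would then be a separate step at the gradebook level.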

However, if you like the plan I sketched above, then you could look at all the outcomes of the course and see how well the student met them. One way to aggregate this would be to look at the teacher's responses for each Likert scale they used to assess the outcomes statement for each student. Each "strongly agree" would be worth 2 points and each "agree" would be worth 1 point. So you could get a total aggregate numerical score for outcomes achievement. Obviously students who got a "disagree," "strongly disagree," or "not applicable" would not gain any points.
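The Likert-to-points aggregation I'm proposing could look something like this (a hypothetical sketch using the point values suggested above; nothing like this exists in Moodle):

```python
# Map each of the teacher's Likert responses to points: "strongly agree"
# is worth 2, "agree" is worth 1, everything else earns nothing.
LIKERT_POINTS = {
    "strongly agree": 2,
    "agree": 1,
    "not applicable": 0,
    "disagree": 0,
    "strongly disagree": 0,
}


def outcomes_score(responses):
    """Total the teacher's outcome judgements for one student."""
    return sum(LIKERT_POINTS[r] for r in responses)


print(outcomes_score(["strongly agree", "agree", "strongly agree"]))  # 5
```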

Does this sound too mechanistic?
In reply to Peter Campbell

Re: Moodle and assessment

by A. T. Wyatt -
IMHO, boiling down to a single grade is not the answer. We also need reports on how well students (and by extension whole classes) did on single objectives. In my situation, we don't use this type of assessment as a grade for the student as much as we use it to improve teaching and learning. A grade might indeed be comprised of a number of these more targeted assessments, but I need to be able to aggregate and disaggregate them in a variety of ways in order to mine the data properly.

This whole thing is quite hard to wrap your mind around, but we did start a conversation on it a long while ago. . .
http://moodle.org/mod/forum/discuss.php?d=27348
http://moodle.org/mod/forum/discuss.php?d=27468#129729

atw
In reply to A. T. Wyatt

Re: Moodle and assessment

by Peter Campbell -
I think you need to boil the rubrics down into a single grade if you want to or need to produce numerical data about student performance. Some schools and universities don't have to do this, and I personally favor keeping assessment and grades separate. But most schools and universities -- hell, most students -- want a grade, a single grade. In boiling down the rubrics to a single grade, we can at least make that grade more meaningful. In other words, the grade is actually based on something that the student knows about and, hopefully, understands.

As far as measuring the attainment of learning outcomes, we need to go beyond simply saying "met" or "not met." In proposing a Likert scale, I'm trying to find a way to offer more meaningful information about attainment of outcomes while, at the same time, keeping the process relatively simple. We could come up with more sophisticated ways to determine the attainment of outcomes, but my concern is that teachers would not engage in such a process if it was overly complex and/or overly time-consuming.

The other problem with what I proposed earlier, I now realize, is that leaving it to one person to say whether a student attained a certain learning outcome or not is not terribly reliable, even if that person was using a Likert scale. It's still the opinion of one person, and there's no clear indication of why the teacher doing the assessment "strongly agrees" or merely "agrees" that the student has attained the outcome. So we're back to the problem of being able to show criteria for what "meeting the outcome" means.

Here's a thought: what if the outcomes appeared in more than one course? In this scenario, more than one teacher would say whether a student had met an outcome or not. It might be possible that the student had not met the outcome in Course A with Professor Jones, but had met it in Course B with Professor Smith. Looking at the highest level of abstraction at the total learning experience of the student, either through data mining or through artifacts in a portfolio, we could aggregate these different data points and say something like, "In the 7 courses where this outcome was part of the course, this student met the outcome 5 times." Each school or university or academic institution could determine what "meeting the outcome" means, e.g., the student has to meet the outcome in 70% of their classes where the outcome appears as part of the course.
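The institution-defined rule above (met in at least 70% of the courses where the outcome appears) is simple to state in code. A hypothetical sketch, not anything in Moodle:

```python
# Decide whether a student has "met" an outcome across many courses,
# given one met/not-met judgement per course carrying the outcome.


def outcome_met(judgements, threshold=0.70):
    """judgements: list of True/False; threshold set by the institution."""
    if not judgements:
        return False
    return sum(judgements) / len(judgements) >= threshold


# "In the 7 courses where this outcome was part of the course,
#  this student met the outcome 5 times." -> 5/7, about 71%, so met at 70%.
print(outcome_met([True] * 5 + [False] * 2))  # True
```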

Peter
In reply to Peter Campbell

Re: Moodle and assessment

by Martin Dougiamas -
Peter, I'm really liking this direction.

Using a standard Likert scale against every outcome that is tied to an activity (or course) seems to be complex enough to allow a wide range of uses while being simple enough for most people to understand fairly intuitively.

The question is: does everyone agree?

Not everyone is going to want to grade 100 students on five scales (500 menus). How do the people who want to type 99.4348 as a total grade for the assignment cope with this?

Should we be forcing/encouraging them to use this mechanism because it's good teaching practice (dammit)? Or do we provide the choice when they set up the activity? And if there's a choice, how do we integrate different results in the gradebook?

I want to be really clear on all this before we start coding it in 1.9, because it's going to affect so much of what people use Moodle for.

Perhaps we can start collecting good examples presented as "user stories" (with realistic data) in the Moodle Docs:

http://docs.moodle.org/en/Development:Outcomes_examples


BTW, I'm going to MOVE this discussion to the Gradebook forum shortly, so anyone interested in this should come there!
In reply to Martin Dougiamas

Re: Moodle and assessment

by A. T. Wyatt -
I would say that it has to be optional. It requires a lot of effort and discussion to come up with the scales, and some organizations will be ready to implement this while others will NOT be. I would think it better to have the choice when you set up the activity. That said, people will want to change that setting later (say, the next semester when they restore the class).

I agree that good assessment practice requires more than one rater to contribute data and that you might assess the same standard in a number of different classes. You could even get REALLY complicated and assess knowing that there are some classes where the material is introduced, some where it is developed, and some where it is mastered. Or you could restrict the assessment to "mastery" classes, not collecting intermediate data.

I guess I just can't see grading and assessment as a single activity, although I realize that the same activities and the teacher evaluation process contribute to both. With assessment, you have to follow individual threads of data (from individual students, to courses, to schools) that get lost if you factor them all into a single grade. Teachers also give grades that don't have anything to do with your assessment objectives (like attendance, or participation).

Don't forget that the quiz tool also factors into this. Not all assessment requires a rubric; many use traditional objective tests.

Example of a typical situation: You might have an objective that requires all students to demonstrate that they can accurately calculate conversions from one unit of measure to another at least 80% of the time.

You set up a Moodle quiz with questions from 3 chapters of the math book, and 6 questions have to do with conversions. You make those 6 questions available to all math teachers in the grade, and they are all embedded into the unit test for every student in the grade.

You will need to pull the data on how all the students in all the classes did on those 6 questions.

The grade for the test covers much more than the 6 questions; there was even a bonus question that had NOTHING to do with math. And you need to report the data for each student, across multiple teachers, and for the entire grade-level math program.
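For what it's worth, the cross-class pull described above could be sketched like this (the question ids and data structures are all assumptions for illustration; a real Moodle report would query the database directly):

```python
# Report the 80% conversions objective across every teacher's class:
# look only at the six shared conversion questions, ignoring the rest
# of each unit test (including the non-math bonus question).

CONVERSION_QUESTIONS = {"q3", "q7", "q9", "q12", "q15", "q18"}  # hypothetical ids


def conversion_rate(responses):
    """responses: {question_id: answered correctly?} for one student's test."""
    relevant = [ok for qid, ok in responses.items() if qid in CONVERSION_QUESTIONS]
    return sum(relevant) / len(relevant)


def students_meeting_objective(all_responses, threshold=0.80):
    """all_responses: {student: responses} pooled across all classes."""
    return [s for s, r in all_responses.items() if conversion_rate(r) >= threshold]


unit_tests = {
    "ana": {"q3": True, "q7": True, "q9": True, "q12": True,
            "q15": True, "q18": False, "bonus": True},   # 5/6 conversions
    "ben": {"q3": True, "q7": False, "q9": True, "q12": False,
            "q15": True, "q18": True, "bonus": True},    # 4/6 conversions
}
print(students_meeting_objective(unit_tests))  # ['ana']
```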

Are you thinking of things that support this kind of effort, which is admittedly very detailed but seems to be very typical at all levels of academic institutions? Or something else altogether?

A lot depends on the level of objective to which you tie the activity. Lesson objectives are usually much more detailed than course objectives, which are more detailed than department/program objectives, which are more detailed than institutional objectives. Moodle would not necessarily need to support every level, but perhaps focus on the course objectives. It would then be the responsibility of the institution to pull out the rest of the data from the database.

I hope that there is some way for multiple teachers to be able to apply the same rubrics/activities. If we could share them between courses, then the database entries would be consistent. That will be important for the person who has to pull this data out and aggregate and report for the higher levels.

atw

In reply to Martin Dougiamas

Re: Moodle and assessment

by Sean Keesler -
I was directed over here from Peter Campbell after I made a post on this thread yesterday:

http://moodle.org/mod/forum/discuss.php?d=62979

Guys, I would love to see a collaboration between Moodle and Sakai here. The Goal Aware project is right up your alley.

http://bugs.sakaiproject.org/confluence/display/GM/Goal+Management+Tools

Interoperability and standards baked in from the start will really do a lot to make both platforms better. At the very least, I'd be interested in sharing what I have learned from Sakai and OSP with you. What say you? Would a demo of the idea help? Cookies?

In reply to Martin Dougiamas

Re: Moodle and assessment

by Bernard Boucher -
Hi all,
I am very happy that this feature will be more supported by Moodle. As some said, it must be optional and fully configurable at the activity, course, curriculum (many courses) and even at the Moodle Network level if needed.

Maybe a suggestion for the reporting part: instead of starting from scratch, is it possible to start with Agata or a similar tool and fully integrate it into Moodle, with some starter templates like:
  1. reporting by student of which outcomes they have completed
  2. reporting for students about which outcomes they are yet to achieve
  3. smart course selection (find me a course to teach me "xxxxx")
  4. reporting for admins about all students on the system and where they are up to.
  5. ...
The reporting tool would be usable not only with metadata but also with any Moodle tables, for any kind of report.


I hope it may help,

Bernard



In reply to Bernard Boucher

Re: Moodle and assessment

by Doug Hajek -
Hi,

I'll take a stab at a "user story" which I am working on now:

We run British HE qualifications called Higher National Diplomas, awarded by Edexcel / BTEC. The qualification has 16 units (classes) and each unit has defined Outcomes/Assessment criteria - see the Outcomes attachment for an example.

There are almost always 4 outcomes in these units, and each outcome has 3-7 assessment criteria.

For a student to "pass" a unit, he must produce evidence for ALL assessment criteria. The teacher is responsible for creating a number of assignments (we recommend 3), each of which will cover a subset of the total assessment criteria for the unit. All together, the assignments for the unit will (must!) provide the opportunity to cover all assessment criteria -- and depending on the unit and assignment structure, some assessment criteria may be covered multiple times.

Each assignment is graded for the assessment criteria on a Met/Not Met basis. The results are then tracked in a rubric - see the second attachment for a rough idea, where the number in the matrix refers to the assignment number where the evidence was produced.

Students who miss some criteria on a particular assignment may be given a "referral", or opportunity to resubmit. Or the unit may be structured in a way that the assessment criteria appear in a later assignment as well. This is up to the teacher, based on his assignment design.

If a student meets all PASS assessment criteria, then the work is graded for MERIT and DISTINCTION.

In these Edexcel programs, there are 3 broad Merit criteria and 3 broad Distinction criteria (hence the shorthand M1, M2, M3 and D1, D2, D3). However, unlike the pass criteria, which are set in the course specification, the M/D grading requires the teacher to rewrite the broad indicators in the context of the particular assignment. The requirements for M and D grading criteria are made clear to the student on each assignment brief.

Like the Pass criteria, the M/D criteria are then graded in a rubric on a Met/Not Met basis, with reference to the assignment number where the evidence is produced. Also, there may be multiple possibilities to get a particular M/D in the various assignments.

The student's final grade for the unit is a MERIT only if all pass criteria are met, plus M1, M2 and M3.

The student's final grade for the unit is a DISTINCTION only if the MERIT requirements plus D1, D2 and D3 are met.
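Those grading rules can be sketched roughly as follows (a hypothetical illustration, not Edexcel or Moodle code; the REFERRAL result follows the resubmission idea mentioned earlier):

```python
# Compute the unit grade from the rubric: every pass criterion must be
# met; MERIT additionally needs M1-M3; DISTINCTION needs MERIT plus D1-D3.


def unit_grade(pass_criteria, merit, distinction):
    """Each argument maps a criterion name to whether it was met."""
    if not all(pass_criteria.values()):
        return "REFERRAL"  # at least one pass criterion unmet: resubmit
    if all(merit.values()) and all(distinction.values()):
        return "DISTINCTION"
    if all(merit.values()):
        return "MERIT"
    return "PASS"


grade = unit_grade(
    {"P1": True, "P2": True, "P3": True},
    {"M1": True, "M2": True, "M3": True},
    {"D1": False, "D2": True, "D3": True},
)
print(grade)  # MERIT (all pass and M criteria met, but D1 missed)
```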

In an ideal solution: wink

  1. the Outcomes/pass assessment criteria would be imported to a particular Moodle course
  2. the teacher would create the assignment in Moodle, attach the Outcomes and Assessment criteria that are covered in that assignment as a subset from above, and attach and contextualise the M/D criteria
  3. students would submit their work, which could be either one or multiple files, links or descriptions of physical work with a digital archive, thereby creating a rudimentary portfolio
  4. teachers would grade according to the rubric, and either send work back for re-submission or post to the "gradebook" - in this case the matrix of Pass and M/D assessment criteria.
  5. Feedback should be made by the teacher for the overall assignment and for each Pass Merit and Distinction criteria on that particular assignment
  6. At the end of the class, a final grade of P/M/D could be either extracted automatically or manually by the teacher, and posted to a transcript area either in Moodle or an external SIS.
Comments and thought:

- numeric grades are irrelevant and confusing in this system, so best if they could be turned on/off

- the roadmap spec for competencies should meet our needs, although a slight reworking of the assignment module would be ideal for us

- a portfolio system that locked down the evidence for each assignment submission would also be highly desirable

- Edexcel / BTEC curriculum is widely used in the UK and around the world. From what I have seen of similar HE competency tracking in US systems, with the addition of numeric grading for each assessment criteria (0-4 GPA or %), it should be easily used there too.

We would be very happy to work further on this. We have several people who can help with the code, but no one has had the time, unfortunately, to become very knowledgeable about Moodle modules and structures.

In reply to Doug Hajek

Re: Moodle and assessment

by Chardelle Busch -
Just a couple of quick thoughts:

Metadata -- would it be possible to also use a weighting scheme for competencies? E.g., based on a role assigned to the student: Competency A has a weighting of 5 for a Manager, but a weighting of 3 for a clerical person. Or based on the importance level of that competency in a certain context/course.
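A role-based weighting like that could be represented very simply (the role names and weight values are just the example figures above; nothing like this exists in Moodle):

```python
# Scale a raw competency score by a role-specific weighting, so the same
# competency counts more heavily for some roles than for others.

ROLE_WEIGHTS = {
    "Manager": {"Competency A": 5},
    "clerical": {"Competency A": 3},
}


def weighted_score(role, competency, raw_score):
    """Multiply the raw score by the weighting for this role and competency."""
    return raw_score * ROLE_WEIGHTS[role][competency]


print(weighted_score("Manager", "Competency A", 4))   # 20
print(weighted_score("clerical", "Competency A", 4))  # 12
```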

Assessment -- likert scales --- YES, and how about multiple raters? E.g. self-rating, teacher-rating, peer-rating, etc.
In reply to Chardelle Busch

Re: Moodle and assessment

by Rick Barnes -
We are just starting to prepare for a new UK course, OCR National, which is also a competency-based course very similar to the BTEC, so this style of grading would be very useful.
The comment about different parallel assessments would be very powerful if linked to a portfolio of some sort.
Pupils upload work and fill in details saying which criteria that piece of work meets.
Peer assessment: a second pupil is allocated work to assess, and the grades and feedback are available to the first pupil and to the teacher. Unlike a workshop, this would need to be allocated on a rolling basis.
Teacher assesses the work using the same criteria and can provide feedback to the pupil on their done/not done grade, and on the reasons for disagreeing with the pupil's scores.
Internal verification/standardization takes place in the same way, with feedback going to the initial teacher/assessor and not to the pupil. Again, some automatic allocation (set a % to have a second assessment) would mean that this process could be carried out as the course progresses, not at the end of the unit.

We also assess pupils to national curriculum levels (numerical scores 1 to 8, but also split levels: 2c, 2b, 2a, 3c, 3b, 3a ... 8c, 8b, 8a), and these are an issue in the gradebook, which is biased toward numerical data. These could be assessed in a similar way if staff and pupils created a list of criteria to be met for each level.
In reply to Rick Barnes

Re: Moodle and assessment

by Chardelle Busch -
I've had a couple more thoughts on this. Just some ideas. My thoughts are that I would like to see a new competency assessment module rather than try to fit competency assessment to each activity already in Moodle.

Example:

Create competencies with titles, descriptions, weightings based on student's role (e.g. managers have a weighting of 6, clerks have a weighting of 3). Have the ability to categorize them into groups (e.g. Customer Service competencies). Then, in a course, add a competency assessment similar to the way we add a quiz.
  1. Choose the competency or groups of competencies we will be assessing (I would love it if we could group questions, add these to an assessment as a "scale" and then get individual "scale scores").
  2. Choose/create competency assessment questions for the assessment the same way we create and choose quiz questions now. These questions could go into the question db we already have in place. This may create the need to design more types of questions, e.g. multi-rater questions, better Likert-style questions (e.g. get rid of the a. b. c. and get rid of the "partially correct"), a question type that requires an attached document, or maybe a question type that can be linked to an activity that is in the course -- e.g. link it to an assignment -- and the grade for the assignment automatically gets added to the competency assessment (with a weighting?), etc.
  3. Choose whether the assessment is self-rated, multi-rated, or teacher-rated (in fact, if it is teacher-rated, it might not even be visible to the student).
  4. The outcome of the assessment goes to the student's e-portfolio or whatever, with a weighting based on data from the student's profile/role (obviously "role" isn't the correct term here, since it means something else in Moodle) for each competency that was assessed.

In reply to Chardelle Busch

Re: Moodle and assessment

by Rick Barnes -
I like the quiz style idea, one question/item for each competency. Most of our pupils' proof will be in the form of a file, for example a spreadsheet containing validation or a word-processed document showing proof of the same. They are all pass/fail, so there would not need to be a link to the assessment for individual files in our case; linking to a portfolio would be our preferred option, otherwise we could end up grading work twice and giving feedback in 2 places.

Pupils could create a link to the file containing the evidence (which could be linked to more than one competency and could be from an assignment or e-portfolio), and they could have a space to add an explanation where necessary to direct the assessor to the correct part of the file (validation settings in cells, etc.).

The assessor would need to view the students' comments and links for each competency and have space to record their assessment and comment. This could look like the assignment submissions page, with competencies listed with links, self-assessment and explanations visible, and then columns for subsequent assessments and feedback. You could even sort the submissions by the evidence links to speed up marking: open the file once and check all the related competencies before moving on to the next piece of evidence.
In reply to Peter Campbell

Re: Moodle and assessment

by Paul Garrett -

Peter, just found this great discussion.

Your statement   “Here's a thought: what if the outcomes appeared in more than one course? In this scenario, more than one teacher would say whether a student had met an outcome or not” is right on target.  I suspect that this is not attempted by many schools simply because there is no tool to get it accomplished.  That is our situation.  While the assessments of these outcomes will contribute to a grade in a course, they should still be a part of the overall assessment of a student across many courses.

Because of this, whether we distill these assessments into a single grade or not, they need to be maintained at an atomic level so that summarization can be done outside of the course context.  In my case, I want to also combine with demographic data from my student management system, and look at outcomes in a course longitudinally.  If you think about this in terms of a data warehousing concept, assessments of outcomes in the course are considered as operational data, and are used to summarize with other assignments for a grade for the course.  Then, the scores are taken out of the context of the course (course should still be a data element) so that aggregation can take place by student, by degree, by school, etc.

Obviously, there must be a common rubric across all courses in which the outcome was assessed, and that rubric must follow from one context to another.

Paul

In reply to Martin Dougiamas

Re: Outcomes/Competencies/Goals/Metadata

by Brian King -
I came upon this post by searching for "metadata" in the forums. I apologize if this is a bit off-topic, but it is related to metadata.

As I noted in this post, we are working on a project in which we have a need to attach metadata to activities and resources. We don't have any need to use this metadata for student assessment, but rather as an aid for teachers to find relevant resources for them to use.

We have funding to do this, and would of course be happy if this was integrated into the standard Moodle code.

Perhaps search-metadata and assessment-metadata could be combined into one system; we're open to working together to find a good solution.

In reply to Brian King

Re: Outcomes/Competencies/Goals/Metadata

by Mike Churchward -
Hi Brian -

We're close to releasing into the wild a system that lets you tag metadata to any activity. The system is robust enough to extend the elements that can be tagged to almost anything. On its own, it does nothing, but we've used it to set things like expected times on activities, levels of study, etc., so that other reporting/display mechanisms can use it.

mike
In reply to Mike Churchward

Re: Outcomes/Competencies/Goals/Metadata

by Brian King -
Hi Mike,

sounds interesting ... would it be possible to get a sneak preview?
In reply to Martin Dougiamas

Re: Outcomes/Competencies/Goals/Metadata

by Colin McQueen -
Great that the Moodle community is working towards this. Just a thought on the creation of metadata tagging of activities/courses. I believe that in the UK, organisations like exam boards, Becta and QCA (the latter manage the quality control, classification and QAN numbering system for qualifications) may not only produce downloadable lists (vocabularies?) of learning standards but also provide these in the form of web services. Could the input of the lists be supported by the web services layer of Moodle that I believe is progressing?