Future major features

 
 
Phill Miller
Re: MoodleRoom's proposed Outcomes changes
 

Folks, 

Just a quick update, and then I'm also going to use this forum post as a TODO list for myself.  We've been meeting with clients and end users about this over the past several weeks, and they've made some good adds from a functional perspective.  They've also overwhelmingly confirmed the vision of removing the dependence on scales.  Their suggestions will appear in my TODO list for updating the specification below.  On our side, our technology team, which for this project will be headed by Kris Stokking and Mark Nielsen, has now reviewed the spec, and provided me with a lot of feedback (much of which will also appear in the TODO list below).

TODO List and Key Decisions: 

  • Course Mapping to Outcome Sets - We had a lot of discussion, both functionally and technically, about the importance of mapping a course to Outcome Set(s).  From a functional perspective, this mapping has two purposes.  First, it simplifies the mapping of activities, rubrics, and questions by narrowing the number of Outcome Sets that I can select from.  Second, the course-level mapping defines which outcomes I want to report against.  So, from a functional perspective, this is important.  However, as I discussed with the tech guys, the mapping that matters most is the mapping of the outcome against the content itself.  Let me give an example to clarify: let's say that there is a quiz question that is used in multiple courses through the question bank.  That individual question might be mapped against two outcomes.  However, for my course, the only outcome that I care about is the outcome that my course is mapped against.  Why does this matter? 
    • Backup/Restore - If I restore a course that has items mapped against outcomes, the outcomes won't really appear in my reports, etc. unless we also map the course against that outcome. 
    • Shared Questions (see above) 
    • Accidental Deletion of Mapping - If I have mapped 1000 questions against my outcomes and then accidentally remove the Outcome Mapping at the course level, it should NOT delete all of my work in mapping the individual items.  
  • Report for "Unmapped Activities and Questions": Great customer suggestion here.  If I am in a course that is using outcomes extensively, we should build a report that shows items that are not mapped against any of my associated outcomes.  This would be like the coverage report, but content centric rather than outcome centric.  The assumption is that if I am tying my course to outcomes, nearly every piece of content in the course should be mapped against an outcome, so if something is not, it probably should be.  
  • Detail on the Reporting - We need to go one level deeper on the reporting pages and define what happens when I click on the links in the summary reporting.  One of the particular questions that came up was how easy it would be to gather the artifacts of student submissions.
  • Use Case Add: Export Outcomes data through the API to a Portfolio System (such as efolio or Mahara). 
  • Outcomes Summary/Workflow Block for Teachers - An idea from a client is to create a block for the course home page that gives updates on outcomes workflow.  More definition needed, but wanted to document the idea.  
  • Define XPath options on import as only non-complex elements. 
  • Clarify how "Average Grade" works for quizzes where questions can have different weight.  
  • Define Capabilities (Create Outcomes, Import Outcomes, Map Course, Unmap Course, Map Activities, Map Rubrics, Map Questions) 
  • Backup Restore Specification (include Common Cartridge) 
  • More work on Recommendation Engine - Users loved this concept BTW.  One of the biggest problems was the sheer volume of work created by outcomes.  Anything we can do to make that easier helps.  
  • Outcome In Use - Warn before editing 
  • Versioning of Outcomes - We know that this is a major use case, but we can't bite off everything this time.  Let's make sure not to design ourselves into a corner on this from a technical perspective.  
  • Can instructors see how students did on the same outcome in a different course?  Is this a setting?  
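To make the course-mapping decision above concrete, here is a minimal sketch of the key design point: course-level and item-level mappings are stored independently, so unmapping a course must not cascade-delete item mappings. This is illustrative Python with invented names, not Moodle (PHP) code, and not the actual schema.

```python
# Illustrative sketch (NOT Moodle code): course-to-Outcome-Set and
# item-to-Outcome mappings live in separate stores, so removing the
# course-level mapping leaves the item-level work intact.

class OutcomeMappings:
    def __init__(self):
        self.course_sets = {}    # course_id -> set of outcome_set ids
        self.item_outcomes = {}  # (item_type, item_id) -> set of outcome ids

    def map_course(self, course_id, outcome_set_id):
        self.course_sets.setdefault(course_id, set()).add(outcome_set_id)

    def unmap_course(self, course_id, outcome_set_id):
        # Only the course-level association is removed; the item-level
        # mappings survive and become relevant again if the set is re-mapped.
        self.course_sets.get(course_id, set()).discard(outcome_set_id)

    def map_item(self, item_type, item_id, outcome_id):
        self.item_outcomes.setdefault((item_type, item_id), set()).add(outcome_id)


mappings = OutcomeMappings()
mappings.map_course("HIST101", "state_standards_2013")
mappings.map_item("question", 42, "outcome_arrays")
mappings.unmap_course("HIST101", "state_standards_2013")
# The question's mapping work is still there:
assert ("question", 42) in mappings.item_outcomes
```

The same separation is what makes backup/restore behave sensibly: restored item mappings sit dormant until the course is mapped against the relevant Outcome Set.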

Cheers - 

Phill

Doug Loomer
Re: MoodleRoom's proposed Outcomes changes
 

As a Moodle Admin and ComSci teacher at an international school that is exploring a move to Standards Based Instruction and Reporting (I'm using the US parlance because I am most familiar with it - apologies to those who must translate), I am delighted beyond words that all you good people are taking on this project.  I had been considering taking a stab at it myself, but I can see from the above and the specs that it is in far better hands.

I do have a couple of thoughts/questions, and I apologize ahead of time if I missed these being addressed already in the spec or above.  These are primarily reporting (module/plug-in) specific, but I thought I would mention them because they may have some implications for data capture and storage design.

I saw in the specs that the intent is to provide for hierarchical Outcome structures.  If I understand you correctly, this would be wonderful.  One of the major limitations of the current Outcomes schema is that it is difficult to know which "level" of the standards hierarchy to track.  Outcomes can be used to track the 4-6 "Strands" (in my terminology a Strand aggregates several Standards, which themselves aggregate many Benchmarks) that would go on a report card, but from a feedback perspective (assessment for learning and not simply assessment of learning), tracking Strands or Standards provides essentially no helpful data.  To be helpful for learning, the Outcomes need to track Benchmarks (specific learning/performance targets), and at the high school level in some courses (e.g. ComSci) these are legion.  In the specs it is clear that you intend to have Outcomes tied to Benchmarks, but will tying an Outcome to a Benchmark automatically tie it to the Standard and Strand of which it is a component?  I am assuming so, but do want to raise the issue.  Doing so would certainly help teachers in schools that continue to give grades (whether simple A, B, C..., or 1, 2, 3... with descriptors, or some other system) decide to what degree a student has met the overall Strand/Standard requirements of the course.
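The roll-up question above can be sketched very simply: if each node in the hierarchy keeps a pointer to its parent, then tying an item to a Benchmark implicitly ties it to every enclosing Standard and Strand by walking up the chain. This is a hypothetical illustration with invented names, not the proposed schema.

```python
# Hypothetical parent-pointer hierarchy: Benchmark -> Standard -> Strand.
# Mapping to a Benchmark implies every ancestor via a simple upward walk.
parents = {
    "benchmark_loops": "standard_programming",
    "standard_programming": "strand_comsci",
    "strand_comsci": None,  # top of the hierarchy
}

def ancestors(node):
    """Yield each enclosing level above the given node, bottom-up."""
    while (node := parents.get(node)) is not None:
        yield node

# Tying a question to the Benchmark covers the whole chain:
print(list(ancestors("benchmark_loops")))
# -> ['standard_programming', 'strand_comsci']
```

With this shape, a report-card view only needs the upward walk at query time; no duplicate mappings have to be stored at the Standard or Strand level.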

In the same vein, will there be a way to assign a "weight" to each instance where a question, rubric evaluation, etc. is tied to a Benchmark, each Benchmark is tied to a Standard, and each Standard is tied to an ultimate report card Strand?  If a student has met a Benchmark 10 times over the course of a semester, will each success be weighted equally even though they arise from different assessment modalities (e.g. multiple choice questions, essay questions, performance assessment)?  Will there be a way to weight the reporting data in terms of where an assessment item falls in the learning process (most recent performance vs. early performance)?  Will it be possible to aggregate performance data for Benchmarks by mean, mode, or median?
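The aggregation question above is easy to make concrete. Here is a toy sketch (invented data and field names, not any proposed Moodle API) of the same Benchmark assessed several times, where each attempt carries a weight by modality, compared against plain median and mode:

```python
from statistics import median, mode

# One Benchmark, assessed three times via different modalities,
# each attempt carrying an illustrative weight.
attempts = [
    {"score": 1.0, "weight": 1.0},  # multiple choice question
    {"score": 0.0, "weight": 2.0},  # essay question
    {"score": 1.0, "weight": 3.0},  # performance assessment
]

def weighted_mean(attempts):
    total = sum(a["weight"] for a in attempts)
    return sum(a["score"] * a["weight"] for a in attempts) / total

scores = [a["score"] for a in attempts]
print(round(weighted_mean(attempts), 3))  # 0.667
print(median(scores), mode(scores))       # 1.0 1.0
```

Note how the three aggregates disagree even on this tiny example, which is exactly why the choice of aggregation (and whether it is pluggable) matters for report design.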

Again, I realize these are primarily report plug-in related questions.  But, if the right data is captured, I can envision some wonderful graphical reporting tools that would make Standards Based Instruction and Reporting both productive and a pleasure.

Finally, any remote sense of what your target for release would be on this?

Cheering you all the way,

Doug

Kris Stokking
Re: MoodleRoom's proposed Outcomes changes
 
Hi Doug - thanks for your feedback on Outcomes.  I think you raise some very valid points about reporting, and we actually met about this as a group on Friday.  You are correct that Outcomes are to be organized into nested Outcome Sets, which you define as Benchmarks and Standards/Strands respectively.  The student will only be marked as having met the Outcome, not the Outcome Set.  However, that information can be aggregated into useful reports - one such example is the My Outcomes report which shows progress of achieved Outcomes against a related Outcome Set.  The question we need to solve is which level should be used in order to make them useful.
 
For example, if we had the following Outcome Set: Computer Science Standards -> CS Standards for 2013 -> Basic Concepts -> Data Structures
 
That contained the following Outcomes: Arrays, Hashes, Trees, etc.
 
At what point does the aggregate reporting become useful?  Too high up the hierarchy, and the student would likely never achieve 100%.  Too low, and it may not be useful enough.  In most cases, it will probably make the most sense as the last Outcome Set in the hierarchy (i.e. Data Structures), although there may be cases in which it would be dependent on how the Outcome Sets are arranged.  In addition, we have performance and design considerations - making it too flexible could frustrate end users (e.g., if they need to navigate to the correct "level" in the Outcome Sets each time they run a report), and we would not be able to cache the data effectively as we would need to do so for every given scenario.  Simply put, there's more design work needed here and it's on the forefront of our discussions on Outcomes.
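The "which level" trade-off can be shown with a toy roll-up. In this sketch (invented names, purely illustrative), progress against a leaf Outcome Set is the fraction of its Outcomes achieved, and higher levels average their children, so the same two achievements read very differently depending on where you aggregate:

```python
# Toy illustration of aggregate reporting at different hierarchy levels.
# Members of a set may be Outcomes (leaves) or other Outcome Sets.
outcome_sets = {
    "Data Structures": ["Arrays", "Hashes", "Trees"],
    "Basic Concepts": ["Data Structures", "Control Flow"],
    "Control Flow": ["Loops", "Conditionals"],
}
achieved = {"Arrays", "Hashes"}  # the student's achieved Outcomes

def progress(name):
    members = outcome_sets.get(name)
    if members is None:  # leaf: an individual Outcome
        return 1.0 if name in achieved else 0.0
    return sum(progress(m) for m in members) / len(members)

print(round(progress("Data Structures"), 2))  # 0.67 -- useful feedback
print(round(progress("Basic Concepts"), 2))   # 0.33 -- diluted by siblings
```

The further up the hierarchy the report aggregates, the more the number is diluted by unrelated siblings, which matches the point above about students likely never reaching 100% at the top.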
 
Regarding your second question about Outcome "weights" - we plan to handle this via a Recommendation System.  When the instructor is on the Completion Marking report, each Outcome will have an associated recommendation for the student based on their performance against content.  A streaks-based or probability-based recommendation plugin may not care about a weight - it may make its recommendation entirely on consistency.  But we do plan to make useful bits of information available to the recommendation plugin, such as mingrade, maxgrade, rawgrade, even the passing grade threshold (where available), so that if the plugin wants to weight each attempt based on the number of points for that attempt (which is certainly reasonable!), it may do so.
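A streaks-based plugin of the kind described above might look roughly like this. The plugin API, function signature, and default pass threshold are all invented for illustration; only the data points (rawgrade, mingrade, maxgrade, passing grade threshold) come from the description above.

```python
# Hypothetical sketch of a streaks-based recommendation plugin:
# recommend marking the Outcome complete after N consecutive passes.
def recommend(attempts, streak_needed=3):
    """attempts: dicts with rawgrade/mingrade/maxgrade and optional passgrade."""
    streak = 0
    for a in attempts:
        span = a["maxgrade"] - a["mingrade"]
        # Use the passing threshold where available; otherwise assume
        # (illustratively) that half the grade span counts as a pass.
        threshold = a.get("passgrade", 0.5 * span)
        passed = (a["rawgrade"] - a["mingrade"]) >= threshold
        streak = streak + 1 if passed else 0
    return "complete" if streak >= streak_needed else "keep working"


attempts = [
    {"rawgrade": 6, "mingrade": 0, "maxgrade": 10},
    {"rawgrade": 8, "mingrade": 0, "maxgrade": 10},
    {"rawgrade": 9, "mingrade": 0, "maxgrade": 10},
]
print(recommend(attempts))  # "complete": three consecutive passes
```

A probability-based plugin would consume the same data points but replace the streak counter with, say, an estimated probability of mastery.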
 
Doug Loomer
Re: MoodleRoom's proposed Outcomes changes
 

Hi Kris,

Thanks for taking the time to make such a thoughtful reply to my questions.

On the issue of aggregate reporting usefulness, I suppose the major thing would be to make sure the fields used for data capture would permit a report module to query for any or all hierarchy levels.  That way report modules could be written to address different user needs.  I am so pleased that you are considering these issues!

In thinking about the Recommendation System you describe, it strikes me that there may well be many situations in which streaks-based or probability-based recommendations are not appropriate for analyzing the data set.  It seems to me that those analysis tools fit a situation in which computer-assisted assessment presents multiple opportunities to achieve mastery of an issue within a single testing event (or a tight series of them) using the same testing modality.  They may not be as applicable to a set of responses gathered via different question modalities over the course of an entire semester or year.  Just a thought. 

As to the information bits to be made available to the recommendation plugin, I hope you will include the date of assessment.  A primary premise of standards based assessment (at least as it is described in the US) is that students are like popcorn (each kernel pops when it is ready), and so the primary question becomes what did they ultimately learn, not what is the average of their learning over the breadth of the course - often referred to as the last, best data.  Again, getting the data into properly granular fields is the issue (in terms of ultimate data mining), and I'm hoping the design will err in the direction of particularity rather than exclusion.
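The "last, best data" idea above is simple to express once attempts carry a date. This sketch (invented data and field names, not any proposed API) takes the best score among the most recent attempts instead of averaging the whole semester:

```python
from datetime import date

# Illustrative attempts against one Benchmark over a semester.
attempts = [
    {"date": date(2013, 9, 10), "score": 0.4},   # early, pre-popcorn
    {"date": date(2013, 11, 2), "score": 0.7},
    {"date": date(2014, 1, 15), "score": 0.9},   # what was ultimately learned
]

def last_best(attempts, window=2):
    """Best score among the `window` most recent attempts."""
    recent = sorted(attempts, key=lambda a: a["date"])[-window:]
    return max(a["score"] for a in recent)

print(last_best(attempts))  # 0.9: the early low score no longer counts
```

A simple mean over the same data would report 0.67, which is exactly the distortion the "last, best" premise is trying to avoid.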

Thanks again for the info.  No need to respond.  Keep up the great work!

Doug

 
Kris Stokking
Re: MoodleRoom's proposed Outcomes changes
 

The Streaks and Probability recommendations are just examples that would be available for implementation.  They won't be useful in every situation, and I could see plenty of reasons why an institution wouldn't want to use them at all.  The main point is that we will be capturing data about the attempt to support recommendations based on streaks, probability, grading thresholds, and more should the administrator choose to configure those plugins.  We'll also capture the date the attempt was made - my previous list of data points was not exhaustive.

It may even be possible for the plugin to query for additional information based on the mapped content, but admittedly we have not gotten that far.  We're aligned with you in wanting to provide as much useful data as possible to query against, but we need to do so in a way that is both elegant and highly performant.

 
Phill Miller
Re: MoodleRoom's proposed Outcomes changes
 

Folks, 

As you can see in the thread, our technical guys have made huge progress on the tech spec side, and we are getting closer to really starting the coding.  During the tech review, we uncovered some possible issues, and so we have refined the functional specification with a number of changes.  Some of these were due to technical constraints (reporting against Completion rather than against activity, so as not to have performance problems on the logs), and others were due to actually working through some use cases technically (after a review of a lot of the various state standards, their XML, and the nesting requirements, we made some changes to the format of an outcome set and how they are nested).  I still have a few more changes, which I'm hoping to get in before the Developer meeting tonight, but I thought I would post now so that anyone who is going to the developer meeting could have time to review beforehand, if they are interested.  

http://docs.moodle.org/dev/Outcomes_Specification_Change_Log

Thanks! 

Phill
