This thread is about two related but very distinct things:
1) Improving item analysis efficiency
Believe it or not, the current code is already a lot more efficient than it was at the beginning of this report!
But I agree it could certainly be improved a lot more. Enrique would surely not be mad at me if I say he wasn't an SQL guru when he wrote this report, and neither was I when I took on the task of maintaining his code.
One of the problems is the number of options the code needs to support:
- You can analyze just one quiz attempt for each user. This particular attempt may be the one with the highest overall score, the first attempt, or the last attempt performed. Alternatively, the data from all attempts may be combined for a cumulative analysis.
- Some attempts can be excluded from the analysis by setting a lower limit on the score of the attempts to analyze. This limit is specified as a percentage (0-100) of the maximum grade achievable in the quiz. (A sketch of what this selection logic has to do follows this list.)
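Just to make those combinations concrete, here is a minimal sketch in plain PHP of what the selection has to do for each user. The function and variable names are made up for illustration; this is not the actual report code:

    <?php
    // Hypothetical sketch: pick which of one user's attempts enter the analysis.
    // $attempts: array of objects with ->attempt (1, 2, 3...) and ->sumgrades.
    // $mode: 'all', 'best', 'first' or 'last'; $lowlimit: 0-100 (% of $maxgrade).
    function select_attempts($attempts, $mode, $lowlimit, $maxgrade) {
        // First drop the attempts scoring below the threshold.
        $minscore = $maxgrade * $lowlimit / 100;
        $kept = array();
        foreach ($attempts as $a) {
            if ($a->sumgrades >= $minscore) {
                $kept[] = $a;
            }
        }
        if (empty($kept) || $mode == 'all') {
            return $kept; // cumulative analysis uses every remaining attempt
        }
        // Otherwise keep exactly one attempt according to the chosen mode.
        $chosen = $kept[0];
        foreach ($kept as $a) {
            if (($mode == 'best'  && $a->sumgrades > $chosen->sumgrades) ||
                ($mode == 'first' && $a->attempt   < $chosen->attempt) ||
                ($mode == 'last'  && $a->attempt   > $chosen->attempt)) {
                $chosen = $a;
            }
        }
        return array($chosen);
    }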
But everyone really using the analysis report will agree with me that these options are truly necessary, and removing them would greatly reduce the report's usability.
Another problem is that at the time this report was written, we had to support very old MySQL versions. Now that MySQL 4.1 is the minimum requirement, I think some of the queries could be rewritten, resulting in a great gain in efficiency.
2) Support more question types in the item analysis report
As Pierre said, the analysis report needs to call the get_question_responses function, which in turn calls the get_all_responses method of the question type.
It is very interesting to read MDL-5379 "Analysis report should not have a list of accepted types" again, because it was at that time that we had to decide how the analysis would determine whether a qtype is supported or not.
I am strongly in favor of specific get_all_responses methods for some qtypes, but Pierre is right: the current code is written with the assumption that the question can return a list of all possible student responses. More exactly, this assumption is not made by the analysis report but by Classical Test Theory: how could we calculate parameters such as the facility index or the discrimination index if we can't get a list of all responses?
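For readers who have never met these parameters, the usual textbook definitions are roughly as follows (conventions vary a little from one textbook to another, and the report may differ in some details):

    F_i = \bar{x}_i / x_i^{max}    (facility: the mean score obtained on item i divided by its maximum score)
    D_i = (U_i - L_i) / n          (discrimination: correct responses to item i in the top-scoring group minus those in the bottom-scoring group, divided by the group size n)

Both clearly require knowing every student's response to every item.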
This list of all possible responses can either live in ->options->answers and be fetched by the default get_all_responses, or be constructed by a get_all_responses method specific to the question type.
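To illustrate the two paths, here is a hypothetical sketch. The method name and the ->options->answers structure come from the discussion above, but the class names, the data layout of the second qtype, and the exact signature are invented for illustration; the real method in questiontype.php returns a richer structure:

    <?php
    // Hypothetical sketch of the two ways a qtype can supply the response list.
    class default_questiontype {
        // Default path: the possible responses are already stored with the
        // question, so just read them out of ->options->answers.
        function get_all_responses(&$question, &$state) {
            $answers = array();
            foreach ($question->options->answers as $aid => $answer) {
                $answers[$aid] = $answer->answer; // text of each possible response
            }
            return $answers;
        }
    }

    class multipart_questiontype extends default_questiontype {
        // Specific override for a qtype whose possible responses are not
        // stored directly: build them from the parts of a multi-part question.
        // (This class and its data layout are invented for illustration.)
        function get_all_responses(&$question, &$state) {
            $answers = array();
            foreach ($question->options->subquestions as $sid => $sub) {
                foreach ($sub->answers as $aid => $answer) {
                    $answers[$sid . '-' . $aid] = $sub->questiontext . ' -> ' . $answer;
                }
            }
            return $answers;
        }
    }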
Can some test experts answer the following question:
For a question where a student must answer several subquestions, is analyzing each subquestion separately the only sensible way to go?
Tim, I am very interested to read your specs, because even though I live in a country (France) where most teachers (including those using MCQs) have never heard of the facility or discrimination index, I use item analysis quite a lot.