Losing track of how students are doing (pedagogy, not software)

by Ben Davis
Number of replies: 6
Hi all,

I had to throw out an observation and see what others have experienced or done. I am coming to the end of my first year using Moodle as an instructor and am starting to reflect on its use. There have been some tremendous successes (you know them all). Mine is basically a hybrid course at this point.


One thing is haunting me. Because I no longer grade many assessments, I feel like I don't know as much about the students anymore. It seems like I just put the manual assignment grades in and don't check the course grade often enough to see where students stand. In transferring grades into our district's program, I was surprised by a few grades this quarter. That didn't seem to happen in the past. I feel like I have become somewhat disconnected.

Also, when looking at actual assessments, I am looking more at the test itself (discrimination values for each question) than at the students' performance. On paper, who wants to count the percentage right and wrong for each question? But now that is the data I am interested in.


Has anyone else had a similar disconnect? Does Moodle have any tools to help you keep an eye on students whose grades change? Any best practices anyone has discovered?

I give weekly quizzes, but they are open book, open note, multiple attempt, drawn from a large question bank, highest score counts. I know the literature does not have many good things to say about this type of assessment, but the students seem to enjoy it and get a lot out of it. By doing this, I don't know how students are doing week by week.

Maybe some of this is good. These days, when a student comes to me, I have them explain their grades to me (since they are all online). Maybe they have taken ownership of them?

Any advice / ideas?
In reply to Ben Davis

Re: Losing track of how students are doing (pedagogy, not software)

by Tim Hunt
Interesting observations. I don't feel qualified to comment on your actual questions, since I don't teach myself (at least not in a formal sense). Thank you for sharing.

However, as the developer of the quiz module, I was surprised by your remark "the literature does not have many good things to say about this type of assessment".

What literature? What you describe is exactly how I like to hear of the quiz being used: as an activity that helps students to learn, and which engages them. So if you can point me at any specific bits of literature, I would be interested to read about the opposing viewpoint. Thanks.
In reply to Ben Davis

Re: Losing track of how students are doing (pedagogy, not software)

by Irmgard Willcockson
Interesting observations, Ben. I've been using Moodle for almost 3 years now, and really like the ability to have weekly quizzes. I should say that I teach health informatics at a small graduate school in Texas.

Our approach to quizzes is very different, however. The quizzes are timed (25 minutes for about 12-15 questions) and single attempt. Open book, open everything, of course. Sixteen quizzes account for 20% of the students' grade. We also have a timed midterm and final exam.

I've been looking at several issues surrounding grades. One is that the first four quiz scores are highly predictive of the final course grade. This has been submitted to AMIA; we won't know until June whether it has been accepted. As a result of this research, I can identify students at risk of doing poorly early on, and our next step is to try to intervene, providing study skills coaching, for example.
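
To make that concrete, here is a rough sketch in Python of the early-warning check. The file and column names (grades.csv, quiz1-quiz4, final) are just placeholders for a gradebook export, not anything Moodle produces directly:

```python
# Rough sketch: flag at-risk students from early quiz scores.
# Assumes a gradebook export with one row per student and hypothetical
# columns quiz1..quiz4 (first four quiz scores, in %) and final.
import pandas as pd

df = pd.read_csv("grades.csv")
early = df[["quiz1", "quiz2", "quiz3", "quiz4"]].mean(axis=1)

# How predictive is the early-quiz average of the final course grade?
print("correlation with final grade:", round(early.corr(df["final"]), 2))

# Flag students whose early average falls below a chosen cut-off
at_risk = df.loc[early < 70, "student"]
print("consider intervening with:", ", ".join(at_risk))
```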

In summary, I feel that I have more information than ever to help me identify at-risk students. The midterm grades can be a surprise in some cases, but overall we have good confidence that we can tell who will do poorly. As an aside, I'm not happy with the Moodle gradebook; it does not help me get the data I need.
In reply to Irmgard Willcockson

Re: Losing track of how students are doing (pedagogy, not software)

by Ben Davis
To clarify, I teach 9th and 10th grade Biology (15-16 year olds). The course structure is a hybrid, with most assessments and some assignments given on Moodle, but the majority of class time is F2F. Access to computers is limited!

@Tim,

From what I have been reading here, people have stated that multiple attempts at a large question bank don't yield good data, because students really blow off the first and second attempts and use the later attempts for scoring, since by then they have some of the answers in front of them. I have tried using time delays to discourage this, but it only hurt the folks who don't have internet at home (since they could only get one attempt in per class period).

I generally have a 20-question bank for a 10-question test. I can see how a much larger (100-question) bank could increase effectiveness, but it would also increase the possibility of covering only one topic and missing another.


Ideally, after a quiz, I should have a sense of what students know, so that I can reteach what was missed. Using the multiple-attempt model, I don't have that data. I have yet to dive into adaptive mode... or maybe two attempts with adaptive scoring.


I think, ideally, if I could increase access to computers, I could make worksheets out of the lesson/SCORM/quiz modules (structured similarly to my current quizzes) and then have the assessment be a one-shot deal. This way I would know what the students know. I really think I could make some neat stuff using Cloze questions!


Does that clear up where I am coming from? I love how it works, and I do believe the students use it for all it is worth. What I am trying to figure out is how to get data on what the students know so I can make instructional decisions. I have considered two quizzes a week: one multiple attempt, and one one-shot (with the data coming from the one-shot quiz).

@Irmgard
You hit exactly what I am talking about. I need to transform my quizzes into a tool that gives me data to guide instruction, BUT I also don't want to lose the students' desire to keep trying and figure out an answer. I think that is what I am trying to work out.

As for the Moodle gradebook, I am glad to see it is an area of active development. There are some needs [AJAX, displaying the possible points for an assignment on the user report], but overall I am thrilled with it! I have kept all my grades on it, and my students have been very interested in it. I think that for the first time they actually understand weighted grades. They track their grades and can explain them. The grades went from being 'mine' to 'theirs', and that has been a great thing.
In reply to Ben Davis

Re: Losing track of how students are doing (pedagogy, not software)

by Tim Hunt
Thank you for clarifying.

With 10 questions drawn at random from a bank of 20, then yes: on the 2nd attempt the student will, on average, have already seen 5 of the 10 questions, and sometimes more.
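
If you want to check that for your own bank sizes, the overlap between two attempts follows a hypergeometric distribution. Here is a quick back-of-envelope sketch using SciPy (just illustrating the 10-from-20 case, not anything built into Moodle):

```python
# Overlap between two quiz attempts: 10 questions drawn at random from a
# bank of 20, where 10 were already seen on the first attempt.
from scipy.stats import hypergeom

bank, seen, drawn = 20, 10, 10
overlap = hypergeom(bank, seen, drawn)   # number of repeated questions

print("expected repeats:", overlap.mean())          # 5.0
for k in range(drawn + 1):
    print(f"P({k} repeats) = {overlap.pmf(k):.3f}")
```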

If you want a larger question bank but still want to ensure that the student gets some questions on each topic, you can put the questions into different categories and build the quiz as, say, 5 questions from topic 1 and 5 questions from topic 2.

The extreme version of that is when you have 10 categories called 'Variants of Q1', 'Variants of Q2', ... I explain that in more detail on Effective_quiz_practices#Robust_testing_with_random_variants.

The other thing to bear in mind is that this year you had to write the 20 questions for each test from scratch. Next year, you will already have all those questions in your question bank, so hopefully you can do less work. Maybe write 10 more for each test to bring the pool up to 30, perhaps delete a few old ones that did not work out, and then you can think more about how those questions are combined into tests.

Anyway, online testing tends to involve a big up-front investment of time as you write the questions, particularly if you want to build in specific feedback. But then, when you run the tests, the computer does all the grading for you. Other forms of assessment tend to be the other way round: it is marking all the responses by hand that takes much more time than preparing the questions.
In reply to Tim Hunt

Re: Losing track of how students are doing (pedagogy, not software)

by Ben Davis
Thanks for the link. I will be reading and thinking about that for quite some time.

Another teacher and I ended up testing on the ecology unit on the same day. I walked into the planning area and saw him feverishly tearing through page after page, marking items right and wrong. It took him most of the period (50 minutes) to go through one class's tests and correct them, never mind grade them.

I later showed him my test, the grades, and then where I had found one bad question based on a negative discrimination value, and four fair questions with a low percent-correct rate.

Pedagogically speaking, at the end of the day (OK, long after...), he had a general feeling for how his students did. He knew who did well and who didn't, because he saw all of them. He might have sensed which questions were good and bad, but he did not have real data on that dimension.

At my end of the day (much quicker), I had information about the test, but felt like I didn't know how the students did. I don't think many of my colleagues routinely look at discrimination values on tests (or even do much item analysis). I think I am ahead of them there, but I need to condition myself to go back and systematically review the student data in some structured way.

[For tests, all students get the same ~40 questions, just shuffled]
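
For anyone curious what that item analysis boils down to, here is a small Python sketch of the two numbers I look at. The 0/1 response matrix is invented for illustration; Moodle's quiz reports compute these for you:

```python
# For each question: a discrimination value (correlation between getting
# the item right and the score on the rest of the test) and the % correct.
import numpy as np

# rows = students, columns = questions (1 = correct, 0 = wrong)
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
])

for q in range(responses.shape[1]):
    item = responses[:, q]
    rest = responses.sum(axis=1) - item    # score on the remaining items
    disc = np.corrcoef(item, rest)[0, 1]   # negative => suspect question
    print(f"Q{q + 1}: discrimination {disc:+.2f}, {100 * item.mean():.0f}% correct")
```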

=====

And yes, there is a lot of up-front work. But it really was great to just grade the essays as they came in and, honestly, be done with a test before the students even left the room. That in itself is amazing.

===

As for my weekly quizzes, I will see what I can do to expand the question bank. That does seem to be the answer: a question bank large enough to discourage gleaning answers from previous attempts. I also see that Outcomes is an area under development (I haven't gotten to play with it yet). It would be useful to tag quiz items with a standard and see progress toward that standard.
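
To sketch what I am imagining (the standard codes, question tags, and results below are all made up, this is not the Outcomes feature itself):

```python
# Roll up quiz results per standard: map each question to a standard,
# then compute the % correct per standard to decide what to reteach.
standards = {"Q1": "BIO.2.a", "Q2": "BIO.2.a", "Q3": "BIO.3.c", "Q4": "BIO.3.c"}
results = {"Q1": [1, 1, 0, 1], "Q2": [1, 0, 1, 1],
           "Q3": [0, 0, 1, 0], "Q4": [0, 1, 0, 0]}   # 1 = correct

totals = {}
for q, std in standards.items():
    right, count = totals.get(std, (0, 0))
    totals[std] = (right + sum(results[q]), count + len(results[q]))

for std, (right, count) in sorted(totals.items()):
    pct = 100 * right / count
    print(f"{std}: {pct:.0f}% correct -> {'reteach' if pct < 60 else 'OK'}")
```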
In reply to Ben Davis

Re: Losing track of how students are doing (pedagogy, not software)

by Paul Ganderton
Hi Ben,

I've been reading and re-reading your post and the replies, and I'm more confused than when I started! Is your concern that you are losing sight of your students' learning processes, i.e. they become statistics more than people (which is what your post title implies), or is it that you are concerned that quizzes might not be the most effective vehicle for learning?

If it's the former, then I might suggest you open up your course site to more interaction with students. My situation is similar to yours in terms of student age range and teaching situation. I pepper my courses with surveys, forums, etc. to make sure students are connected to the learning. Younger students go for the gradebook, as you suggest; they like the idea of seeing grades "develop". Unless I need to do otherwise, all my assignments are online/upload and marked without any paper involved. IMHO it's about 2 to 3 times quicker and far easier to give feedback (especially on a design project where I altered the design in front of the student to show what effects I was looking for), and the feedback, being emailed out, saves class time and avoids the negative comments that can often go with "sharing" grades in class.

Another idea I'm trialling this year is to give two senior classes different Moodle permissions to see which group takes more ownership of their work. One has the traditional student role and the other has full teacher rights (minus gradebook editing ;) ). It'll be interesting to see what happens. Both groups are very vocal about their needs, which suggests I'm on the right path. It helped that one assignment was to answer a survey - they had to tell me what they thought of their sites.

If it's the latter, then the quiz link is, as you noted, very useful. If your concern is pedagogical, might you not link the various question formats in Moodle to specific outcomes for the course? This would be sound educationally/pedagogically and give variety for students. I am also trialling a spreadsheet (which should be possible in the gradebook, but I haven't tried it) where I record not only grades per assignment and per outcome, but also assignment, class, and year means and standard deviations. In addition, I calculate each student's grade as an index number set against the year mean for that task, and will now include the same calculation against that student's past performance (i.e. both summative and ipsative assessment). This gives the numbers you need for report writing, comments, etc., and also a good sense of how the student is performing (students who see how their grades work soon make the link between performance, grade, and effort!).
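
To show the arithmetic, here is the same calculation sketched in Python/pandas rather than a spreadsheet; the task names and grades are invented:

```python
# Per-assignment mean and standard deviation, plus each grade as an
# index number against the year mean for that task (100 = exactly average).
import pandas as pd

df = pd.DataFrame({
    "student": ["A", "B", "C", "D"],
    "task1":   [62, 75, 81, 58],
    "task2":   [70, 68, 90, 55],
})

tasks = ["task1", "task2"]
print(df[tasks].agg(["mean", "std"]))   # assignment-level statistics

# Index per student per task: 100 * grade / year mean for that task
index = (100 * df[tasks] / df[tasks].mean()).round(1)
print(pd.concat([df["student"], index], axis=1))
```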

Hope this helps,

Paul