At the Open University we do periodic user testing (by employing some of our real students to test). However, we generally concentrate on our own new features rather than Moodle core.
By its nature, user testing is not a reliable method for drawing firm conclusions like 'is this usable?' or 'does this suck?'. This is true of virtually all user testing: it takes an extremely small sample of people, who are self-selected and may not be entirely representative (i.e. we don't offer that much pay, so you have to be either really in need of money, or really motivated to help us improve), and it puts them in an unrealistic situation, and so on. However, you can learn valuable individual points from the individual users involved. Also, our sample should be not-that-unrepresentative, because our students are generally adults from a wide age range (as compared to studies using students at a 'traditional' university, where they're all young adults).
When I see videos of user testing I am always shocked. Here are two recent fun examples:
1. Student who couldn't find the 'save' button on a form because it was off the bottom of the screen (yes, he knew how to use scrollbars... he just didn't think to scroll). As a lesson learnt from this, we are placing a solid background colour on all forms in our theme (instead of the current thin border). My hope is that, because this clear chunk of background colour extends right to the bottom of the visible display, it will make it clear that there is more form below. I don't know if this will actually work or not.
2. Student who couldn't figure out how to create a page in our wiki. He clicked the help button next to 'Help with creating pages', and appeared to be reading the popup help, but did not get as far as the heading midway down the help screen 'Steps to create a wiki page' or the 1, 2, 3, 4, 5 numbered list below it.
The lesson learnt from this is basically that not only do people usually not read online help, but even when they think they're reading online help, they might not be. So we need to make sure that all necessary information is available on the main screen, while keeping it extremely short (or people won't read that either).
In this case we are adding a 'Create new wiki page' button at the bottom of the wiki screen.
Obviously our students are not at all stupid (they are studying at university level). So the third thing to learn, from these examples and others, is that end users, without being stupid, are incredibly bad at using computer software. I mean that in the literal sense of incredible - as in, they are so bad that we as developers and expert users actually can't believe it.
And it's not just some people who are bad at using computer software. As I said, these usability tests involve very small samples, but I don't remember ever seeing a usability test report in which even a single person got through without huge trouble using at least one relatively straightforward part of the software (e.g. scrolling, or reading help).
One other thing I wanted to say about user testing is that any suggestions that test users make directly will almost certainly be unhelpful. They don't know how to use the software - why should we expect them to design it for us? But that's not a problem - the valuable information they provide is basically what they do (and what they say about why they couldn't figure it out). From that point it's up to designers to come up with feasible ideas that might improve the situation.
And finally, I think user testing can be helpful but it's slow and expensive and should probably be the last resort. There are lots of cases where you can already see (without involving users) that something really sucks; there's no point paying people to tell you that. Fix it first, then do user testing... But in our case, user testing also provided a justification to schedule the development work to make these improvements.
PS As with everything I say, these are all my own opinions and not my employer's.