This is the plan at the moment:
1) Add a new tag across the stable branch called something like MOODLE_19_REVIEWED (to start with it would be the same as MOODLE_19_STABLE)
2) Once a week (currently we're thinking Tuesday) all developers stop coding and spend the day (or as long as it takes) reviewing and testing new STABLE code from the past week, bumping the MOODLE_19_REVIEWED tag on those files in CVS when finished. We'll also update bug status from Resolved to Closed at the same time.
3) The download page will feature the latest stable packages from Wednesday as the ones likely to be the best possible version available. I may even stop putting up dailies and put up weekly builds instead.
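In CVS terms, step 2 might look something like the following sketch (the file name here is just an example, not a specific file queued for review):

```shell
# Create the review tag across the stable branch (done once;
# initially it points at the same revisions as MOODLE_19_STABLE):
cvs rtag -r MOODLE_19_STABLE MOODLE_19_REVIEWED moodle

# After reviewing a file on Tuesday, bump its REVIEWED tag to the
# revision just reviewed (-F moves an existing tag):
cvs tag -F MOODLE_19_REVIEWED lib/moodlelib.php
```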
Does anyone have any more ideas about this? Unless there are any serious objections or alterations, I intend to start this from next Tuesday.
Yes, yes, yes! I would even say every 14 days, judging by the issue tracker. This takes the pressure off the main developers. The problems are not the core files; the problem - and here I am on your side - is the quality management. Some examples:
I posted a bug: six big Moodle installations with more than 30,000 users, whose admins upgraded to 1.9, have the problem that they cannot restore the frontpage course. How long do you think I should wait before asking again? I cannot wait, Martin; they are clamouring for help, and all I can do is create a new comment! This is a bad scenario for everyone. I'm sure you understand that the schools are very unhappy with this situation:
- MDL-10708 (Open since August 07)
- MDL-13897 (Open for 14 days; sorry to say, I think I must wait some weeks more)
These two bugs do not harm the whole system, but the bug below does:
- MDL-13896
If the developers can find more time to fix these kinds of bugs and problems, I will happily wait two weeks for new releases!
Andy
For your issue: there are a lot of outstanding bugs in the tracker and a limited number of developers to fix them, so we need to prioritise.
I'd love it if more developers from the community were willing to help fix these things but by far the majority of bugs end up with the same small core group.
Perhaps you can convince those big clients of yours to help fund some more developers to help us.
What kind of review/testing will take place? Review of individual bugs, system testing, a combination? How do developers standardise or unify their approach to this?
1) testing the interface (if that makes sense) to make sure the fix works as it's supposed to.
2) reading the code and looking for mistakes.
This filter could be useful ... basically I was thinking first-come first-served on all the ones marked "Resolved" and not closed.
http://tracker.moodle.org/secure/IssueNavigator.jspa?mode=hide&requestId=10543
We can also generate a listing of files that have not been checked yet (looking for REVIEWED tags that are not up to date).
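One way to generate such a listing might be cvs rdiff in summary mode (a sketch, assuming the MOODLE_19_REVIEWED tag proposed above is in place):

```shell
# Summarise files whose MOODLE_19_STABLE revision is ahead of the
# last reviewed revision (-s prints a summary only, no diffs):
cvs rdiff -s -r MOODLE_19_REVIEWED -r MOODLE_19_STABLE moodle
```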
It would be good to standardise the approach so it's obvious and simple. I'm assuming we'll improve as we go along!
I use a "slightly modified" Moodle as a school server, and it is one of those non-CVS sites.
But I am not a programmer, so I do not understand CVS when it finds a conflict and asks me to resolve the difference.
I know which code I added myself, so..
I found this idiot-proof procedure for myself to maintain my main website:
- My NEW 1.9 main Moodle website runs on Linux (Suse)
- I installed XAMPP on my laptop
- I copy my patched 1.9 website code to a subdir in /htdocs of XAMPP (= /SWP)
- I also installed a TortoiseCVS checkout of the latest Moodle in another subdir of XAMPP (= /moodleCVS)
- I did this once: I ran a comparison with WinMerge to identify my local changes against the latest CVS.
(WinMerge can ignore differences in end-of-lines between Windows and Linux and even finds differences in the number of white spaces; another tool is Kdiff3.)
- In every directory where I found one or more files which I had changed or added, I put a TXT file describing my changes in the style of:
CODE WAS: ... (range of lines, not only the line number that is touched)
CODE NEW: ...(same range of lines with the changes in the middle)
- On my laptop (under XAMPP) I do a weekly CVS update (is your advice Friday?) of my /htdocs/moodleCVS and look at the files which Tortoise marks as changed:
- When files are new or changed and I did not touch them:
I copy them blindly from /htdocs/moodleCVS to /htdocs/SWP.
- When files are changed in CVS AND I changed the same files in my local Moodle (I check that against my TXT file in that dir):
I compare them with WinMerge (see picture) and merge the changes from moodleCVS into my local files by hand, with only the visual help of WinMerge. As long as the new changes from CVS do not touch my own local changes, there is no need to change my TXT doc in that dir.
- Then I try to run the updated /htdocs/SWP Moodle under XAMPP.
- If that works, I copy the changed files from /htdocs/SWP to my school Moodle.
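For what it's worth, the same keep-a-pristine-copy-and-re-apply idea can be scripted with plain diff and patch (a minimal sketch with made-up paths and file contents, not the actual Moodle tree):

```shell
# Set up a pristine (CVS-like) copy and a locally patched copy:
mkdir -p /tmp/swp-demo/pristine /tmp/swp-demo/patched
printf 'line1\nline2\n' > /tmp/swp-demo/pristine/lib.php
cp /tmp/swp-demo/pristine/lib.php /tmp/swp-demo/patched/lib.php
echo 'my local change' >> /tmp/swp-demo/patched/lib.php

# Record the local changes once (the equivalent of the TXT file):
diff -u /tmp/swp-demo/pristine/lib.php /tmp/swp-demo/patched/lib.php \
  > /tmp/swp-demo/local-changes.patch || true

# Simulate a CVS update changing the pristine copy upstream:
printf 'line0\nline1\nline2\n' > /tmp/swp-demo/pristine/lib.php

# Rebuild the patched copy: fresh upstream plus the recorded local patch.
# patch re-applies the change by context, so it survives the new line0:
cp /tmp/swp-demo/pristine/lib.php /tmp/swp-demo/patched/lib.php
patch /tmp/swp-demo/patched/lib.php < /tmp/swp-demo/local-changes.patch
```

This only works smoothly while the upstream changes don't overlap your own, which is exactly the case where the manual WinMerge merge is easy too.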
I think that the other Martin will fall off his chair and say: "what you do by hand is exactly what CVS does", but this way gives me more of a feeling of human control. Maybe I need a crash course in CVS? The free manual is not enough.
"..When files are new or changed while I did not touch them.."
This might be a usable procedure if the number of changes is small and the changes are localised, but it might be much harder to follow when there are larger divergences...
I never knew how to get rid of the first line comparison (CVS automated timestamp tagging) on Kdiff3...
cvs -q update -kk ....
With Kdiff3 you can compare three versions in parallel: I need this in situations where I have:
- my own changes (like changes for the WYSIWYG editor to keep the buttons on the right in the edit box)
- changes from another - not yet - mainstream patch (like Nanogong or DrawMath now, I like both!)
- the new daily CVS
I see with Kdiff3 the three files together in the same "lines-zone" and have to decide which changes should win:
those from new CVS, those from "old CVS" (like in a Nanogong file, patched from an earlier CVS) or my own additions in the oldest CVS, all in the same "lines-zone".
By the way, I was expecting comment from the other Martin
Must I read your comment as: install and learn LINUX?
(and I think that you are right: my approach is more sensitive to human errors)
pfffff...
Finally, you need to fix the whitespace issue caused by the difference between UNIX and Windows carriage-return/line-feed line endings. If you edit a file in a Windows tool, it will leave an extra carriage return that triggers the error. The fix for this comes from Capi's Corner…
Edit .git/hooks/pre-commit and comment out the following line (c. line 58):
if (/\s$/) { bad_line("trailing whitespace", $_); }
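As an aside, the stray carriage returns themselves can be stripped with standard tools before committing, rather than silencing the hook (a sketch on a throwaway file; assumes GNU sed):

```shell
# Create a file with Windows (CRLF) line endings:
printf 'hello\r\nworld\r\n' > /tmp/crlf-demo.txt

# Strip the carriage returns in place, leaving UNIX (LF) endings:
sed -i 's/\r$//' /tmp/crlf-demo.txt
```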
"..I never knew how to get rid of the first line comparison.."
Yes, I have to copy the new time header over the older one every time..
Martin,
Do you think it would be a good idea to sync a "production" instance via CVS update on its stable branch, or might it be too risky for service stability? I actually use distinct directories for stable checkouts and for dev or production volumes.
Val.
You could use:
cvs -q update -dP -r MOODLE_19_REVIEWED
from next week for the least risk.
very good idea
I prefer having a new stable (sub)version release once a week, while being sure that it will be good (because it's painful when you update and get a new bug). People who really want up-to-date versions can still use CVS.
Séverin
- my only comment... will-try-to-resist... mentioning-using-a-better-SCM... can't do it!...
Using $DSCM there are a few very good workflows to "pull in" code only after it has one or two levels of peer-review.
Replace $DSCM with the distributed SCM of your preference
Yesterday, when I read the first messages in this discussion, I thought: "how many hours will Martín Langhoff resist?"
About the weekly generation of REVIEWED packages: I really like them, more than the current daily ones. 100% agree.
Anyway, we must be really careful to move that tag in one single transaction for the whole repository; if we do it gradually (while reviewing bugs and so on) the packages can end up in an unstable state (if the package generation starts in the middle of the process).
Sure we can do that; we just need some way to know when everything has been reviewed (in the tracker?) in order to perform the global "cvs tag -F MOODLE_19_REVIEWED" when ready.
Also, how is the review task going to be shared, by QA assignment? Do we need one list of things (bugs) to review (and QA assignments) ready each Tuesday ?
All of them are minor organisational issues, but I just wanted to share them here.
Ciao
And yes, I'm thinking we will use the QA assignment flag (that way all testers can participate) but I'm not sure if we can easily make filters that use that field currently ...
I would suggest a few things, as I have always had trouble looking at changes over an interval of time:
1. Be very clear about the time that evaluations start, and keep a 24-hour window open for comments (longer if issues are found that cannot be unambiguously resolved). This will let people in different time zones have a chance to run their testing regime.
2. In some way list all the tracker items that are affected by the changes that week, especially so we can look at the diff files and read the commentary. I currently have to sort by change date and that can mean things are missed.
3. Have a way for testers to comment that the review has been made and what was found, even if that means that there were no issues to be raised. This could be as simple as a Mediawiki entry or a tracker entry.
4. Find a way to buy a round of beers for us doing the testing
--Gary
We can test fixes any time during the week. The idea of Tuesday is just to put aside time especially for this aspect. The aim is to have the list of outstanding untested issues down to zero by Wednesday when the automatic builds will happen.
If we don't then we just don't, we are still ahead of what we are doing now. I expect the number of regressions caused by bug-fixing to be very low overall.
See http://docs.moodle.org/en/Development:Weekly_Code_Review for details on how we can see what needs testing and who is doing what.
As for beers, perhaps users should be buying them for testers! "Buy one every time you use a feature and it works perfectly!"
Here is the current process:
http://docs.moodle.org/en/Development:Weekly_Code_Review
It's our usual testing process, in fact, and a bit simpler than originally proposed (not using the MOODLE_19_REVIEWED tag yet), just to see if we can get into a rhythm.
For clarification: do non-developer QA people (like me) still have the possibility to participate in the review?
(In fact, I'm glad to see that the "Test the fix" paragraph describes the way I'm still doing it.)
The original idea was that developers review the PHP code, logic and coding style in general, which means coding skills are required.
For now we are using the Close action in the Jira tracker. This may not be optimal because it interferes with QA work and testing from the user perspective, which is of course needed too. I am sure we can work on QA process improvements more this week.
So far this Review Tuesday seems to be a success - we already found several regressions today, discovered some more bugs and reviewed a lot of code - several patches were already committed and some more are waiting for review.
Petr
Thanks for your answer. You correctly point out the "testing from the user perspective", which is what I (and other people) did.
I saw that this Review Tuesday was a success: it'll be very hard to compete with you in the next Bugathon!
Just an idea: can the Jira tracker be configured so we can add a new step to the workflow? I mean issue states:
Open > In progress > Resolved > Reviewed > Closed
Reviewers would change the status from Resolved to Reviewed, while QA testers would change the status from Reviewed to Closed.
All the MOODLE_XX_WEEKLY tags (and the associated weekly packages) are now being generated automatically at about 1:00 GMT on Wednesday morning.
http://download.moodle.org
MOODLE_19_WEEKLY is the recommended tag to use with CVS for production servers.
So going forward, let's all try to keep to this sort of pattern:
Tuesday: testing and fixing of regressions ONLY, no new bug fixes
Wednesday: safest time for new bug fixes and larger patches
Other days: bug fixes and testing as usual
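For the curious, the automated weekly tagging could be driven by something as simple as a cron entry (a hypothetical sketch, not necessarily the actual moodle.org setup; the CVSROOT path is made up):

```
# Every Wednesday at 01:00 GMT, move the WEEKLY tag to the current STABLE state:
0 1 * * 3  cvs -d /path/to/cvsroot rtag -F -r MOODLE_19_STABLE MOODLE_19_WEEKLY moodle
```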
Does this affect the way we check corrections in at all? That is, do we need to add a new tag of any sort?
mike
The full process is here:
http://docs.moodle.org/en/Development:Weekly_Code_Review