
Software metrics and Moodle

This weekend I attended the phpnw11 conference in Manchester - a good conference with a lot of interesting talks which I'd highly recommend to any PHP programmer.


Not entirely accidentally, I went to a number of talks focused on testing and continuous integration, and came home with quite a lot of bits and pieces I wanted to play with. Sebastian Marek gave an excellent talk on software metrics (slides available) as the first track talk of the day. Obviously I can't summarise his whole talk, but he introduced us to the concepts of cyclomatic complexity, NPATH, C.R.A.P. and WTFs/min, as well as tools which can be used to measure them and other assessments of code quality. Sebastian made the point quite strongly that you can't just use these metrics on their own, but that they can be good indicators when combined with analysis of the code.
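
As a quick illustration (my own sketch, not an example from the talk): cyclomatic complexity is essentially a count of the independent paths through a piece of code - you start at 1 for the function itself and add 1 for every decision point such as an if, elseif, loop or case. So even a small function accumulates a score:

<?php
// Complexity starts at 1 for the function entry point...
function grade_label($score) {
    if ($score >= 80) {          // +1
        return 'distinction';
    } else if ($score >= 50) {   // +1
        return 'pass';
    }
    return 'fail';
}
// ...for a cyclomatic complexity of 3. phpmd's codesize ruleset
// flags a method once it climbs past a configurable threshold.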

This got me wondering: would the modern parts of Moodle be analysed more favourably than the older parts which developers long to refactor away?

Testing 1.9 vs 2.2

So I conducted a test using PHP Mess Detector (phpmd) and its code size rules on the question engine in Moodle 1.9, and compared that with master, which includes the rewritten question engine that Tim spent a good portion of the last year working on ((and I hope he doesn't mind me using it as an example!)).

~/git/moodle$ git checkout MOODLE_19_STABLE
~/git/moodle$ phpmd question/ html codesize > question19.html
~/git/moodle$ git checkout master
~/git/moodle$ phpmd question/ html codesize > question22.html

Results

Moodle 2.2 has better metrics than 1.9!

Well, not quite - in order to get those results I had to remove all the unit tests, which weren't present in Moodle 1.9 and which also tend to be 'long, dumb' methods ((Incidentally, Laura Beth Denker also gave a great talk at phpnw11 emphasising that you should write good test code, avoiding things like conditionals - slides available)) that trigger exactly this sort of warning and inflated the results. I don't really know the code well enough to be sure, but there may be other factors too, such as question/ simply containing more code…
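
(In hindsight, rather than stripping the test files out by hand, phpmd's --exclude option would probably have done the job - something along these lines, assuming the tests all live under simpletest/ directories, though I haven't verified the exact pattern:)

~/git/moodle$ phpmd question/ html codesize --exclude */simpletest/* > question22.html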

The results matched Sebastian's point that these metrics can't be used on their own, but some of them could make very interesting data points to watch over time. Sonar was one of the tools demonstrated which can be combined with a continuous integration server and the tools which generate these metrics to evaluate and report on a codebase over time - it looks really cool and I hope to play with it soon.
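
For anyone else wanting to try Sonar, the setup appeared to boil down to a small properties file in the project root telling it what to analyse. A minimal sketch, with the project key and paths below invented purely for illustration (and PHP support needs Sonar's PHP plugin installed):

# sonar-project.properties - minimal sketch, names are illustrative
sonar.projectKey=moodle-question
sonar.projectName=Moodle question engine
sonar.projectVersion=2.2
sonar.sources=question
sonar.language=php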