I have received copies of some large (raw, unaudited) automated security test results in the past, and generally found patterns where the same type of false positive was replicated across different areas of the site, which can cause the report size to grow quickly. A large proportion of the results I have analysed were either intended functionality or based on incorrect assumptions. Some examples include:
- Reporting vulnerabilities such as XSS where it is intended functionality. For example, certain capabilities held by roles such as teachers and admins are flagged as "risk XSS" because we deliberately allow those users to publish rich content to their students; the testing tool is running as one of those "trusted" users and reporting what they are permitted to do as a flaw.
- Similarly, reporting self-XSS, where only the current user sees the content.
- Reporting CSRF on any page that does not include a CSRF token, when in fact many pages should not include one; given the size of some Moodle sites, this can generate a lot of false positives. We include CSRF tokens where CRUD operations are performed (such as enrolling a student or updating a course), so pages that add, remove, or modify data and are missing the token are valid bugs, but we intentionally do not require tokens on things like course homepages, viewing a specific forum, etc. Including tokens on those types of pages would prevent users being able to link each other to those resources.
- The testing tool modifying GET parameters in a URL to attempt actions such as page enumeration or SQL injection, then simply checking that the page resolves. The problem is that in some cases those attempts will redirect the user to their homepage, display an error containing the (escaped) string, or simply produce a valid URL. Any of these can cause the tool to detect a successful page load, and hence report a false positive.
- There are cases where the tool finds a single piece of code it suspects is an issue, then lists a new result for every piece of functionality on the site that uses that code (which in some cases means hundreds of duplicates).
I hope those examples are helpful, but please don't hesitate to ask if you have any further questions.
If you receive copies of those reports and any of the findings look like a legitimate issue, the best thing to do is determine the steps to reproduce. (If you are testing manually yourself, it is a good idea to do this on a test site, so you don't risk compromising data on a production site.) Then report it either in a Tracker issue (setting a security level of minor or serious) or by emailing firstname.lastname@example.org, so I or one of the relevant component leads can look into the case further. If there are specific items that you are concerned about, but you aren't sure whether they are a bug or intended functionality, feel free to email details of those to email@example.com, and I will take a look where possible.