Better FOSS QA Testing

I just had a random thought when I came across the following in the Changelog for VirtualBox 3.1:

Mac OS X hosts (64 bit): don’t interpret mouse wheel events as left click (bug #5049)

This seems like something basic that should have been caught in a beta release, but it wasn't, because there is no way to monitor use-case coverage. The typical QA process for most FOSS applications is to release a beta version, watch the bug reports pour in, fix them, and repeat.

This is great, except – Wait, no, that's not what I meant to say. This sucks. Of all those beta users who downloaded gizmo-0.9beta, how many actually tried using, say, the HTTP proxy feature? Or ran the application over a dial-up connection? Or ran it on 64-bit OS X?

There are all kinds of solutions to this problem, all of which could be integrated into one of the many software project management tools out there like code.google.com, Trac, sf.net, etc.

Ask for an email address before presenting the beta download link, then follow up a week later asking the user to fill out a quick report about the environment they ran the software in and, optionally, the list of the program's features they used. Click a green check next to a feature if it worked properly, and a red X if it did not. After the form is submitted, show a screen with links to the effect of "Click here to submit a bug report for the problems you experienced with xyz".
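
Just to make it concrete, here's a minimal sketch of what the data behind that report form might look like, in Python. The project name, feature names, and field layout are all made up for illustration; a real tracker integration would obviously look different:

```python
# Sketch of a beta tester's follow-up report. Feature names and fields
# are hypothetical, borrowing the "gizmo" example from above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeatureResult:
    name: str     # e.g. "HTTP proxy"
    worked: bool  # green check (True) or red X (False)

@dataclass
class BetaReport:
    email: str
    environment: str  # e.g. "Mac OS X 10.6 (64-bit)"
    version: str      # e.g. "gizmo-0.9beta"
    features: List[FeatureResult] = field(default_factory=list)

# Example: a user reporting one working and one broken feature.
report = BetaReport(
    email="tester@example.com",
    environment="Mac OS X 10.6 (64-bit)",
    version="gizmo-0.9beta",
    features=[
        FeatureResult("HTTP proxy", worked=True),
        FeatureResult("dial-up connection", worked=False),
    ],
)

# Anything marked with a red X would link straight to the bug tracker.
failed = [f.name for f in report.features if not f.worked]
print("Click here to submit a bug report for:", ", ".join(failed))
```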

Take that one step further and ask for permission to collect information about the environment the first time the application launches. If the user is willing to lose anonymity they can also enter their email address, and the form they are emailed one week later will be pre-populated with as much information about their environment/usage experience as possible.
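
Again, purely as a sketch: Python's standard-library platform module can gather most of this without any extra dependencies. Which fields count as "the environment" is my own guess here:

```python
# Opt-in environment collection at first launch; field names are
# illustrative, not any particular project's telemetry format.
import json
import platform

def collect_environment():
    """Gather basic host details a beta tester could opt in to share."""
    return {
        "os": platform.system(),       # e.g. "Darwin" on Mac OS X
        "os_version": platform.release(),
        "arch": platform.machine(),    # e.g. "x86_64" vs. "i386"
        "runtime": platform.python_version(),  # or the app's own version
    }

if __name__ == "__main__":
    # This blob would pre-populate the emailed follow-up form a week later.
    print(json.dumps(collect_environment(), indent=2))
```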

Even better, turn this data into any number of cool visualizations which are publicly available in real time. Developers can quickly see what aspects of the software are problematic and, more importantly, what environments have not been tested much (or at all). Hopefully they would then make some sort of concerted effort to get test coverage for those areas. And by making the data public and easily accessible in the form of pretty charts, users can see how well tested the application is on their particular platform/processor/new-fangled-peripheral.
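
The aggregation behind those charts doesn't have to be fancy. A rough sketch, reusing the made-up report format from above, boils down to counting reports per environment so the untested ones stand out:

```python
# Count beta reports per environment; the sample data is invented and
# mirrors the hypothetical report format sketched earlier.
from collections import Counter

reports = [
    {"env": "Windows XP (32-bit)", "feature": "HTTP proxy", "worked": True},
    {"env": "Mac OS X 10.6 (64-bit)", "feature": "mouse wheel", "worked": False},
    {"env": "Windows XP (32-bit)", "feature": "dial-up", "worked": True},
]

coverage = Counter(r["env"] for r in reports)
failures = Counter(r["env"] for r in reports if not r["worked"])

# Crude text "chart"; a real site would render this as pretty graphics.
for env, total in coverage.most_common():
    print(f"{env:28} {'#' * total} ({failures[env]} failing)")

# Environments with no bar at all are the ones crying out for testers.
```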

Okay, that’s enough ranting for tonight.

