Information Technology Dark Side

Struggles of a Self-Taught Coder


What is the purpose of software testing?

March 4th, 2008 · 16 Comments

A few weeks ago I would have thought this was an obvious question. But then I stirred up a hornet’s nest by asking a group of testers to identify common myths about software testing. The ensuing discussion was heated, interesting, and sometimes insulting. Some people became offended, and great rifts were born between some of the testers involved in the discussion.

It occurred to me later that the source of the disagreement was very fundamental, and perhaps it was so deeply ingrained in the testers’ various psychologies that they were unaware of the differences between them. I believe the two sides of the debate represented two schools of thought about the purpose of software testing.

School #1: Find bugs.

This camp just wants to find problems in software. They want to sharpen their skills at homing in on these bugs, explaining bugs to developers, and negotiating the processes that result in getting these bugs fixed. They have an affinity for things like exploratory testing, bug advocacy, energy management, oracles, heuristics, and even coding. They aren’t particularly interested in requirements, traceability, and up-front test planning, though they will do it when forced to just so they can have the opportunity to find some bugs. These testers NEVER argue about whether something is a defect or a missed requirement – it’s a defect. They are generally skeptical of automated testing and only use it when they feel there is a high likelihood it can find bugs more effectively than they could manually. They prefer small, proficient teams over swarms of testers.

The best bug-hunters can test anything at a moment’s notice. They don’t need requirements, charters, or test managers to do their job. Just give them an application and they will find bugs.

School #2: Prove the software performs as advertised

This camp wants to demonstrate that the software meets the requirements as defined in the beginning of the project. They are into requirements, traceability, and detailed upfront planning. The extremists among them believe you can test quality into a project, and that the only input for determining whether software works is formal requirements documentation. They are deeply concerned with planning the right tests, based on the requirements, that can demonstrate that the software meets the requirements. They love automated testing, test scripts, and planning. To them, test skills revolve around planning, analyzing requirements, using tools, and executing scripts.

This role is sometimes referred to as quality assurance. I think it’s a stupid, ambiguous term and that nobody knows exactly what is meant by it.

One of the outcomes of this school is specialization. The “good” testers don’t test – they write scripts based on requirements. They do this during the requirements phase of the project. Then, when code starts to ship to the test environment, they become test managers, watching over a horde of script-monkeys who execute scripts and report defects. The test manager gets the job of determining whether the defects are really defects or simply missed requirements, which only get fixed if the business agrees to pony up more moolah.

Three good bug-hunters are worth thirty good script-monkeys and their boss

As a project manager, I don’t really have much use for school #2. There simply isn’t much value in “proving”, on a line-item basis, that the software my teams build meets the requirements. The only thing that really matters to me is whether the business accepts the software as meeting their needs, requirements be damned. My main concern in shipping software is that it 1) works and 2) has value. Oddly enough, the quality assurance school of software testing hasn’t helped me in either respect.

I find bug-hunters are much better at making sure the software works and that it meets the business’s needs. They tend to be good collaborators, and have very curious natures that drive them to interact with business partners and figure out their perspectives on the application being tested. They are capable of using requirements to figure out what the software is supposed to do, but they won’t rely on it exclusively. They are flexible and comfortable with ambiguity. This makes them very valuable team members.

School #2 is crippled by its rigidity

The quality assurance approach only works well under ideal circumstances. My projects are never ideal. They are chaotic, crazy, short-term efforts that move rapidly and don’t rely on precise, complete documentation. They are a lot of fun and emphasize collaboration heavily. QA testers don’t fit in on my teams. They are too uptight, too focused on documentation, and too inflexible. I can’t use them.

Both schools tend to have a near-religious fervor about their beliefs

For the testers who are very deeply rooted in their schools, discussion doesn’t really help much. You can’t talk school #2 into school #1, and vice versa. The debate between these schools reminded me of biblical scholars with different views on the literal nature of scripture arguing whether Jonah really was swallowed by a whale, or of the fervor with which political leaders once fought the idea that the earth was round. Only when Columbus returned from the new world with coffee, chocolate, and gold did they accept the truth, in spite of the fact that it was easily observable in the natural world.

You cannot convert school #2 to school #1 with words alone. You have to bring them chocolate, coffee, and gold from the new world. The earth is flat until they experience the zeal of finding bugs for themselves, until they get tired of shipping software that “meets the requirements” but doesn’t work or have value. You have to show them what good software really means before they will abandon their hordes of script-monkeys and join you on the bug hunt.


Tags: Project Management

16 responses so far ↓

  • 1 Allen // Mar 4, 2008 at 1:28 pm

    So what are the good things that the Script Monkeys bring to the table?

    As the lead developer for several projects where the main quality of the code was based on test scripts that verified internal and external functions – I’m a strong advocate for them. I call the scripts the “anti-Allen code” because when I would go in to help one programmer fix his code and altered a central function, I would often inadvertently break something over in another section of the code.

    Without the test scripts it would be several weeks or months later that we would find the “unexplainable” bug that eventually ended up in an “ohhhhh I broke that way back when” moment. With the test scripts it was caught as soon as I made the change and hit the “Run” button. That way I reworked the central code to accommodate both needs.

    But those were big infrastructure apps with many thousands of rules – 5-15 man-year efforts. You keep emphasizing that you are on nimble projects.
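The kind of script Allen describes might look roughly like this – a minimal Python sketch with made-up function names, pinning down a shared “central function” so that a change made for one part of the code can’t silently break another:

```python
# Hypothetical example: a regression check for a central function shared
# by several parts of a large app. The expected results were recorded
# when the function was written; re-running the checks after any change
# catches accidental breakage immediately instead of months later.

def normalize_rate(rate):
    """Shared helper used by both billing and reporting code (made up)."""
    if rate < 0:
        raise ValueError("rate must be non-negative")
    return round(rate, 2)

def check_normalize_rate():
    assert normalize_rate(0) == 0.0
    assert normalize_rate(3.14159) == 3.14  # reporting relies on 2 decimals
    try:
        normalize_rate(-1.0)
        raise AssertionError("negative rates must be rejected")
    except ValueError:
        pass                                # billing relies on this guard

if __name__ == "__main__":
    check_normalize_rate()
    print("regression checks passed")
```

Run after every change to the shared code, a suite of checks like this turns the “ohhhhh I broke that way back when” moment into a failure at edit time.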


  • 2 Allen // Mar 4, 2008 at 1:40 pm

    By the way, we generally had the policy that each coder was his own script monkey. While he was writing his function, that was the moment he knew what the proper inputs and outputs were, and he coded them into individual function calls with expected results.

  • 3 Chris // Mar 4, 2008 at 4:07 pm

    There’s School #3, which I like: “The purpose of testing is to advise management on the level of risk attendant on releasing the software.” This means not only finding bugs (anyone can find bugs) but finding bugs that, when fixed, reduce the risk that the software release will be spoiled by inoperable software. The bug-finders who find lots and lots of minor and medium bugs but who miss the major bug that will arise and bite you in the behind once the software is released will kill your project stone-dead every single time, and at the worst possible place, too: after the software is in the hands of the users.

  • 4 Dennis Gorelik // Mar 4, 2008 at 5:19 pm


    1) What group is bigger [according to your observation]: “Bug Hunters” or “Specification Compliance Assurance”?

    2) Developers can fall into one of these categories too.

    3) The “anti-spam” measure (CAPTCHA) on this page is very bad. I lost my original comment because I entered the “wrong” keyword. Are you using a very old version of it?

  • 5 Scott Barber // Mar 4, 2008 at 7:47 pm

    It is my belief that all the different reasons for testing stem from exactly one thing.


    It’s my opinion that the various schools of thought that you (and others) identify and discuss are nothing more than second generation manifestations of some key stakeholder’s belief about what “the worst thing that could happen” related to the software is.

    It’s “second generation” because before it becomes a school of thought it’s been paraphrased and passed down through individuals, projects, and time.

    When folks get it first hand from the key stakeholder, it’s simply a testing mission.

    Consider this. “Find Bugs” is a reasonable manifestation of “Field Support & Call Centers are expensive”, “bad press is, well, bad”, and “Buggy software doesn’t sell.” “Prove the software performs as advertised” is a reasonable manifestation of “I don’t want to get sued”, “We don’t get paid if the software doesn’t meet spec”, and “If it doesn’t do what we said it would, that’s a lot of wasted advertising dollars AND bad press.”

    You’ve mentioned my notion of “Commander’s Intent” on Technology Dark Side before and I think this is all part of the same concept. Over time, “the industry” has generalized several of the most common interpretations of countless Commander’s Intents into a taxonomy of sorts, then forgot how that taxonomy evolved, then started teaching testers how to test and how to think about testing based on their interpretation of the way the taxonomy was summarized to them when they were learning about testing.

    Scott Barber
    Chief Technologist, PerfTestPlus, Inc.
    Executive Director, Association for Software Testing
    Co-Author, Performance Testing Guidance for Web Applications

  • 6 Dave Christiansen // Mar 5, 2008 at 12:56 pm

    Dennis asked these questions:
    1) What group is bigger [according to your observation]: “Bug Hunters” or “Specification Compliance Assurance”?

    School #2 seems to have more members. Since this school needs hordes of script monkeys, they recruit and indoctrinate lots of young testers.

    2) Developers can fall into one of these categories too.

    True enough.

    3) The “anti-spam” measure (CAPTCHA) on this page is very bad. I lost my original comment because I entered the “wrong” keyword. Are you using a very old version of it?

    Sigh. This is the first complaint I’ve received about this. I’ll look into it.

    Scott made a comment about commander’s intent. I have often made the point that process separated from purpose generally has no value. This is the case perhaps with both testing schools. I can see a use for school #2 in certain types of commercial software, but not for the vast majority of non-commercial software like that built by corporate IT.

    Unfortunately, in spite of this, you still see school #2 advocated in projects where it isn’t really justifiable. That’s because it has gone past simply being the implementation behind an intent and become a “best practice” that is separate from context. This is a bad thing, in my opinion. Intent/purpose is, I believe, the single most important aspect of process. Separating the two renders process worthless.

  • 7 Dennis Gorelik // Mar 5, 2008 at 5:25 pm

    1) The size of a group depends on how much money (budget) is available for this group.
    What do you think is the reason the “Specification Compliance Assurance” group gets a bigger budget?

    2) When you’re fixing the CAPTCHA — please also add an “email me follow-up comments” checkbox (or simply send follow-up comments if an email was provided).
    Without email notification I don’t know when my comment/question is addressed.

  • 8 IT’s About Uptime - The StackSafe Blog » Change Management by the Numbers // Mar 5, 2008 at 8:16 pm

    […] bind because they task their QA groups to build, test, and implement changes. And why not expect QA to handle this requirement? QA groups have unit test labs containing mature tools, they enjoy the rigor of test plans, and […]

  • 9 David Christiansen // Mar 5, 2008 at 8:48 pm

    Hi Dennis,
    To question #1, I think one of the reasons the specification compliance assurance crowd gets more money is that they need more money – it takes a lot of time to write and execute all those scripts. Plus, projects that take this approach are usually serial development (aka waterfall) projects that use the test phase as contingency. When they finally get to test (if they ever get there – waterfall has a horrible success rate) they are usually behind schedule, so the PMs and business sponsors try to crash the schedule by throwing bodies at the testing effort.

    For #2 – I’ll do my best. When I get a fix installed, will you test it for me? I need to do an overall WordPress upgrade anyway, so it might be a few days.

  • 10 TDD: It’s A People Problem | Programmer’s Paradox // Mar 5, 2008 at 9:53 pm

    […] I came across this eye opening post on testing.  I’d never thought about there being a fundamental difference in philosophies on testing […]

  • 11 Dennis Gorelik // Mar 5, 2008 at 10:32 pm

    1) All other things being equal, businesses prefer to pay less and get better results.
    Apparently, Waterfall provides businesses (or corporate bureaucracy) something valuable that iterative development doesn’t usually provide.
    Maybe it’s because Waterfall teams make promises that a client’s bureaucrat can use to pass the blame?

    2) I’ll test it for you.
    I’m considering switching to WordPress too (currently I use Blogger).
    What do you think: does it make sense to host the blog on your own web site, or is it better to host it on WordPress?
    That would keep your blog always up-to-date with the most recent WordPress features, right?

  • 12 David Christiansen // Mar 6, 2008 at 7:29 am

    I think the appeal of waterfall comes from two sources – the widespread belief that the cost-of-change curve is a law of nature (late changes cost a bazillion times more than early changes) and the happy, comfortable feeling that accompanies a comprehensive upfront plan (even if it is completely unrealistic).

    I host my own blog – I don’t know what it’s like to have WordPress host it for you. I’m not sure how you install plugins and themes when WP hosts it for you. Maybe they let you, but I’m not sure. I have always preferred to control my own environment, plus I like to play around with other technologies like Joomla, Moodle, etc., on my domain.

  • 13 Bob // Mar 6, 2008 at 11:37 am

    RE: school #2

    This one is vital for an independent developer trying to get paid for a piece of code. I’ve seen people go around in nasty circles because of this.

    – it has its place, just not so useful in a corp. setting

  • 14 David Christiansen // Mar 9, 2008 at 9:22 am

    Bob, I think you’re right that there may be a place for school #2 in many areas. The fact that it’s not useful to me doesn’t mean it’s completely useless.

    That said, I have a hard time justifying the cost of hordes of unskilled test script monkeys. There has got to be a better way to convince your clients that you’ve done what they asked you to do.

    But maybe I’m just fed up with the extreme implementation of this, having personally seen development projects with 30+ testers pounding scripts for months when there were only ever about 10 developers.

  • 15 mukalazi henry // Aug 3, 2009 at 2:05 pm

    To minimize the risk of the software not working up to an acceptable level.

  • 16 Peter // Nov 25, 2009 at 4:41 am

    Just a note on Columbus: he was not trying to prove that the earth was round, he was trying to prove that he could reach India faster by sailing West from Europe than the overland route through the Middle East.
    Other than that, very interesting article!
