Thursday, November 5, 2009

Software Testing Overview

Types of Test Cases
Ideally, you want to abstract your tests far enough that you can apply them in different situations, but before you can do that you must have a clear picture of what could go wrong. Once you have a clear picture of the problem space, you can create automated tests.

Happy/Positive Case
How the application or method behaves when given good input (errors or exceptions should not happen).
Example

int divide( 10, 2 ) // 10 / 2, should return 5

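A minimal sketch of what that could look like as an automated unit test (JUnit 4 is assumed here, and the divide method is a hypothetical stand-in for the method under test):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DivideHappyPathTest {

        // Hypothetical stand-in for the method under test.
        static int divide(int dividend, int divisor) {
            return dividend / divisor;
        }

        @Test
        public void divideTenByTwoReturnsFive() {
            // Good input: the call should succeed with no errors or exceptions.
            assertEquals(5, divide(10, 2));
        }
    }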

Negative Case
How the application or method behaves when given bad input (errors or exceptions should happen).
Example

int divide( 1, 0 ) // 1 / 0 should return error or throw exception

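A sketch of the same idea as an automated test (again assuming JUnit 4 and a hypothetical divide method); the test only passes if the bad input is actually rejected:

    import org.junit.Test;

    public class DivideNegativeTest {

        // Hypothetical stand-in for the method under test; integer division
        // by zero throws ArithmeticException in Java.
        static int divide(int dividend, int divisor) {
            return dividend / divisor;
        }

        // The test passes only if the expected exception is thrown.
        @Test(expected = ArithmeticException.class)
        public void divideByZeroThrows() {
            divide(1, 0);
        }
    }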

Probe Case
How the application or method behaves when someone tries to maliciously manipulate the application. It is also important to note what the application is doing: is it a notepad program (relatively low damage) or the registry editor (relatively high damage)? Does it access passwords (relatively high damage)? Does it handle a character's money, level, or stats in a multiplayer video game (relatively high damage) versus an offline single-player game (relatively low damage)? The list goes on.
Example

void foo( garbageData, garbageData2 ) // Could one pass bad data and exploit the application?

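A sketch of a probe-style test: hammer the method with hostile data and confirm it rejects the data rather than crashing or acting on it. The foo method, its signature, and the garbage inputs below are all assumptions for illustration:

    import static org.junit.Assert.assertFalse;

    import org.junit.Test;

    public class FooProbeTest {

        // Hypothetical stand-in for the method under test: returns true only
        // when both inputs pass a whitelist check, false when the data is rejected.
        static boolean foo(String data, String data2) {
            return data != null && data2 != null
                    && data.matches("[A-Za-z0-9]{1,64}")
                    && data2.matches("[A-Za-z0-9]{1,64}");
        }

        @Test
        public void garbageInputIsRejectedNotExploited() {
            String[] garbage = {
                null,                                             // missing data
                "",                                               // empty data
                "'; DROP TABLE users; --",                        // injection-style payload
                "\u0000\u0001\u0002",                             // control characters
                new String(new char[100000]).replace('\0', 'A')   // oversized input
            };
            for (String bad : garbage) {
                // The method must refuse the data, not crash or be exploited.
                assertFalse(foo(bad, bad));
            }
        }
    }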

Usability Test
Not everyone thinks the same, especially when it comes to the programmer and the user. How does the application or method behave for other people? Does the application behave the way the user would anticipate?

Example: Testing a window

Given: Test Notepad's Find window/dialog (yes, this is purposely vague)

1. Read Documentation
Get the documentation for this window. This will answer "How should the window behave?" and will generate the initial questions.

2. Make Test Table

To organize our test cases, we'll create a table of all possible tests. We'll start by labeling the column headings:
  • Name / Area / Component
  • ID
  • Description
  • Requirements
  • Test Category
  • Author
  • Automated
  • Pass/Fail
  • Remarks

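If you want to keep the table in code rather than a spreadsheet, a simple record class can mirror these columns. This is a minimal sketch of one way to hold a row; the field names simply echo the headings above:

    // Sketch of one row of the test table; each field mirrors a column heading.
    public class TestCaseRecord {
        String name;          // Name / Area / Component, e.g. the "Find Next" button
        String id;            // ID
        String description;   // Description of what the test checks
        String requirements;  // Requirements the test traces back to
        String category;      // Test Category: PT, NT, PrT, or UT
        String author;        // Author
        boolean automated;    // Automated?
        Boolean passFail;     // null until run, then true (pass) or false (fail)
        String remarks;       // Remarks
    }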

Make a list of things we see:
  • Window
  • "Find" Window Title
  • Window close button
  • "Find what:" label
  • Find text box
  • "Find Next" button
  • "Cancel" button
  • "Match case" check box
  • "Direction" group
  • "Up" and "Down" radio buttons
  • Accelerators on "Find what:", "Find Next", "Match case", "Up", "Down"
  • Window theme
  • Layout & Layout behavior
Other items that we didn't see here but could have been tested: Drop Down List Boxes, Combo Boxes, List Boxes, menu bar, scroll bars, tabs.

Now, compile into the table (the cells stay blank until the tests are written):

Name | Description | Requirements | Category | Author | Automated | Pass/Fail | Remarks
Window | | | | | | |
"Find" Window Title | | | | | | |
Window Close Button | | | | | | |
"Find what:" label | | | | | | |
Find text box | | | | | | |
"Find Next" button | | | | | | |
"Cancel" button | | | | | | |
"Match case" check box | | | | | | |
"Direction" group | | | | | | |
"Up", "Down" radio buttons | | | | | | |
Accelerators on "Find what:", "Find Next", "Match case", "Up", "Down" | | | | | | |
Theme, Layout, Layout behavior | | | | | | |
3. Write Tests
For each item in the table, write the tests: Positive Test (PT), Negative Test (NT), Probe Test (PrT), and Usability Test (UT).

Fill out the table with the tests.
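As a sketch of what a few filled-in rows might look like for a single item, here is one possible set of tests for the "Find Next" button; the expected behaviors are assumptions for illustration, not taken from Notepad's documentation:

    public class FindNextTestIdeas {
        public static void main(String[] args) {
            // Each row: item under test, test category, short description.
            String[][] rows = {
                { "\"Find Next\" button", "PT",
                  "With a match in the document, clicking selects the next occurrence." },
                { "\"Find Next\" button", "NT",
                  "With no match, a 'Cannot find' message appears and the selection is unchanged." },
                { "\"Find Next\" button", "PrT",
                  "Pasting a very long or control-character search string does not crash the dialog." },
                { "\"Find Next\" button", "UT",
                  "Pressing Enter in the Find text box behaves the same as clicking the button." }
            };
            for (String[] row : rows) {
                System.out.println(row[0] + " | " + row[1] + " | " + row[2]);
            }
        }
    }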

4. Beta Test
Once you are confident in your work and the tests maintain the invariants specified in the documentation, ask others for input. What does the developer think (if you didn't develop the item under test), or a fellow developer? What does your lead think? What does the customer think?

5. Ship
Ship. Rinse. Repeat. As Jeff Atwood says (playing off Glengarry Glen Ross), Always Be Shipping. Assuming your application is now a solid product, sometimes the best testing is done by the community. Their criticism, tinkering, and tearing apart of your application may be the only way to get the experience and evolution your application needs (or at least the drive to figure it out!).
