Software testing can’t make your code 100% bug free, but it gives more confidence that the code has fewer bugs.
Software Testing Topics
Functional vs. non-functional testing
- Functional testing verifies a specific action or function of the code
  - Can the user do this?
  - Does this particular feature work?
- Non-functional testing may not be related to a specific function or user action (e.g., scalability or security)
  - How many people can log in at once?
  - How easy is it to hack this software?
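To make the distinction concrete, here is a minimal sketch in Python. The `can_log_in` function and its rule are hypothetical stand-ins for real authentication code, and the one-second performance budget is an arbitrary illustration:

```python
import time

# Hypothetical login check -- a stand-in for a real authentication function.
def can_log_in(username, password):
    # Toy rule: any non-empty username with the matching demo password succeeds.
    return bool(username) and password == "secret"

# Functional test: "Can the user do this?" -- verifies a specific behavior.
def test_login_functional():
    assert can_log_in("alice", "secret") is True
    assert can_log_in("alice", "wrong") is False
    assert can_log_in("", "secret") is False

# Non-functional test: performance -- not *what* it does, but *how well*.
def test_login_performance():
    start = time.perf_counter()
    for _ in range(1000):
        can_log_in("alice", "secret")
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0  # arbitrary budget: 1000 checks in under a second

test_login_functional()
test_login_performance()
```

The functional test would fail if the feature is broken; the non-functional test could fail even when the feature works correctly but too slowly.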
Not all software defects are caused by coding errors. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.
- Requirement gaps (unrecognized requirements)
- Non-functional requirements: testability, scalability, maintainability, usability, performance, and security
- Important to find faults early: Requirements -> Architecture -> Construction
- Testing is the last step before release; it is important to find defects now, since fixing them post-release can be 10x more costly
- A frequent cause of software failure is incompatibility with another application, a new operating system, or a new web browser version
- Testing under all combinations of inputs and preconditions is not feasible
- Non-functional quality (how it is supposed to be, versus what it is supposed to do)
- Scalability, performance, compatibility, reliability
- Static testing: reviews, walkthroughs, or inspections
  - Unfortunately often omitted
- Dynamic testing: executing programmed code with a given set of test cases
  - Can start before the program is complete (testing modules or discrete functions)
  - Uses test drivers or a debugger
- Verification: have we built the software right? (i.e., does it match the specification?)
- Validation: have we built the right software? (i.e., is this what the customer wants?)
- Testing team roles: manager, test lead, test designer, tester, automation developers, and test administrator
- Examine and change the software engineering process itself to reduce the number of faults (the defect rate)
- Mission critical vs. non-mission critical
  - Banking software vs. MS Calc
The Box Approach
Black box testing vs. white box testing are different point of views that a test engineer takes when designing test cases.
Black Box Testing
Black Box Testing is done without any knowledge of internal implementation.
- Equivalence Partitioning
  - Divides the input data into partitions of data from which test cases can be derived
  - Test cases are designed to cover each partition at least once
  - Example (input: months expressed as integers; the input parameter 'month' has the following partitions):
    month <= 0 (invalid partition 1) | 1 to 12 (valid partition) | month >= 13 (invalid partition 2)
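A sketch of this partitioning in Python; the validator and the representative values are illustrative:

```python
# Equivalence partitions for a 'month' input (integers):
#   invalid partition 1: month <= 0
#   valid partition:     1 <= month <= 12
#   invalid partition 2: month >= 13
def is_valid_month(month):
    return 1 <= month <= 12

# One representative test case per partition covers each class once:
# any value inside a partition is assumed to behave like every other.
partition_representatives = {
    "invalid_low": -5,   # stands for all values <= 0
    "valid": 7,          # stands for all values in 1..12
    "invalid_high": 20,  # stands for all values >= 13
}

assert is_valid_month(partition_representatives["valid"])
assert not is_valid_month(partition_representatives["invalid_low"])
assert not is_valid_month(partition_representatives["invalid_high"])
```

Three test cases thus cover an infinite input domain, under the assumption that the code does not treat values within one partition differently.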
- Boundary Value Analysis
  - Tests are designed to include representatives of boundary values: values on the edge of an equivalence partition, or the smallest value on either side of an edge. These are common locations for errors that result in software faults, so they are frequently exercised in test cases.
  - Example (input: months expressed as integers, same partitions as in equivalence partitioning); pick values before, on, and after each boundary:
    0, 1, 2 (around the lower boundary of the valid partition) and 11, 12, 13 (around the upper boundary)
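The same illustrative month validator, now exercised at the boundaries rather than with one value per partition:

```python
def is_valid_month(month):
    return 1 <= month <= 12

# Boundary value analysis: test just before, on, and just after each edge.
# The valid partition's lower edge is 1 and its upper edge is 12.
boundary_cases = {
    0: False,   # just below the lower boundary
    1: True,    # on the lower boundary
    2: True,    # just above the lower boundary
    11: True,   # just below the upper boundary
    12: True,   # on the upper boundary
    13: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_month(value) == expected
```

A classic off-by-one bug such as `1 < month <= 12` would pass the mid-partition test (7) but be caught immediately by the boundary case for 1.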
- All-Pairs Testing
- For each pair of input parameters to a system, tests all possible discrete combinations of values for that pair. Tests can be parallelized.
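As a sketch of the idea, the suite below covers every pair of values for three boolean parameters using only 4 of the 8 possible combinations. The covering array is hand-picked for illustration, not produced by a real pairwise tool:

```python
from itertools import combinations, product

# Exhaustive testing of three boolean parameters needs 2**3 = 8 cases,
# but this 4-row covering array still exercises every value pair for
# every pair of parameters.
pairwise_suite = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def covers_all_pairs(suite, num_params=3, values=(0, 1)):
    """Check that every pair of parameters sees every combination of values."""
    for i, j in combinations(range(num_params), 2):
        seen = {(row[i], row[j]) for row in suite}
        if seen != set(product(values, repeat=2)):
            return False
    return True

assert covers_all_pairs(pairwise_suite)
assert len(pairwise_suite) < 2 ** 3  # fewer tests than the full cross product
```

The savings grow quickly: with many parameters, a pairwise suite can be orders of magnitude smaller than the full cross product while still catching most interaction faults.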
- Fuzz Testing
- Provides invalid, unexpected, or random data to the inputs of a program, and watches for failures (e.g., crashes)
- Areas of interest: file formats, networking protocols, environment variables, keyboard/mouse events, sequences of API calls, and even parts not usually considered “input”: databases, shared memory, the precise interleaving of threads
- Inputting random stream of bits to applications
- Can find memory leaks (useful in languages like C/C++)
- Negative testing
- Security: Interesting areas cross trust boundaries
- A bug-finding tool rather than a quality assurance technique
- Increases security and safety
- Creates tests with odd data
- Can find exploitable bugs
- Needs mature specifications, as fuzzing is based on them
- Proprietary protocols make it difficult to generalize fuzzing methods
- Code coverage can be poor
- Simple faults
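A toy fuzzer sketch illustrating the loop: feed random data, ignore documented failures, record everything else as a finding. The `parse_record` target is hypothetical; real fuzzers such as AFL mutate inputs and instrument the target far more cleverly:

```python
import random

# Hypothetical parser under test -- stands in for any code that accepts
# untrusted input (file formats, network messages, ...).
def parse_record(data: bytes):
    if len(data) < 2:
        raise ValueError("too short")  # documented failure mode
    length = data[0]
    return data[1:1 + length]

def fuzz(target, iterations=1000, seed=42):
    """Feed random byte strings to `target` and collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            target(blob)
        except ValueError:
            pass  # expected, specified rejection of bad input
        except Exception as exc:  # anything else is a finding
            crashes.append((blob, exc))
    return crashes

# This parser handles arbitrary bytes gracefully, so no crashes are found.
assert fuzz(parse_record) == []
```

A target that indexed `data[1]` without a length check, by contrast, would crash with an `IndexError` on short inputs, and the fuzzer would surface it within a few iterations.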
- Model-Based Testing
- Test cases are derived from a model that describes (usually functional) aspects of the system
- Traceability Matrix
- Table that correlates two baselined documents (e.g., requirements and test cases) to determine the completeness of the relationship between them
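A traceability matrix can be as simple as a mapping from requirements to the test cases that cover them; the IDs below are hypothetical:

```python
# Toy traceability matrix: requirement IDs mapped to covering test cases.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # gap: a requirement with no test coverage
}

def uncovered_requirements(matrix):
    """Completeness check: requirements that no test case traces back to."""
    return [req for req, tests in matrix.items() if not tests]

assert uncovered_requirements(traceability) == ["REQ-3"]
```

The same table read column-wise can flag orphan tests that trace to no requirement, and it shows which tests to rerun when a requirement changes.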
- Exploratory Testing
- Combines learning, test design, and test execution simultaneously
- Less preparation
- Important bugs found quickly
- do not have to complete a series of scripted tests
- Dependent on testing skills
- Tests can’t be reviewed (unlike scripted testing)
- Reproducing the test is difficult
- Use when requirements and specifications are incomplete or if there is a lack of time.
- Specification-Based Testing
- Test the functionality of software according to requirements
- No ‘bonds’ with the code; the tester maintains a pure “tester” mentality
- Walking in the dark (the tester doesn’t understand how the code works)
- The tester could over-test some areas of the code or miss others
White Box Testing
Requires access to internal data structures and algorithms.
Types of white box testing
- API Testing – Testing of the application using Public and Private API calls
- Code Coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault Injection methods – improving the coverage of a test by introducing faults to test code paths
- Mutation Testing methods – modifying the program’s source code in small ways; any tests that still pass after the code has been mutated indicate deficiencies in the test suite.
- Static Testing – white box testing includes all static testing
White box testing methods can also be used to evaluate the completeness of a test suite created with black box testing methods.
Grey Box Testing
Access to internal data structures and algorithms for the purposes of designing the test cases, but testing is done at the black-box level
- May include reverse engineering
Testing Levels
Tests are grouped by where they are added in the software development process, or by the level of specificity of the test
Unit Testing
Tests that verify the functionality of a specific section of code, usually at the function level
- May have multiple tests for one function
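A unit-testing sketch using Python’s `unittest` module, with several tests for one function; the leap-year function is just an example target:

```python
import unittest

def leap_year(year):
    """Function under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Multiple unit tests for one function, each pinning down a distinct rule.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(leap_year(2000))

    def test_ordinary_year(self):
        self.assertFalse(leap_year(2023))

if __name__ == "__main__":
    unittest.main()
```

Each test isolates one clause of the specification, so a failure points directly at the rule that was broken.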
Integration Testing
Seeks to verify the interfaces between components against a software design
- Individual software modules are combined and tested as a group
- Purpose is to verify functional, performance, and reliability requirements placed on major design items
- Occurs after unit testing and before system testing
System Testing
Tests a completely integrated system to verify that it meets its requirements
System Integration Testing
Verifies that a system is integrated to any external or third party systems defined in the system requirements
Regression Testing
Focuses on finding defects after a major code change has occurred
- Seeks to find old bugs that have come back
Acceptance Testing
Black-box testing performed on a system prior to its delivery
Alpha Testing
Simulated or actual operational testing by potential users/customers or an independent test team
Beta Testing
Released to a limited audience outside of the programming team
Non-Functional Software Testing
Performance Testing (including Load Testing)
Checks whether the software can handle large quantities of data or users
Stability Testing
Checks whether the software can continuously function well in or over an acceptable period
Usability Testing
Checks whether the user interface is easy to use and understand
Security Testing
Essential for software that processes confidential data, to prevent system intrusion by hackers
Internationalization and Localization
Needed to test these aspects of software, for which pseudo-localization can be used as a method
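A minimal pseudo-localization sketch. The accented character map, the `~` padding, and the 30% expansion factor are arbitrary illustrative choices; the point is that untranslated or truncation-prone UI strings become visually obvious:

```python
# Map plain vowels to accented look-alikes so text stays readable but
# clearly "translated"; anything still plain English was missed.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudolocalize(text, expansion=0.3):
    """Accent, pad, and bracket a UI string for i18n testing."""
    accented = text.translate(ACCENTED)
    # Padding simulates languages that run longer than English,
    # exposing layouts that truncate or clip.
    padding = "~" * max(1, int(len(text) * expansion))
    return f"[{accented}{padding}]"

assert pseudolocalize("Save file") == "[Sàvé fîlé~~]"
```

If the brackets are cut off in the UI, the layout truncates long translations; if a string appears without accents, it bypassed the localization pipeline entirely.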
Destructive Testing
Attempts to cause the software or a sub-system to fail, in order to test its robustness
The Testing Process
Traditional CMMI or waterfall development model
- Testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer
- Schedule slips can compromise the time devoted to testing
Agile or Extreme Programming model
- Test-driven software development: tests are written before or alongside the code they verify
A sample testing cycle (waterfall development model)
- Requirements analysis
- Test planning
- Test development
- Test execution
- Test reporting
- Test result analysis
- Defect retesting
- Regression testing
- Test closure
Automated Testing
- Used in test-driven development
- Continuous Integration software will run tests automatically every time code is checked into a version control system
Testing/debug tools include features such as:
- Program monitors, permitting full or partial monitoring of program code:
- Instruction Set Simulator – complete instruction level monitoring and trace
- Program animation – step-by-step execution and conditional breakpoints
- Code coverage reports
- Formatted dump or Symbolic Debugging – inspection of program variables
- Automated functional GUI testing tools are used to repeat system-level tests through GUI
- Benchmarks – run-time performance
- Performance analysis (or profiling tools) – highlighting hot spots and resource usage
- ISO 9126: functionality, reliability, usability, efficiency, maintainability, and portability
- Test plan – test specification
- Traceability matrix – correlates requirements or design documents to test documents. It is used to change tests when the source documents are changed, or to verify that the test results are correct
- Test case – id, requirement references from a design specification, preconditions, events, a series of steps to follow, input, output, expected result, and actual result.
- Test script – combination of a test case, test procedure, and test data
- Test suite – collection of test cases
- Test data – multiple sets of values or data are used to test the same functionality of a particular feature.
- Test harness – software, tools, samples of data input and output, and configurations
Does it work when it gets good data?
Does it work when it gets bad data?
How does it behave when somebody tries to break it (usually with bad data)?
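Applying the three questions to one hypothetical function, a parser for comma-separated positive integers:

```python
# Hypothetical function under test: parse a comma-separated list of
# positive integers, rejecting anything else with a documented error.
def parse_positive_ints(text):
    if not isinstance(text, str):
        raise TypeError("expected a string")
    values = [int(part) for part in text.split(",")]
    if any(v <= 0 for v in values):
        raise ValueError("all values must be positive")
    return values

# 1. Does it work when it gets good data?
assert parse_positive_ints("1,2,3") == [1, 2, 3]

# 2. Does it work when it gets bad data?
try:
    parse_positive_ints("1,-2,3")
except ValueError:
    pass  # rejected cleanly, as specified

# 3. How does it behave when somebody tries to break it?
for hostile in (None, "1,1,1,1,1,x", ""):
    try:
        parse_positive_ints(hostile)
    except (TypeError, ValueError):
        pass  # fails safely with a documented error, never silently
```

The goal is that every input, however hostile, produces either a correct result or a specified error; any other outcome (a crash, a wrong answer, a hang) is a defect.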