Fault Localization via Visualization
Studies show that locating the faults that cause test-case failures is the most difficult and time-consuming part of the debugging process. Traditional debugging requires a software maintainer to step through the code, inspecting program state at each execution point. Because of the cost of these traditional methods, automation can likely offer savings in both time and money.
Using our technique, the program under test is instrumented to output statement coverage information for each execution, or test case. Instrumentation adds probes to the original program that record the execution (or non-execution) of each statement. A file is created for each test case that lists the statements that are executed by that test case.
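As a rough illustration of this kind of instrumentation, the sketch below records statement (line) coverage for a single Python function using the standard `sys.settrace` hook. The `mid` function and the tracer are hypothetical examples, not the tool's actual probes; in practice the instrumentation is compiled into the program under test, and one coverage record is produced per test case.

```python
import sys

def mid(x, y, z):
    """Example program under test: return the median of three values."""
    m = z
    if y < z:
        if x < y:
            m = y
        elif x < z:
            m = x
    else:
        if x > y:
            m = y
        elif x > z:
            m = x
    return m

def run_with_coverage(func, *args):
    """Run func on one test case, recording the set of executed line numbers."""
    executed = set()

    def tracer(frame, event, arg):
        # Record only line events inside the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

# One coverage record per test case, analogous to one file per test.
result, coverage = run_with_coverage(mid, 3, 3, 5)
```

Running every test case through such a wrapper yields one set of executed statements per test, which is the raw input the visualization consumes.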
This leads to a huge amount of data that is difficult to interpret manually. To make this data comprehensible, we have created a visualization method and a tool, Tarantula, that implements this visualization.
Tarantula displays each source code statement in a color that reflects the relative success rate of the test cases that execute it. Roughly, statements that are executed mostly by failed test cases become more red, and statements that are executed mostly by passed test cases become more green -- on a color spectrum from red to yellow to green. Statements shown as red are highly suspect and warrant further investigation as a potential cause of the test-case failures. Statements shown as green convey strong confidence in their correctness. Statements shown as yellow convey ambiguity, in that they are executed by both passed and failed test cases. Further exploration with the tool can reveal more information about these statements.
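One way to realize the red-to-green mapping described above is a per-statement suspiciousness score derived from coverage counts; the exact formula below follows the commonly cited Tarantula metric and is an assumption, since this page does not spell it out. A score of 1.0 maps to pure red, 0.0 to pure green, and 0.5 to yellow.

```python
def tarantula_suspiciousness(failed_cover, passed_cover,
                             total_failed, total_passed):
    """Score in [0, 1]: 1.0 = most red (suspect), 0.0 = most green.

    failed_cover / passed_cover: number of failed / passed test cases
    that execute this statement; total_failed / total_passed: totals
    for the whole test suite. (Assumed formulation of the metric.)
    """
    if failed_cover == 0 and passed_cover == 0:
        return 0.0  # statement never executed by any test
    fail_ratio = failed_cover / total_failed if total_failed else 0.0
    pass_ratio = passed_cover / total_passed if total_passed else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

# A statement executed only by the one failing test is fully red:
red = tarantula_suspiciousness(1, 0, 1, 3)      # 1.0
# A statement executed only by passing tests is fully green:
green = tarantula_suspiciousness(0, 3, 1, 3)    # 0.0
# Executed by all tests, passed and failed alike: yellow (0.5).
yellow = tarantula_suspiciousness(1, 3, 1, 3)   # 0.5
```

Normalizing by the totals keeps the score meaningful even when the suite has many more passing than failing tests, which is the usual case in practice.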