Applying tcltest to Tk Applications

Bob Techentin, Sharon Zahn, Barry Gilbert, Erik Daniel
Mayo Clinic, Rochester, MN

Abstract

This paper presents an approach for automating Tcl/Tk application testing using the tcltest package. Some Graphical User Interface (GUI) test automation tools rely on screen coordinates and screen captures to drive the program under test, while others drive it at the widget level. This new approach tests the application at the procedural level, yet supports starting and stopping the Tk windowing application multiple times in the same test file.

Summary

Tcltest is an established and robust package supporting automated regression testing. Its inherent portability, flexibility, and scriptability make it suitable for many platforms and environments. It can test libraries and programs written in many languages, and its speed makes it possible to execute thousands of tests per minute.

The tcltest framework supports running multiple test files, each of which usually contains from several to hundreds of individual test scripts. Tcltest options control, with a great deal of granularity, which test files and individual tests are run or skipped, but most test files run multiple tests in the same Tcl interpreter. Individual test scripts should therefore be coded to run independently of the others in the same file, so that a skipped or failed test does not cause the rest of the tests in the file to fail.

Independent tests are problematic when trying to exercise a Tk application. The application and its GUI usually carry a significant amount of state, which should be initialized at the beginning of each test and discarded after the test is complete. But Tk is not well suited to re-initialization, so it is not easy to include multiple application-level (or "system-level") test scripts in one test file.

There are other approaches to Tk application testing, which usually rely on driving the GUI from either the window manager or the GUI toolkit level. Test harnesses based on these GUI techniques typically start an instance of the application and then inject a series of recorded keyboard and mouse events to drive the application into a certain state. Test success or failure is determined by evaluating the screen, widget, or application state. A new test can restart the application and drive it to a different state.

These GUI-driven techniques have a distinct disadvantage: the test scripts usually have little in common with the code they are testing. A recorded list of keystroke and mouse events does not describe the test author's intent. In fact, most GUI tests must be recorded with a software tool, perhaps with documentation added later. Tcltest test scripts, on the other hand, are written by a tester and succinctly express the intent of the test and the expected results.

This paper presents a new technique for writing tcltest test suites that exercise Tk applications. Each test script starts the application in a slave interpreter, which loads Tk and runs the application. The test script has complete access to the internal API and state of the application through the slave interpreter's evaluation mechanism. When the test is complete, the slave interpreter is destroyed, destroying that instance of Tk. Each test script in the test file therefore gets a "clean" environment in a new slave interpreter.
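The following minimal sketch illustrates the idea; it is not the paper's actual code, and the application file name (myapp.tcl) and the procedures it defines (startApp, getCurrentRecord) are hypothetical placeholders.

    # Each test creates a fresh slave interpreter, loads Tk and the
    # application into it, and destroys it when the test is done.
    package require tcltest
    namespace import ::tcltest::*

    test app-1.1 {application starts with an empty current record} -setup {
        set slave [interp create]
        $slave eval {package require Tk}   ;# a fresh Tk instance in the slave
        $slave eval {source myapp.tcl}     ;# load the application under test
    } -body {
        $slave eval {startApp}             ;# drive the application's internal API
        $slave eval {getCurrentRecord}     ;# query application state directly
    } -cleanup {
        interp delete $slave               ;# destroys the slave and its Tk instance
    } -result {}

    cleanupTests

Because the setup, body, and cleanup scripts all talk to the slave, the test reads like ordinary application code, yet the Tk instance is created and torn down entirely within this one test.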
The paper will present the approach, source code, and examples for exercising an application API. Slave interpreter issues, such as defining argv and argc, will be addressed. An application architecture that supports testing with this technique will be outlined, and the advantages and disadvantages of the approach will be described and contrasted with those of other testing techniques.
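One such slave interpreter issue can be sketched briefly: an interpreter created with interp create does not define argv, argc, or argv0, so an application that reads its command line at startup needs them supplied before it is sourced. A minimal sketch, with a hypothetical file name and argument list:

    # A newly created slave has no ::argv, ::argc, or ::argv0; define them
    # before sourcing an application that reads its command line at startup.
    set slave [interp create]
    $slave eval {package require Tk}
    $slave eval {
        set argv0 myapp.tcl
        set argv  [list -datafile records.dat]
        set argc  [llength $argv]
    }
    $slave eval {source myapp.tcl}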