Comparing Single Tests

Each time a user presses the Test button in irAuthor, a new session tab is created in irVerify. Each session tab represents a compiled set of rules as they existed in irAuthor when the test was launched. A user could therefore author one set of rules and create a test session, then author more rules or change existing rules and create another test session. Applying rules in both session tabs shows how the differences in the authored rules affect the outcomes of the tests.

For example, consider the following simple walk-through to demonstrate the value of comparing single tests:

  1. A user begins with a rule application that consists of an input field, an output field, and no rules.
  2. The user creates a simple rule and presses Test.
  3. irVerify loads that rule application into a tab named Session1, and the user applies rules to verify the results.
  4. The user navigates back to irAuthor and adds another rule that contradicts the original rule. Pressing Test loads the new rule application into a tab named Session2.
  5. The user can switch between the active tabs to compare differences in the test data that result from modified or added rule elements.

Although the rules have changed, the tests in Session1 can be re-run against the original set of rules, and the results can be compared to those produced by the rule set in Session2.

Granted, the comparisons in this simple example are easy to make. However, comparing test data outputs between variations of larger rule applications can become cumbersome when working in only a single session.
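
For larger rule applications, one way to think about the comparison is as a diff of the output field values produced by each session. The following sketch is only a conceptual illustration of that idea; it is not part of irVerify or the InRule SDK, and the field names and values are hypothetical.

    # Hypothetical illustration: diffing the output fields produced by two
    # test sessions (for example, Session1 and Session2). Not an irVerify API.
    def compare_sessions(session1_outputs: dict, session2_outputs: dict) -> dict:
        """Return the output fields whose values differ between the two sessions."""
        differences = {}
        for field in sorted(set(session1_outputs) | set(session2_outputs)):
            original = session1_outputs.get(field)   # value under the original rules
            modified = session2_outputs.get(field)   # value under the modified rules
            if original != modified:
                differences[field] = (original, modified)
        return differences

    # Hypothetical output fields from the walk-through above.
    session1 = {"Discount": 0.10, "Approved": True}
    session2 = {"Discount": 0.00, "Approved": True}  # the contradicting rule changed Discount
    print(compare_sessions(session1, session2))      # {'Discount': (0.1, 0.0)}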