Cognitive Tutor Authoring Tools 2.0 > Using the Tools > Cognitive Tutors (Jess)

3. Cognitive Tutors (Jess)

This section describes the files that form a cognitive model and tutor, and the programming and debugging tools available in CTAT for working with a cognitive tutor.

3.1. Files and file types

Besides the student interface, a CTAT cognitive tutor problem consists of the following required files:

  • the production rules file; contains Jess production rules, each defined using the (defrule) construct.

  • wmeTypes.clp: the Jess templates file; contains the templates available in working memory, each defined using the (deftemplate) construct; can be generated by CTAT for the currently loaded student interface (Java or Flash) and behavior graph.

  • [problem-name].wme: the Jess facts file for the named behavior graph (BRD); contains an initial representation of working memory for the problem; can be generated by CTAT for the currently loaded student interface (Java or Flash) and behavior graph.

  • [problem-name].brd: the behavior graph for the problem; need only contain a start state node that describes the initial state of the problem: for tutoring, a full problem-solving graph is superfluous—the tutoring is provided by the model-tracing algorithm—but such a graph can be used for semi-automated testing and problem state navigation.

3.1.1. Production Rules file

The production rules file contains the Jess production rules that model student procedural knowledge and misconceptions.

Jess functions are typically defined at the top of the production rules file so that production rules can use them. Alternatively, functions can be defined in the Jess templates file, or in a separate file that is referenced via a (require*) call at the bottom of the templates file. See the Jess manual for more on the require* function and its counterpart, provide.
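For example, a small utility function might be defined as follows; the function name and body here are purely illustrative:

(deffunction add-two (?a ?b)
  "Illustrative helper; returns the sum of its two numeric arguments."
  (return (+ ?a ?b)))

If such functions live in a separate file — say myFunctions.clp, a hypothetical name — the templates file could end with a corresponding line:

(require* myFunctions "myFunctions.clp")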

CTAT loads the production rules file whenever a new behavior graph is loaded, or when the start state of a loaded behavior graph is clicked. You can also load the production rules file at any time while in CTAT.

To load (or reload) the production rules file:

  • Click Cognitive Model > Load Production Rules, or press CTRL+L on the keyboard.

3.1.2. Jess Templates file (.CLP)

The templates file, wmeTypes.clp, contains definitions of Jess templates. A template in Jess is a description of a fact type; every fact in working memory has a template. See the Jess reference documentation on templates for more on this construct.

CTAT loads this file when a behavior graph is loaded, or when the start state of a loaded behavior graph is clicked.

The templates file typically ends with the following line to notify Eclipse that Jess templates have been parsed:

(provide wmeTypes)

CTAT creates an initial set of Jess templates when the start state of a new graph is created. This representation is based on the visual elements—the widgets—that appear in the student interface. CTAT creates a Jess template for each widget type in the interface.
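As an illustration, a generated template for a text-field widget might look something like the following; the actual template name and slot names depend on your student interface and CTAT version:

(deftemplate textField
  "Illustrative template for a text-field widget; slot names are assumptions."
  (slot name)
  (slot value))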

In addition, CTAT defines the following templates in the templates file:

  • studentValues: contains slots for student selection, action, input; however, using the globals ?*sSelection*, ?*sAction*, ?*sInput*, which are populated at the start of each model-trace, may be more useful.

  • selection-action-input: a precursor to test-SAI; deprecated.

  • problem: represents the 'problem' as a collection of interface elements and subgoals. In Jess, it is also possible to match interface facts and subgoal facts directly, without using the problem fact.
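For example, a rule's LHS can bind an interface fact directly by its template, without going through the problem fact. The template and slot names below are assumptions for illustration only:

(defrule find-empty-field
  "Illustrative rule: matches any text-field fact whose value is still empty."
  (textField (name ?n) (value ""))
  =>
  (printout t "found empty field " ?n crlf))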

To save the Jess templates:

  • Select File > Save Jess Templates.


    Executing this command saves a Jess templates file named wmeTypes.clp, overwriting (if it exists) the current wmeTypes.clp file in the cognitive model folder. To preserve the existing wmeTypes.clp file, copy it to another directory or rename it before saving.

3.1.3. Jess Facts file (.WME)

The facts file contains the Jess facts that make up working memory for a given problem. CTAT loads this file (if it exists) when a behavior graph is loaded; otherwise, facts are created based on the current interface and templates. One facts file (with file extension .WME) should exist for each behavior graph (with file extension .BRD).


The facts file must have the same name as the behavior graph file (less the filename extension difference) for CTAT to load it.

CTAT creates an initial representation of working memory when the start state of a new graph is created. This representation is based on the visual elements—the widgets—that appear in the student interface. CTAT creates a Jess fact for each widget in the interface. This initial structure is typically expanded to account for non-visual structures or more complex facts (e.g., facts composed of other facts). The initial working memory contents are not saved to a file, but are loaded directly into working memory.

The facts file typically begins with the following line to specify to Eclipse the templates file to read:

(require* wmeTypes "wmeTypes.clp")
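The remainder of the file then establishes the initial facts. A minimal sketch, assuming the hypothetical textField template above; the exact form in which CTAT writes facts may differ:

(assert (textField (name "textField1") (value "")))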

To save the Jess facts that comprise working memory:

  • Select File > Save Jess Facts.


    Executing this command will save the facts currently in working memory to a file. If this file is given the same name as the behavior graph (less the BRD file-name extension), loading the graph (or clicking its start state) will trigger CTAT to load the Jess facts file, which may or may not contain a faithful description of working memory at the start of the problem. It is therefore possible for a problem to be in its start state while its working memory contents don't match the student interface.

3.1.4. Behavior Graph file

The behavior graph is the file that represents the starting state of a problem, and, optionally, the correct and incorrect steps that comprise student problem-solving behavior for that problem.

In a cognitive tutor (Jess), the start state of the behavior graph is the only required node: the start state provides the initialization information necessary for CTAT to create a Jess representation of the problem. Tutoring is provided to the student based on the cognitive model and model-tracing algorithm; therefore, no other graph information is required. A complete behavior graph, however, can be used for semi-automated testing.

3.2. Cognitive Tutor (Jess) Tools

CTAT provides a number of tools for planning, testing, and debugging cognitive models authored in Jess. These tools are:

  • Behavior Recorder: supports planning and testing cognitive models.

  • Working Memory (WME) Editor: used for cognitive model development; allows an author to inspect and modify the contents of the cognitive model's 'working memory'.

  • Conflict Tree: debugging tool that provides information on activation paths explored by the model-tracing algorithm, including partial activations; displays the rule-predicted and observed selection, action, input values.

  • Why Not?: launched from the Conflict Tree; [exploratory?] debugging tool that provides detailed information on rule activations and partial activations by displaying the values of variables referenced in the rule; includes an embedded Working Memory (WME) Editor for examining the values of working memory slots and facts.

  • Jess Console: command line for interacting directly with the Jess interpreter; helpful for carrying out debugging strategies not directly supported by CTAT.

Other features:

  • Breakpoints

  • Production rule editing in Eclipse

  • Test Production Model On All Steps (requires a behavior graph)

  • Auto-generation of initial working memory contents

3.2.1. Behavior Recorder

In addition to helping you to construct an Example-tracing Tutor, the Behavior Recorder can aid in cognitive model planning, development, and testing.

As a planning tool, the Behavior Recorder allows you to:

  • map the solution space, or the realm of student behavior for which the cognitive model should account; and

  • associate knowledge components with steps in the graph, which provides an idea of the quantity and quality of skills to be modeled as production rules.

As a testing tool, the Behavior Recorder allows you to:

  • perform semi-automated regression testing by checking a cognitive model against all states of the behavior graph; and

  • jump to states in the graph, moving both working memory and the student interface to the recorded state.

Planning with the Behavior Recorder

Before developing your cognitive model, consider creating a few representative Example-tracing problems with the student interface you're planning to use for the cognitive tutor. These graphs can be used for planning and testing as described below.

Annotate the steps of a behavior graph using knowledge component labels. By labelling the steps in the graph, you are performing a form of cognitive task analysis ([reference?]); you are determining how the overall problem-solving skill breaks down into smaller knowledge components. These knowledge components are also likely to be formalized as the production rules you will write, with each knowledge component corresponding to a production rule. By annotating the graph with knowledge component labels, you've identified the set of skills for which your model must account. (See [section blah] for more on creating knowledge component labels and viewing a knowledge component matrix.)

Testing with the Behavior Recorder

Behavior graphs can also serve as test cases for a cognitive model. In this way, a behavior graph is a specification for how the model should behave on the steps of the problem.

To test a cognitive model against a behavior graph:

  1. Load the behavior graph into the Behavior Recorder (File > Open Graph).

  2. Check that the cognitive model has loaded by entering the command (rules) in the Jess Console. The console should print the names of your production rules and end with a count of the total number of rules.

  3. Select Cognitive Model > Test Cognitive Model on All Steps, or press CTRL+T.

Two indicators will appear notifying you of the results of the test. The first is the test report window (shown below).

Figure 2.7. Production Model Test Report Window

Production Model Test Report Window

This report describes the results of a comparison between the graph's specification of correctness for an ordered list of steps and the model-tracing algorithm's evaluation of that same list of steps. Here, the term 'step' refers to a student action (technically represented by a selection-action-input triple).

The test operates by first determining the possible paths (or path) from the start state to the done state, the last state of the graph. For each of these paths, the model-tracing algorithm traces the path step by step and presents its evaluation (correct or incorrect) of each step to the test. The test compares the link type in the graph to the result of the model-tracing algorithm's trace. A comparison is consistent if the link type defined in the graph matches the evaluation by the model tracer; it is inconsistent if the two do not match, or if a state in the graph is unreachable by the model-tracing algorithm. A state is unreachable if it appears beyond a buggy (incorrect action) link in the graph, which the model-tracing algorithm does not trace past, or if it appears in the graph but is not traced by the model-tracing algorithm.

The report also references good and bad changes. As the report indicates, a 'good' change is from inconsistent to consistent; a 'bad' change is from consistent to inconsistent. This comparison is presented only if the Test Cognitive Model on All Steps command has been run previously during the authoring session. Typically, you run the test and, upon finding inconsistencies, modify the production rules and run the test again.

The second indicator is color changes to the links in the behavior graph. [TBS: how are colors determined? dotted for inconsistent? grayed for unreachable?]

To reset link colors modified by the test:

  • Select Cognitive Model > Reset Link Colors.

Lastly, the Behavior Recorder allows you to jump to recorded states, moving both working memory and the student interface to the desired state. To jump to a recorded state, click the desired state in the behavior graph. Note the updates to the student interface, to the working memory window, and to the conflict tree as the cognitive model is traced against the steps outlined in the graph.

3.2.2. Working Memory (WME) Editor

The working memory (WME) editor allows you to inspect and modify a cognitive model's working memory at any time.


Don't see the Working Memory (WME) Editor? Show it by selecting Windows > Show Window > WME Editor.

Figure 2.8. Working Memory (WME) Editor

Working Memory (WME) Editor

Working memory contents

The initial contents of working memory are displayed in the Working Memory (WME) Editor based on the existence of a .WME file and/or a wmeTypes.CLP file. If one or both of these files is missing, CTAT uses templates and facts that it has generated for the given problem from the student interface and the start state of the problem.

At any given state of a cognitive tutor, the templates and facts shown in the WME Editor reflect the contents of working memory after the step is model-traced.

Common Operations

Below are instructions for performing common operations on working memory using the Working Memory (WME) Editor.

To add a template to working memory:

  1. Right-click (Windows) or CTRL+click (Mac) anywhere in the Working Memory (WME) Editor's list of templates and facts.

  2. Select New Template.

To rename a template in working memory:

  1. Single-click the template that you'd like to rename.

  2. Enter a new name in the text field to the right of the word 'Template:'.

  3. Press the Enter key. The template listed in working memory will update to reflect the new name.

To add a fact to working memory:

  1. Right-click (Windows) or CTRL+click (Mac) the template of which your new fact will be an instance.

  2. Select New Fact.

To edit a fact in working memory:

  1. Single-click the fact in working memory that you'd like to edit.

  2. Enter a new value in the Slot Value column.

  3. Press the Enter key.


Changing a fact's Slot or Type value may have unexpected effects as slot and type are attributes defined by the template, not the fact. It is recommended that you edit and save the templates, or even the initial set of facts, but leave CTAT to manage the facts of a tutor problem.

3.2.3. Conflict Tree

The Conflict Tree is a debugging tool that shows you which rules correctly predicted the student's selection/action/input (S/A/I) and which rules fired, but only partially activated. Its purpose is to show the space of rules that were explored by the production system interpreter as it tried to find a "chain" or "path" of rules that correctly predicted the S/A/I. This space is always in the form of a tree.

The Conflict Tree displays rule activations in terms of 'chains' of rules formed during the model trace. In Jess model tracing, a chain is a point in the model-tracing search where the changes that one rule's firing makes to working memory cause another rule to fire. The chaining point is represented by a folder icon in the tree.

In addition, the Conflict Tree is the launching point for 'Why Not?' inquiries (e.g., 'I see that a rule did not fire at all, but why not?').

For a rule that predicts student S/A/I (via the test-SAI function on the right-hand side of the rule), the results of that prediction are shown to the left of the rule name in the columns S, A, and I. A green checkmark indicates that the selection, action or input was predicted correctly. A red X signifies that the selection, action or input was not predicted correctly. As soon as the production engine encounters a rule activation where the left-hand side (LHS) matches the asserted facts and the selection/action/input (S/A/I) were all correctly predicted, it fires that rule and stops in that state. Hence an entry in the conflict tree having three green checkmarks signifies the rule activation that fired and ended a successful model-tracing search for the student's action; the production engine's current state should reflect the actions of that rule's RHS.
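Schematically, such a rule might call test-SAI at the end of its RHS. In the sketch below, the template, slot names, action name, and the exact argument order of test-SAI are all assumptions for illustration; consult your cognitive model for the actual conventions:

(defrule enter-answer
  "Illustrative rule: predicts that the student types ?v into field ?n."
  (goal (kind enter-answer) (value ?v))
  (textField (name ?n) (value ""))
  =>
  (test-SAI ?n "UpdateTextField" ?v))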

To display predicted and observed selection, action, and input:

  • Click in any of the S, A, or I columns of the row displaying the production rule name you're interested in.

    A window will appear similar to the one depicted below. The first row shows values predicted by the production rule; the second row shows actual values observed in the student interface and performed by either the student or author.

Figure 2.9. Conflict Tree: Rule's predicted SAI vs. student's actual SAI

Conflict Tree: Rule's predicted SAI vs. student's actual SAI

A rule can be in any of the following states at a given point in the Conflict Tree:


This applies only to buggy rules during that part of the search where the tools are only trying for a successful match, when the buggy rules are actually removed using the rule engine's undefrule command.

Not Activated

The LHS did not match.

Activated, But Not Fired

The LHS matched, but the search ended before the model-tracing algorithm reached this rule. This can be confusing at first because these rule activations do not show up in the Conflict Tree; they do appear, however, when you run a Why Not? inquiry, which reports "LHS matched successfully".

Fired, Chained

The LHS matched and CTAT fired the rule, but its RHS did not generate a complete prediction for the student's S/A/I. If a partial prediction was made, it was correct. After one of these rule firings, the search "chains" (a term inherited from TDK); that is, it descends one more level. Depending on the result of that deeper search, we distinguish between the following two types.

Fired, Chained, Kept

The search is successful and the results of the rule's firing (i.e., the changes it made to working memory) are kept.

Fired, Chained, Undone

The search fails at a lower level and the effects of the rule's firing are undone (i.e., working memory changes are reverted).

Fired but Incorrect S/A/I prediction

CTAT has to undo the effects of this rule's firing since the S/A/I prediction was incorrect.

Fired, Correct and Complete S/A/I Prediction

This ends the search. The effects of the rule's firing are kept.

The Conflict Tree distinguishes only Not Activated ("Failure to match LHS"), Activated But Not Fired ("SUCCESSFUL MATCH OF THE LHS"), Fired But Incorrect S/A/I prediction ("Failed to Match SAI"), and Fired, Correct and Complete S/A/I Prediction.


  • A rule can be in a different state at different levels in the tree. In the addition tutor, the rule must-carry is Not Activated at the top but may be Fired, Correct and Complete S/A/I Prediction at the bottom of the tree.

  • There can be multiple activations for a rule at a single node. These correspond to multiple different fact combinations that matched the rule's LHS.

  • Just because a rule fires successfully does not mean the rule is correct. A rule may evaluate successfully and fire but have a logic error in it.

  • It is possible that rules at the beginning of a chain fire but rules at the end of the chain do not fire. In that case, you would get a window with rules shown in the Conflict Tree, but there would be no path of green rules from root to leaf node.

There are two ways the tutor can fail to match what the student did:

  • CTAT finds no applicable rules, or it finds rules you expect will match working memory but do not (see Why Not? Window below).

  • Your rules did match working memory, but the prediction you specified via test-SAI did not correspond to the student's action. In other words, your rule matched working memory (the action you predicted would be appropriate at that moment), but the student didn't take that action, so the student's action doesn't match your production rule.

3.2.4. Why Not? Window

You may have a rule that was partially activated (i.e., an instance of a rule where a rule has some, but not all, of its conditions satisfied) or a rule that did not activate at all. In these cases, you may want to see more details concerning the match. Alternatively, you may want to explore the details of a matched rule (one depicted with three green checkmarks in the conflict tree). The Why Not? window provides further detail about each search node depicted in the Conflict Tree.

To display a Why Not? Window:

  • For a rule that partially or fully activated, click its name in the Conflict Tree. For a rule that did not activate at all, click the node labeled Chain, and select its name.

Interpreting the Why Not? Window

The top third of the Why Not? window (Figure 2.10, “Why Not? Window: rule definition”) displays the production rule that you're examining. Here, all variables are given a background color, which corresponds to the color used in the table below the rule definition (Figure 2.11, “Why Not? Window: variable values”).

If you hover over a variable in the top window with your mouse cursor, a tooltip will appear displaying the variable's value (in the case of a simple variable) or table of fact information (in the case of a fact reference). Simultaneously, a black outline will appear around the corresponding variable row in the middle area.

Figure 2.10. Why Not? Window: rule definition

Why Not? Window: rule definition

Partial Activations shows the various activation attempts by Jess. Each activation is an attempt to match the LHS of the rule with the facts in working memory; all variables must be 'bound' (matched) for the rule to activate. If not all variables could be bound for a given attempt, that attempt results in a partial activation. Green indicates that all variables were bound successfully; red indicates that some variables were not bound.

Click a partial activation in the list box on the left to see the alternate mappings; the variables table to the right will update to reflect the particular activation, as will the highlighting in the rule definition window above.

Of particular importance is the line to the right of Partial Activations (depicted in Figure 2.11, “Why Not? Window: variable values”). Below, this line reads 'LHS matched'. When you click on a partial activation, this line will update to reflect the first disparity in the comparison between the rule's LHS definition and working memory.

Figure 2.11. Why Not? Window: variable values

Why Not? Window: variable values


You will often see more than one activation listed in Why Not? because the pattern-matching algorithm in the Jess rule engine almost always makes multiple attempts to bind variables, as it has to try different values for those variables.

The Embedded Working Memory (WME) Editor

In the bottom third of the Why Not? Window is an embedded Working Memory Editor. It operates identically to the standard Working Memory Editor with one important addition: it allows you to examine working memory before the rule fires or after the rule fires. You can switch between before and after states by clicking the radio buttons labeled Show Pre and Show Post.

3.2.5. Jess Console

The Jess console is a text window and command line for interacting directly with the Jess interpreter.
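Standard Jess commands can be typed at the prompt; for example (the rule name below is hypothetical):

(facts)             ; list the facts currently in working memory
(rules)             ; list the names of the loaded production rules
(ppdefrule my-rule) ; pretty-print the definition of a rule
(watch all)         ; echo activations, firings, and fact changes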

3.2.6. Breakpoints

You can define breakpoints for debugging a cognitive model.

To set breakpoints:

  1. Click Cognitive Model > Set Breakpoints.

  2. Select rule names on the left side of the Breakpoints window and use the > button to add that breakpoint.

    Remove breakpoints by selecting rule names on the right side of the Breakpoints window and using the < button to remove each breakpoint.

  3. Click Set to set the breakpoints.

To clear breakpoints:

  • Click Cognitive Model > Clear Breakpoints.

Figure 2.12. Defining Breakpoints

Defining Breakpoints