An important question is whether indexes can remain assigned to tokens over a saccadic eye movement. If index maintenance were based on purely local retinal processes, such as those proposed in the network models of Koch and Ullman (1985) or Acton (1993), it is hard to see how an index could keep tracking a token moving across the retina at up to about 800 degrees/second -- even putting aside saccadic suppression and smearing. The fastest covert attention movement reported in the literature -- and even this figure has been questioned as too high -- is 250 degrees/second (Posner, Nissen & Ogden, 1978). However, if index maintenance could exploit predictive information, such rapid tracking might be possible. There are two main sources of such predictive information. One is extrapolation from portions of the current trajectory, using some form of adaptive filtering with local data support, as proposed by Eagleson & Pylyshyn (1988, 1991). The other is extraretinal information, such as the efferent and afferent signals associated with the eye movement. The appeal to extraretinal signals has usually been associated with metrical superposition theories of visual stability. Such theories posit an internal metrical display or image onto which retinal information is superimposed in correct registration, based on some extraretinal (usually efferent) signal. Although this internal-image view has lost support in recent years (O'Regan, 1992; Bridgeman, Van der Heijden & Velichkovsky, 1994; Irwin, 1993; Intraub, Mangels & Bender, 1992), the role of extraretinal information in some aspect of transsaccadic integration has continued to be accepted (though some consider it relevant only to locating objects for purposes of motor coordination).
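The trajectory-extrapolation option can be illustrated with a minimal sketch. The code below is not the Eagleson & Pylyshyn adaptive-filter model itself but a generic stand-in for the same idea: a constant-velocity alpha-beta filter that updates an index's estimated position and velocity from recent local samples, then extrapolates (predicts without correction) across a brief input gap of the sort produced by saccadic suppression. All parameter values are hypothetical.

```python
def track(samples, dt, gap_steps, alpha=0.85, beta=0.3):
    """Track a 1-D retinal position (degrees) from a sequence of
    noisy samples taken every dt seconds, then extrapolate across
    gap_steps steps with no input, as during saccadic suppression.

    A generic alpha-beta (g-h) filter: an illustrative stand-in for
    the adaptive filtering proposed by Eagleson & Pylyshyn, not
    their actual model. Gains alpha and beta are hypothetical.
    """
    x, v = samples[0], 0.0          # position (deg), velocity (deg/s)
    for z in samples[1:]:
        x += v * dt                 # predict forward one step
        r = z - x                   # residual between sample and prediction
        x += alpha * r              # correct position estimate
        v += (beta / dt) * r        # correct velocity estimate
    for _ in range(gap_steps):      # gap: no retinal input, prediction only
        x += v * dt
    return x, v


# Usage: a token sweeping across the retina at 800 deg/s, sampled
# every millisecond, then a 30 ms gap with no input.
samples = [800.0 * 0.001 * i for i in range(50)]
x, v = track(samples, dt=0.001, gap_steps=30)
```

With noiseless constant-velocity input the filter converges, and the extrapolated position stays close to the true trajectory through the gap; the point of the sketch is simply that a predictive state (here, the velocity estimate) lets an index bridge an interval during which purely local retinal evidence is unavailable.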
If it can be shown that indexing survives saccades, it would provide an important mechanism for saccadic integration, compatible with current theorizing on the subject. For example, Irwin (1993) and others have shown that surprisingly little qualitative information is retained between saccades. Irwin (1995) has argued that even for location -- which is retained better than identity information -- only the position of 3-4 objects is retained from one fixation to another, and this is most likely to include the position to which the eye is about to saccade. Based on such observations, McConkie & Currie (1995) and Irwin, McConkie, Carlson-Radvansky, & Currie (1994) have argued that on each fixation only one significant benchmark is encoded, and on the next fixation a fast parallel search attempts to identify that benchmark, which is then used to calibrate the location in space of the other items in that fixation. However, the relocation-by-features idea seems implausible, even if it could be accomplished in the short time available, since it ought to lead to frequent and significant errors when the scene is uniformly textured or otherwise free of unique features (see proposed study EM3). The process would be computationally simpler and more reliable if a small number of significant features could be tracked through the saccade to provide the anchors by which schematically encoded perceptual information could be integrated from fixation to fixation, in the manner suggested by Pylyshyn (1989) and others (e.g., Intraub, Mangels & Bender, 1992).