Aks, Haladjian, Pylyshyn, & Hakkinen (2009). Multiple Object Tracking with blink-contingent scene changes. Vision Sciences Society 2009, Naples, FL.
Visual Indexing Theory proposes a referential mechanism that tracks objects in a visual scene without necessarily encoding their properties, as demonstrated through Multiple Object Tracking (MOT) experiments. The encoding of location information during tracking, however, remains a possible exception. In the current studies, we recorded eye movements during a standard MOT task and employed a blink-contingent methodology in which objects stopped moving or disappeared during eye-blinks. Because these scene changes were synchronized with eye-blinks, we could examine natural, intrinsically generated disruptions of the visual scene without inadvertently cueing the change.
Experiment 1 examined the effect of changes in object motion occurring during spontaneous eye-blinks. Subjects performed a standard MOT task (4 targets, 8 non-targets). In half of the trials, the objects halted for the duration of every blink; these trials were randomly interleaved with trials in which the objects continued moving. The results indicate that blink-contingent halting of objects produces fewer tracking errors, although this effect diminishes with practice. In the final block, fewer fixations also correlated with better tracking performance.
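The blink-contingent manipulation amounts to a per-frame update rule: when the eye tracker reports a blink and the trial is a halt trial, object positions are frozen until the blink ends. The sketch below is a minimal illustration of that rule, not the authors' implementation; all function and parameter names are assumed.

```python
def step_objects(positions, velocities, blink_active, halt_trial, dt=1.0 / 60):
    """Advance MOT objects by one frame.

    In a halt trial, motion is suspended for the full duration of the blink,
    so the scene that reappears after the blink matches the last one seen.
    """
    if halt_trial and blink_active:
        return list(positions)  # freeze: no displacement during the blink
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(positions, velocities)]

# Illustrative usage: one object moving rightward at 60 px/s.
pos = [(100.0, 100.0)]
vel = [(60.0, 0.0)]
frozen = step_objects(pos, vel, blink_active=True, halt_trial=True)   # unchanged
moved = step_objects(pos, vel, blink_active=True, halt_trial=False)   # advances
```

Tying the halt to the blink itself (rather than to an external event) is what prevents the scene change from acting as an exogenous cue.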
Experiment 2 tested for location encoding during MOT when objects disappeared during more natural interruptions (rather than occlusions or abrupt disappearances). We replicated the main features of Keane & Pylyshyn (2006), except that occlusions were replaced with blink-contingent disappearances: a simple tone signaled subjects to blink once during each trial. This voluntary blink triggered a disappearance of the objects (for 150, 300, 450, or 900 ms), during which they either halted or continued along their trajectories. The results revealed superior tracking performance in the halt condition, with performance in both conditions declining as disappearance duration increased.
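Experiment 2's design fully crosses the two motion conditions with the four disappearance durations. A hypothetical trial-list generator for such a design is sketched below; the condition labels come from the abstract, but the repetition count and all identifiers are assumptions for illustration only.

```python
from itertools import product
from random import Random

MOTION = ("halt", "continue")   # object motion during the blink-triggered gap
GAP_MS = (150, 300, 450, 900)   # blink-contingent disappearance durations (ms)

def build_trials(reps=4, seed=0):
    """Fully cross motion condition and gap duration, replicate, and shuffle."""
    trials = [{"motion": m, "gap_ms": g}
              for m, g in product(MOTION, GAP_MS)] * reps
    Random(seed).shuffle(trials)
    return trials

trials = build_trials(reps=4, seed=0)  # 2 x 4 design, 4 reps -> 32 trials
```

Shuffling the crossed conditions within a session keeps the upcoming motion/gap combination unpredictable on any given trial.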
Overall, our results suggest that location information and trajectory extrapolation are not crucial for tracking. When abrupt changes in a scene are visually detected, the most recently sampled location may be retrieved.
Keane, B. & Pylyshyn, Z. (2006). Is motion extrapolation employed in multiple object tracking? Tracking as a low-level, non-predictive function. Cognitive Psychology, 52(4), 346-368.
Haladjian, Pylyshyn, & Kugel (2009). Multiple Object Tracking through temporal gaps created by the fading of objects. Vision Sciences Society 2009, Naples, FL.
In three experiments, we examine whether the encoding of object location is used in Multiple Object Tracking. Observers tracked four target discs among eight identical distractors on a display in which the same random-dot texture was used for both the object surfaces and the background. Stereoscopic glasses created two display conditions: 3D (objects appeared to float in front of the background texture) and 2D (objects appeared on the background texture). In the 2D displays, discs were visible only while moving and became indistinguishable from the background when they stopped. In 75% of the trials, the objects halted mid-trial for one, two, or four seconds.
Experiment 1 used textured discs with no borders. During the pauses, the discs would appear to dissolve into the background in the 2D condition but remained distinct in the 3D condition. This produced significantly lower tracking performance only in the 2D trials with the longest pause; no decline was observed in the 3D condition.
Experiment 2 was identical to Experiment 1, except the discs had a white border during the entire trial, allowing the discs to remain distinct during halts. In this case there was no effect of pause duration.
Experiment 3 used the same 2D display as Experiment 1, except that in half of the trials object borders flashed “on” before halting. Here, there was an effect of pause duration in both flash and non-flash conditions (decreased performance with longer duration).
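Across the three experiments, the visibility manipulations reduce to a simple rule: in the 2D condition an unbordered disc is camouflaged whenever it stops moving, while a white border (Experiment 2) or the 3D depth cue keeps a halted disc segregated from the texture. A schematic sketch of that rule, with assumed names:

```python
def disc_visible(moving, condition="2D", bordered=False):
    """Return whether a texture-matched disc is distinguishable from the background.

    In the 3D condition the disc floats in front of the texture, and a white
    border keeps it segregated even when halted; otherwise a stationary disc
    dissolves into the identical random-dot background.
    """
    if condition == "3D" or bordered:
        return True
    return moving

# Experiment 1, 2D condition: unbordered discs vanish during the mid-trial halt.
# Experiment 2: the persistent border keeps halted discs visible throughout.
```

Experiment 3's flash manipulation adds an abrupt transient at the moment of halting without changing this steady-state visibility rule.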
These experiments found that objects that disappear without an abrupt offset are more difficult to track, indicating that object locations are not encoded and used to continue tracking after a gap in visibility. This suggests that the tracking mechanism does not encode location information unless cued by abrupt changes in the visual scene.