2006 Annual Meeting of the Vision Sciences Society

 
 
Are items encoded into VSTM when they are selected for tracking in MOT?
Explorations with simultaneous and sequential cue presentations
 

Carlos Montemayor and Zenon W. Pylyshyn, Rutgers Center for Cognitive Science

 

Hung, Wilder, Curry & Julesz (1995) used sequential and simultaneous presentations to distinguish the encoding and storage limitations of visual short-term memory (VSTM). They found that simultaneous presentation led to better recall than sequential presentation, and that with sequential presentation a slower rate (SOA of 50 ms) led to better recall than faster rates (SOAs of 33 ms and 17 ms). They argued that information is encoded differently from sequential and simultaneous visual presentations and that with sequential presentation sufficient time is required to encode each incoming item. Here we apply the Hung et al. methodology to test whether the process of selection, as opposed to the process of encoding, follows the same pattern. We used Multiple Object Tracking (MOT) as a measure of whether visually cued items had been selected under various conditions of temporal presentation. In contrast to the Hung et al. findings with item recall, we found that items were selected better when they were sequentially cued (leading to better MOT performance) than when they were simultaneously cued, while there was little difference across presentation rates within the SOA range used by Hung et al. This finding further supports our earlier claim that selection and tracking may not involve memory, because items’ properties are not encoded into VSTM in the course of selecting and tracking objects in MOT (Scholl, Pylyshyn & Franconeri, ARVO 1999).


Spatiotemporal cues for tracking objects through occlusion

Steven L. Franconeri1, Zenon W. Pylyshyn2, Brian J. Scholl3
1University of British Columbia, 2Rutgers University, 3Yale University

 

As we move about the world, and objects in the world move relative to us, objects constantly move in and out of view as they are occluded by other objects. How does the visual system maintain attention on objects of interest, given such disruptions? To explore the spatiotemporal cues used to link the pre- and post-occlusion views of objects, we asked observers to track a set of moving objects that frequently passed behind static vertical occluders, as we manipulated each object’s exit position.

Experiment 1 tested whether linking the two views relies on memory for the object’s location. When objects exited occluders higher or lower than expected, tracking performance dropped, suggesting that linking the two object views relies on a location ‘marker’ at the site of disappearance. In Experiment 2, performance was better when objects exited closer to the initial entry location rather than their expected extrapolated location, suggesting that the marker is not placed at the extrapolated position. In Experiment 3, tracking performance improved when objects reappeared from occluder centers rather than at their edges, again suggesting that the marker is placed close to the initial point of occlusion.

Together, these results suggest that when an object is occluded, the occlusion location is the critical factor in linking its pre- and post-occlusion views; neither the extrapolated exit point nor even rudimentary elements of scene structure, such as the edges of the occluder, play this role. This simple trick could underlie much of our perception of persisting objecthood when an object disappears from view.


Implicit Multiple Object Tracking without an explicit tracking task

 

Harry H. Haladjian and Zenon W. Pylyshyn, Rutgers Center for Cognitive Science

 

We have previously used a probe-detection task with Multiple Object Tracking (MOT) and shown that probes are detected better on targets than on nontargets (VSS 2005) – a result we interpreted as showing that nontargets are inhibited. Here we ask whether the difference in probe-detection performance between targets and nontargets is due to the explicit tracking task, which required observers to keep track of targets and report them at the end of a trial, or whether flashed targets are primed and implicitly tracked merely because they were cued by being flashed – in other words, whether a target–nontarget difference in probe detection might be observed when no tracking is explicitly required. We used a modified version of MOT in which observers were instructed merely to monitor for a probe while some objects moved in their field of view and were occasionally flashed “to distract” them. In this task no tracking of the flashed items was required. We found better probe detection on flashed than on nonflashed items, suggesting that flashed items are implicitly tracked even when there is no explicit requirement to track and report the cued items.



"Attentional high-beams" in tracking through occlusion

Jonathan I. Flombaum (Yale), Brian J. Scholl (Yale), & Zenon W. Pylyshyn (Rutgers)

The visual system employs specific heuristics for keeping track of objects through frequent periods of occlusion. A considerable amount of research has uncovered several such heuristics, but very little work has explored the on-line mechanisms that implement and support them. We explored how attention is distributed throughout a display when featurally identical objects become momentarily occluded during MOT.

Observers tracked three targets among three distractors as they moved haphazardly during 10-second trials. All objects periodically became occluded when they passed behind two visible static ‘walls’. During tracking, observers also had to detect small probes that appeared sporadically on targets, distractors, occluders, or empty space.

Though occlusion did not impair MOT, probe-detection rates for these categories confirmed the earlier finding that detection on nontargets is worse than on targets or in empty space, and also revealed two novel effects. First, probe detection on an occluder’s surface was much greater when a target or distractor was currently occluded at that location than when no object was behind that occluder. Thus object-based attention can remain active in a display even when the attended object is not visible. Second, and more surprisingly, probe detection was always better when objects were occluded (vs. unoccluded), for both targets and distractors. This ‘attentional high-beams’ effect indicates that the apparently effortless ability to track through occlusion actually requires the active allocation of additional resources, and the current experiments demonstrate a new way in which such effects can be discovered and quantified.