VSS 2011 - Vision Sciences Society 2011 Conference Abstracts

Aks, Alley, Rathakrishnan, Kourtev, Haladjian, & Pylyshyn (2011). When vision loses its "grip" on tracked objects: Lessons from studying gaze-to-item dynamics. Vision Sciences Society 2011, Naples, FL.

We use a unique gaze-to-item analysis to study when vision "loses its grip" on tracked objects. Important insights can be gained by examining spontaneous tracking failures and those that occur during uninterrupted versus interrupted tracking (e.g., when we blink or when objects overlap). We generate an explicit trace of the eye-movement path and of each of the eight item positions recorded over the course of each of 138 five-second multiple object tracking (MOT) trials. Temporal profiles of scan- and item-paths help identify sources of tracking failures that are obscured by the aggregated accuracy measures typically recorded at the end of each trial. We show tracking failures arising from object crowding and the subsequent gaze-switching from targets to non-target items. We also show that spontaneous switching across tracked objects is common and does not impair tracking accuracy (see Elfanagely et al., VSS 2011). Finally, we show that when object tracking is disrupted briefly (<1 sec), gaze continues to remain close to the items tracked just prior to their disappearance (see Alley et al., VSS 2011). Because we have tested conditions in which gaze and attention are correlated, scan-path patterns are readily understood in terms of gaze and attentional indexing as two systems coordinating to track objects effectively.
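The core of the gaze-to-item analysis can be sketched as follows: for each eye-tracker sample, compute the distance from the gaze position to every item and record which item the gaze is currently closest to; a change in the nearest item marks a gaze switch. This is a minimal illustrative sketch, not the authors' actual pipeline; all function names are hypothetical.

```python
# Illustrative sketch of a "gaze-to-item" analysis (names are hypothetical).
import math

def gaze_to_item_distances(gaze, items):
    """gaze: (x, y); items: list of (x, y) item positions for one frame."""
    return [math.hypot(gaze[0] - ix, gaze[1] - iy) for ix, iy in items]

def nearest_item(gaze, items):
    """Index of the item closest to the current gaze sample."""
    d = gaze_to_item_distances(gaze, items)
    return d.index(min(d))

def gaze_switches(gaze_samples, item_frames):
    """Count frames on which the nearest item changes (a 'gaze switch')."""
    nearest = [nearest_item(g, f) for g, f in zip(gaze_samples, item_frames)]
    return sum(1 for a, b in zip(nearest, nearest[1:]) if a != b)
```

Running `nearest_item` per frame yields the temporal profile described above; counting changes in that profile quantifies switching between tracked and non-tracked items.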

Alley, Rathakrishnan, Harman, Kourtev, Kugal, Haladjian, Aks, & Pylyshyn (2011). Tracking objects and tracking our eyes during disrupted viewing. Vision Sciences Society 2011, Naples, FL.

We are studying how people track objects and how eye-movements and attention contribute to this ability. We extend the research of Keane and Pylyshyn (2006) and Aks, Pylyshyn, Haladjian, et al. (2010) on multiple object tracking (MOT) during disrupted viewing to learn whether the visual system encodes the positions of tracked objects. Observers blinked their eyes when a brief tone was presented midway into each trial in which they were tracking 4 of 8 identical items. Eye-blinks triggered item disappearance and the onset of a mask that blocked the display of items (for up to 1 second). During their disappearance, objects either continued moving or halted until their reappearance. Better tracking occurred when items halted (or were displaced further back along their quasi-random motion trajectory), suggesting that the visual system refers back to past position samples to estimate where tracked items are likely to reappear. In the current study, we explore the role of eye-movements in MOT. Our gaze-to-item analysis, described in Aks et al. (VSS 2011), shows parallels between eye-movements and MOT performance. Gaze tends to remain near targets that were tracked just before the blink when objects disappeared. This gaze-to-item linkage was reliable across "halt" trials, highly idiosyncratic on "move" trials, and intermittent during the uninterrupted part of the tracking task. Switching gaze across targets, which accounts for the intermittency, was surprisingly common and often spontaneous (see Elfanagely et al., VSS 2011). These results suggest that different eye-movement strategies can be used to maintain mental links to tracked objects.

Elfanagely, Haladjian, Aks, Kourtev, & Pylyshyn (2011). Eye-movement dynamics of object-tracking. Vision Sciences Society 2011, Naples, FL.

Tracking requires maintaining a link to individual objects as they move around. There is no need to maintain a record of object position over time; all that is needed is a connection, or index, to target items as they move (Pylyshyn, 2004). Yet how well we maintain these links is undoubtedly reflected in tracking behaviors. Both the time course and the pattern of eye-scanning used in multiple object tracking (MOT) may help us understand how humans track objects. By analyzing MOT dynamics, we explore why better tracking occurs when objects halt during their disappearance (Keane & Pylyshyn, 2006), and how the visual system maintains a memory of prior object positions. We use the MOT task described in Alley et al. (2011) and a "gaze-to-item" analysis measuring the relative distance between eye-positions and each of 8 changing item positions (4 of which are tracked targets). We also use Recurrence Quantification Analysis (RQA) to determine whether recurring eye-movement patterns play a role (Webber & Zbilut, 1994). How smooth and repetitive are gaze paths? Fehd & Seiffert (2008) report that gaze follows the center of a group of targets, and that this "centroid" strategy reflects tracking a global object formed by grouping. This predicts that such a center-looking strategy should be smooth, since the centroid moves with the average instantaneous position of the independently moving objects. However, among the gaze-dynamic patterns we found, one surprising result is the pervasiveness of switching gaze across items. Such frequent switching occurs both spontaneously and under crowding conditions, and is consistent with the alternative indexing account in which individuated objects are tracked separately. By focusing only on aggregated positions, we may be masking important dynamics. Perhaps most significant are recursive scan paths, of which switching behavior is a critical component. This may reflect iterative coding of sequences of prior object positions.
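The RQA measure referenced above (Webber & Zbilut, 1994) can be illustrated with a minimal sketch: two gaze samples "recur" when they fall within a distance threshold of one another, and the recurrence rate is the fraction of recurrent sample pairs. This is a simplified, assumed implementation for illustration only; the threshold value and function names are hypothetical.

```python
# Minimal sketch of RQA recurrence rate for a 2-D gaze time series
# (simplified; threshold and names are illustrative).
import math

def recurrence_matrix(points, threshold):
    """Binary matrix: R[i][j] = 1 if samples i and j lie within threshold."""
    n = len(points)
    return [[1 if math.dist(points[i], points[j]) <= threshold else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(points, threshold):
    """Fraction of recurrent sample pairs, excluding the trivial diagonal."""
    n = len(points)
    R = recurrence_matrix(points, threshold)
    off_diag = sum(R[i][j] for i in range(n) for j in range(n) if i != j)
    return off_diag / (n * (n - 1))
```

A smooth centroid-following scan path would yield diagonal structure in the recurrence matrix, whereas repeated switching back to previously fixated items would show up as off-diagonal recurrent clusters.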

Haladjian, Griffith, & Pylyshyn (2011). The attentional blink impairs localization but not enumeration performance in an "enumerating-by-pointing" task. Vision Sciences Society 2011, Naples, FL.

Earlier we reported (Haladjian & Pylyshyn, 2010) that observers are able to rapidly and accurately enumerate up to six items when using an "enumerating-by-pointing" method (compared with the typical subitizing limit of four). We have been exploring possible reasons for this increase. The present study examines the role of increased encoding time (without increasing actual viewing time) by testing whether two presentations of the stimulus separated by a variable interval improve enumeration performance. Additionally, this allowed us to test whether the second presentation of the stimulus was sensitive to the attentional blink. Participants were shown masked displays that contained 2-9 randomly-placed black discs (~1° diameter) on a gray background. The stimulus was presented once for 100 ms, or twice for 50 ms each with a delay of 200, 400, or 600 ms (ISI) between the mask offset and the second presentation onset. Participants then marked the locations of each disc using a computer mouse.

Trials with two separate 50-ms presentations showed better enumeration performance than trials with a single 100-ms presentation for numerosities >4; the delay conditions did not significantly differ from each other (except in 5-item displays). For localization performance, two-presentation trials produced more accurate responses than single-presentation trials for numerosities <7. Here, location accuracy was significantly better in the 600-ms delay condition for displays with 5-8 items. This suggests an additive benefit of presenting the second display outside of the attentional blink on trials where observers needed to enumerate >4 items. These results (that the attentional blink affects localization more than enumeration) suggest that attention is more critical for encoding location information than for enumerating small sets. They also point to the possibility that the increased coding time associated with mouse pointing (when marking object locations) plays some role in the increased subitizing limit.

Harman, Haladjian, & Pylyshyn (2011). Eye movements during an enumerating-by-pointing task enhance spatial compression. Vision Sciences Society 2011, Naples, FL.

Observers can accurately enumerate and localize sets containing up to six randomly-placed dots when using an "enumerating-by-pointing" method (Haladjian & Pylyshyn, 2010). Analyses of localization errors suggest a form of compression, where location responses are closer to the centroid of the set of dots than to their actual locations on the stimulus screen. We address the following questions in the current study: Is this compression stronger around the centroid of the dots or around the point of central fixation? Is the frequency of fixations correlated with response accuracy? Is compression of pointing responses linked to eye-movements? We used an EyeLink 1000 eye-tracker to examine the role of eye-movements in our enumerating-by-pointing task. Participants were shown a display with 1-10 randomly-placed black dots (~1° diameter). This gaze-contingent display appeared immediately after participants fixated the center of the screen for one second. After a full-screen mask, participants used a mouse to place markers on a blank screen indicating the perceived locations of the dots. Analyses were performed on enumeration accuracy and localization errors (distance between dots and the nearest response marker). Results show strong compression around the centroid of the dots and some compression around fixations (i.e., localization errors are smaller and less variable around the centroid). Stronger compression (in 2/3 of cases) required at least one fixation to the centroid. More fixations, as well as more dots, also strengthened centroid compression. Increased fixation frequency, however, did not improve localization or enumeration performance. Overall, these results suggest that compression is centered on the centroid of a set of stimuli, and that eye-movements play a role in the perceived shrinkage of the display configuration but not in judgments associated with counting.
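The compression measure described above can be sketched with a simple ratio: if responses are compressed toward the centroid, marked locations should lie closer to the dot centroid than the dots themselves do. This is an assumed, minimal formulation for illustration; the authors' actual error analysis is not specified here, and the function names are hypothetical.

```python
# Illustrative centroid-compression sketch (names are hypothetical).
import math

def centroid(points):
    """Mean (x, y) of a set of dot positions."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def compression_ratio(dots, responses):
    """Mean response-to-centroid distance divided by mean dot-to-centroid
    distance; values < 1 indicate compression toward the centroid."""
    c = centroid(dots)
    dot_d = sum(math.dist(d, c) for d in dots) / len(dots)
    resp_d = sum(math.dist(r, c) for r in responses) / len(responses)
    return resp_d / dot_d
```

Substituting the fixation point for the dot centroid in `compression_ratio` gives the corresponding fixation-centered measure, allowing the two candidate centers of compression to be compared directly.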