Video footage from RuCCS Colloquium Talks can be found on the RuCCS YouTube Channel. For all other events, please check the sponsor's website for more details.

List of Past Events

Shades of gray in high-dynamic range images

Sarah Allred

Monday, November 28, 2011, 12:00pm - 07:00pm

Rutgers University-Camden, Department of Psychology

The perceived lightness (shade of gray) of an object depends on the scene in which it is viewed. Part of this dependence is simple to understand. The illumination impinging on an object can change (as when a white piece of paper is placed in shadow), and this in turn changes the intensity of light reflected from the object to an observer. But even when the illumination is held constant, scene context can still affect perceived lightness; for example, consider the classic case of simultaneous contrast. Ultimately, a complete theory of lightness would predict perceived lightness from a specification of the entire retinal image of any scene. However, despite a long tradition of lightness research, we are still far from this goal for even moderately complex scenes: this failure reflects both the complexity of the computational challenge involved in resolving the sensory ambiguity and the lack of consensus about the key empirical phenomena that such a theory should predict. In my talk, I will describe a set of experiments that serve as a step towards this broad theory.
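
To see the ambiguity concretely: the luminance reaching the eye from a matte surface is roughly the product of the surface's reflectance and the illumination falling on it, so a single luminance value is consistent with many reflectance/illumination pairs. The short sketch below is illustrative only, with made-up numbers; it is not taken from the talk.

# Illustrative only: luminance ~ reflectance * illumination, so one
# luminance value can arise from many different surfaces.
def luminance(reflectance, illumination):
    """Light reaching the eye from a matte surface (arbitrary units)."""
    return reflectance * illumination

# A white paper (reflectance 0.90) under dim light and a dark paper
# (reflectance 0.09) under light ten times brighter send the same
# signal to the eye.
print(luminance(0.90, 10.0))   # 9.0
print(luminance(0.09, 100.0))  # 9.0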

We measured the perceived lightness of targets of varying luminance embedded in a series of grayscale checkerboards that incorporated the high luminance range (~10,000:1) pervasive in natural images. These empirical observations allow us to reject several models of visual adaptation that have traditionally been invoked to describe the effect of scene context on perceived lightness in less complex scenes with a lower luminance range. Finally, we developed a Bayesian model that provides a quantitative account of the data and a qualitative prediction of several visual illusions.
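
As a rough sketch of how a Bayesian account of this general kind can operate (a minimal illustration under assumed priors and noise values, not the model developed in the work above), one can combine a prior over illumination with the likelihood of the observed luminance and read off the most probable reflectance; shifting the illumination prior shifts the inferred shade of gray, qualitatively mimicking context effects.

import math

# Minimal illustrative sketch (not the authors' model): infer surface
# reflectance from one observed luminance by marginalizing over an
# assumed prior on the illumination level.
REFLECTANCES = [r / 100 for r in range(1, 100)]   # candidate shades of gray
ILLUMINATIONS = list(range(1, 201))               # candidate light levels

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def reflectance_estimate(observed_luminance, illum_mean, illum_sd=30.0, noise_sd=1.0):
    """Return the reflectance with the highest (unnormalized) posterior."""
    best_r, best_p = None, -1.0
    for r in REFLECTANCES:
        p = sum(gaussian(e, illum_mean, illum_sd) *
                gaussian(observed_luminance, r * e, noise_sd)
                for e in ILLUMINATIONS)
        if p > best_p:
            best_r, best_p = r, p
    return best_r

# The same luminance is judged lighter (higher inferred reflectance)
# when the prior favors dim illumination.
print(reflectance_estimate(9.0, illum_mean=20.0))    # lighter estimate
print(reflectance_estimate(9.0, illum_mean=150.0))   # darker estimate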
