Department of Psychology, Durham University
Institute of Behavioural Sciences, University of Helsinki

Humans perceive the colors of objects in a relatively stable manner despite widely varying viewing conditions – an ability called color constancy. According to a common suggestion stemming from Bayesian theories of vision, the visual system uses constraints from prior knowledge to infer object properties from noisy sensory inputs.
However, very little is known about how this knowledge is learned from the visual input, or how it is used in visual estimation tasks. I will discuss how knowledge of object color is learned from visual input, how human observers use it in noisy color estimation tasks, and how it interacts with well-known color context effects. I will present a quantitative framework in which learning, memory, and perception can be considered jointly, reflecting the structure of natural tasks, and I will argue that to understand color perception in the real world, all of these processes need to be considered together.
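As a toy illustration of the Bayesian idea mentioned above (my own sketch, not the speaker's model): if both the learned color prior and the sensory measurement are modeled as Gaussians, the posterior estimate is a precision-weighted average, so noisier measurements are pulled more strongly toward the prior. The function name and all numbers below are hypothetical.

```python
# Minimal sketch of Bayesian cue combination for a noisy color estimate.
# Assumes a Gaussian prior over a remembered object hue and a Gaussian
# likelihood for the sensory measurement; all values are made up.

def posterior_estimate(prior_mean, prior_var, measurement, measurement_var):
    """Posterior mean and variance for a Gaussian prior x Gaussian likelihood."""
    w = (1 / prior_var) / (1 / prior_var + 1 / measurement_var)  # prior weight
    mean = w * prior_mean + (1 - w) * measurement
    var = 1 / (1 / prior_var + 1 / measurement_var)
    return mean, var

# Remembered hue of a familiar object ~ 60 (arbitrary units).
# Low sensory noise: the estimate stays close to the measurement.
m_low, _ = posterior_estimate(60.0, 25.0, 80.0, 5.0)
# High sensory noise: the estimate is pulled toward the prior.
m_high, _ = posterior_estimate(60.0, 25.0, 80.0, 100.0)
assert abs(m_low - 60.0) > abs(m_high - 60.0)
```

The same precision-weighting structure underlies many Bayesian accounts of perceptual estimation; here it simply shows why prior knowledge matters most when sensory input is unreliable.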