
Organizers
This event is supported by York University’s Vision: Science to Applications (VISTA) program and by Johns Hopkins University's William H. Miller III Department of Philosophy and Vision Sciences Group.




About Us
The past decade has seen a resurgence in conversation between vision science and philosophy of perception on questions of fundamental interest to both fields, such as: What do we see? What is seeing for? What makes seeing different from remembering, deciding or imagining? But opportunities for conversation between vision scientists and philosophers are still hard to come by. The phiVis workshop is a forum for promoting and expanding this interdisciplinary dialogue. Philosophers of perception can capitalize on the experimental knowledge of working vision scientists, while vision scientists can take advantage of the opportunity to connect their research to long-standing philosophical questions.
Short talks by philosophers of perception that engage with the latest research in vision science will be followed by discussion with a slate of vision scientists, on topics such as probabilistic representation in perception, perceptual constancy, amodal completion, multisensory perception, visual adaptation, and much more.
Schedule
Chairs:
Kevin Lande (York)
Chaz Firestone (Johns Hopkins)
1:15PM
Opening remarks
- Kevin Lande (York University)
- Chaz Firestone (Johns Hopkins)
1:20PM
Gabriel Greenberg (University of California, Los Angeles) | Neural Images: Retinotopy, Representation, and Convolution
What is the significance of retinotopy for information processing? In particular, does retinotopy imply the use of picture-like representations in the brain? Many have been skeptical of this idea: just because retinotopic areas have a picture-like appearance to an external observer does not mean they are actually used as pictures in neural computation. But in this talk, I’ll highlight evidence in favor of the pictorial hypothesis, drawing on the idea that ventral stream processing can be modeled as a deep convolutional neural network. I’ll argue that if early areas of visual processing perform convolutional computations, then early visual representations are indeed genuinely picture-like. To defend this claim, I'll first make the case that convolutional algorithms in early vision make essential use of a 2D functional space in the cortex, analogous to a picture plane. Then I’ll argue that convolution is a sound inferential strategy only if the 2D functional space of the cortex maintains a picture-like relationship of geometric projection to 3D visual space. These conclusions suggest that retinotopy reflects a distinctively image-based mode of storing and processing information.
- Comments from Jennifer Groh (Duke University)
- Q&A
1:55PM
Jacob Beck (York University) | Resurrecting a Primary–Secondary Quality Distinction
In the 17th century, thinkers such as Galileo, Boyle, and Locke distinguished primary qualities, such as size, shape, number, and motion, from secondary qualities, such as color, sound, odor, and taste. This distinction was endorsed by Newton and played a key role in the scientific revolution. Even so, it was immediately mocked by Berkeley and has been recurrently critiqued ever since. Where does that leave us? Are there grounds to accept a primary–secondary quality distinction today? I’ll argue that there are. Although the distinction I’ll develop is anachronistic—it is grounded in contemporary perception science and is not intended as a serious interpretation of early modern texts—it sorts qualities in roughly the same way as early modern accounts. The upshot is that size, shape, number, and motion really are unlike color, sound, odor, and taste, but for different reasons than Galileo and company thought.
- Comments from Darko Odic (University of British Columbia)
- Q&A
2:30PM
Kathleen Akins (Simon Fraser University) | Colour Perception is High-Level Perception
Traditionally, colour has been classed as one of vision’s earliest and most basic tasks, if not the most basic. Colour processing contrasts with complex visual processes such as cortical facial recognition, which are thought to sit at the peak of the visual hierarchy. This talk challenges that tradition, placing colour perception somewhere near the peak of the visual hierarchy rather than at the base. This view better accounts for some curious features of the development of colour vision. Although trichromatic wavelength discrimination develops early, colour vision in the colloquial sense—seeing surfaces, volumes, and lights as coloured—takes just over two decades to reach maturity. Small children seem to find colour, as a property of the world, difficult to grasp. Even though most (Western) toddlers can list a wide range of colour terms by 2–3 years of age, another year will go by before they use a single colour term correctly. It will be yet another year, after the age of 3, before pre-schoolers can apply the 10 most basic colour terms correctly. At about this age, children will begin to pass a simplified version of the Ishihara pseudoisochromatic test for colour vision. But they will not be able to complete the Lanthony D-15, a simplified colour seriation test for children, until they are 5 years old. And even then, children’s scores will increase markedly with age. At least prima facie, it seems that colour vision matures with age, and not just in terms of increasing sensitivity to colour’s dimensions. None of this would be surprising if colour perception were understood as a complex visual process, at the peak of the visual hierarchy rather than at the base.
- Comments from Rosa Lafer-Sousa (University of Wisconsin–Madison)
- Q&A
RSVP
- phiVis 5: Philosophy of Vision Science Workshop | Tue, May 20 | Banyan/Citrus Room, Tradewinds
- phiVis 5 ONLINE | Tue, May 20 | Zoom
MEDIA
Recordings of Past Events


phiVis 4: Philosophy of Vision Science Workshop 2024

phiVis 3: Philosophy of Vision Science 2023

Sneak Peek: Wayne Wu, "We Know What Attention Is!"
