Research in the Goodale Lab

Latest Research


The Neural Substrates of Human Echolocation

Everybody has heard about echolocation in bats and dolphins. These creatures emit bursts of sounds and listen to the echoes that bounce back to detect objects in their environment. What is less well known is that people can echolocate too. In fact, there are blind people who have learned to make clicks with their mouths and to use the returning echoes from those clicks to sense their surroundings. Some of these people are so adept at echolocation that they can use this skill to go mountain biking, play basketball, or navigate unknown environments.

Working together with two of my former postdocs, Lore Thaler (Durham) and Steve Arnott (Toronto), I was able to show that blind echolocation experts use what is normally the ‘visual’ part of their brain to process the clicks and echoes. Our study was the first to investigate the neural basis of natural human echolocation. We first made recordings of the clicks and their very faint echoes using tiny microphones in the ears of the blind echolocators as they stood outside and tried to identify different objects such as a car, a flag pole, and a tree. We then played the recorded sounds back to the echolocators while their brain activity was being measured in our 3 T fMRI brain scanner. Remarkably, when the echolocation recordings were played back to the blind experts, not only did they perceive the objects based on the echoes, but they also showed activity in those areas of their brain that normally process visual information in sighted people. Most interestingly, the brain areas that process auditory information were no more activated by sound recordings of outdoor scenes containing echoes than they were by sound recordings of outdoor scenes with the echoes removed. Only ‘visual’ cortex showed differential activation to the faint echoes. Importantly, when the same experiment was carried out with sighted control participants who did not echolocate, these individuals could not perceive the objects, nor did their brains show any echo-related activity.

Our data clearly show that echolocation can be used in a way that seems uncannily similar to vision. Our findings also show that echolocation can provide blind people with a high degree of independence and self-reliance in their daily lives.

For more details, see:

Thaler, L., Arnott, S.R., & Goodale, M.A. (2011). Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS ONE, 6: e20162.
Download pdf


Highlights from Past Research


Two Visual Systems

In a series of theoretical articles, my colleague, David Milner, and I have proposed that separate but interacting visual systems have evolved for the perception of objects on the one hand and the control of actions directed at those objects on the other. This 'duplex' account of high-level vision suggests that 'reconstructive' approaches and 'purposive-animate-behaviorist' approaches need not be seen as mutually exclusive, but as complementary in their emphases on different aspects of visual function. Evidence from both humans and monkeys has shown that this distinction between vision for perception and vision for action is reflected in the organization of the visual pathways in primate cerebral cortex. Two broad "streams" of projections from primary visual cortex have been identified: a ventral stream projecting to the inferotemporal cortex and a dorsal stream projecting to the posterior parietal cortex. Both streams process information about the structure of objects and about their spatial locations – and both are subject to the modulatory influences of attention. Each stream, however, uses this visual information in different ways. The ventral stream transforms the visual information into perceptual representations that embody the enduring characteristics of objects and their relations. Such representations enable us to identify objects, to attach meaning and significance to them, and to establish their causal relations – operations that are essential for accumulating knowledge about the world. In contrast, the transformations carried out by the dorsal stream deal with moment-to-moment information about the location and disposition of objects with respect to the effector being used and thereby mediate the visual control of skilled actions directed at those objects. Both streams work together in the production of adaptive behavior. The selection of appropriate goal objects and the action to be performed depends on the perceptual machinery of the ventral stream, but the execution of a goal-directed action is carried out by dedicated on-line control systems in the dorsal stream.

For more details, see:

Milner, A. D. & Goodale, M.A. (1995). The Visual Brain in Action. Oxford: Oxford University Press, 248 pp. (paperback 1996).

Goodale, M.A. & Milner, A.D. (2004). Sight Unseen: An Exploration of Conscious and Unconscious Vision. Oxford: Oxford University Press, 140 pp.

A precis of The Visual Brain in Action is available in Psyche, a web-based journal.

See also: Goodale, M.A. & Humphrey, G.K. (1998). The objects of action and perception. Cognition, 67, 179-205.     Download pdf



Visuomotor Psychophysics

Human beings are capable of reaching toward and grasping objects in space with great precision, and vision plays an indispensable role in the control of this skilled behaviour. Not only does our arm extend toward the object, but our hand adjusts its shape in anticipation of the grasp itself. The long-term goal of research in my laboratory is to find out how the brain controls this complex behaviour and other visually guided movements of the limbs and body. With the use of a special recording system, we can reconstruct the movements of an individual's fingers, hand, arm, and eyes as he or she reaches out to pick up objects placed at different distances from the body. We then use a computer to analyze the form and timing of the movements. By comparing the performance of normal individuals and patients with damage to particular regions of the brain, we are gaining important insights into how information from the visual system and other sensory systems is used to control this important human behaviour. It is hoped that this research will help to answer one of the central questions in behavioural neuroscience -- how sensory inputs are transformed into useful motor acts.
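To make this concrete, here is a minimal sketch (in Python) of the kind of summary measures that can be extracted from such recordings, assuming 3D marker positions for the thumb, index finger, and wrist sampled at a fixed rate. The function name, sampling rate, and velocity threshold are illustrative; this is not our actual analysis pipeline.

```python
import numpy as np

def reach_kinematics(thumb, index, wrist, fs=200.0, onset_thresh=0.05):
    """Summarize one reach-to-grasp trial from 3D marker positions.

    thumb, index, wrist : (n_samples, 3) arrays of marker positions in metres
    fs                  : sampling rate in Hz (illustrative value)
    onset_thresh        : wrist speed (m/s) used to mark movement onset and offset
    """
    # Wrist speed from frame-to-frame displacement
    speed = np.linalg.norm(np.diff(wrist, axis=0), axis=1) * fs

    # Movement onset and offset: first and last samples above the speed threshold
    moving = np.where(speed > onset_thresh)[0]
    onset, offset = moving[0], moving[-1]

    # Grip aperture: distance between the thumb and index-finger markers on each frame
    aperture = np.linalg.norm(thumb - index, axis=1)

    return {
        "movement_time_s": (offset - onset) / fs,
        "peak_velocity_m_s": speed[onset:offset].max(),
        "max_grip_aperture_m": aperture[onset:offset].max(),
        "time_to_max_aperture_s": np.argmax(aperture[onset:offset]) / fs,
    }
```

Summary measures like these (movement time, peak velocity, maximum grip aperture and its timing) are the sort of variables compared across viewing conditions and across patient and control groups.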

My former postdoc, James Danckert, who now holds a Canada Research Chair at the University of Waterloo, worked with me on a project in which we showed that reaching movements made to targets in the lower visual field are more efficient than those made to targets in the upper visual field. This result is consistent with electrophysiological and anatomical studies showing a lower visual field bias in the central visual pathways - particularly in those pathways implicated in the control of skilled movements of the arm and hand.

More recently, David Westwood, another former postdoc in my lab (now a faculty member at Dalhousie University), showed that dedicated, real-time visuomotor mechanisms are engaged for the control of action only after a grasping movement is cued, and only if the target is visible. If the target object is not visible when the movement is cued, then grasping is driven, not by the dedicated visuomotor systems in the dorsal stream, but by information provided by the ventral perception stream.

For more information about research in my lab on visuomotor psychophysics see:

Danckert, J., & Goodale, M.A. (2001). Superior performance for visually guided pointing in the lower visual field. Experimental Brain Research, 137, 303-308.
Download pdf

Westwood, D.A., & Goodale, M.A. (2003). Perceptual illusion and the real-time control of action. Spatial Vision, 16, 243-254.



Grasping Illusions

The distinction that David Milner and I proposed between vision for perception and vision for action predicts that visually guided movements should be largely immune to the perceptually compelling changes in size produced by pictorial illusions. Salvatore Aglioti, Joe DeSouza, and I demonstrated that this was the case using a 3D version of the Ebbinghaus illusion. We showed that despite the large effect the illusion had on perceptual judgements of size, there were only small effects of the illusion on grasp scaling. Nevertheless, some have argued that the small effect on grasp implies that there is a single representation of size for both perception and action. We have shown that 2D pictorial elements, such as those comprising illusory backgrounds, can sometimes be treated as obstacles and can thereby influence the programming of the grasp. The arrangement of the 2D elements commonly used in previous studies examining the Ebbinghaus illusion could therefore give rise to an effect on grasp scaling that is independent of its effect on perceptual judgements, even though the two effects are in the same direction. We have recently demonstrated that when the gap between the target and the illusion-inducing elements in the Ebbinghaus illusion is kept the same across different perceptual conditions, the apparent effect of the illusion on grasp scaling is eliminated.
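To illustrate what is being compared when we speak of the illusion's effect on perceptual judgements versus its effect on grasp scaling, here is a minimal sketch with entirely made-up numbers; the function and the values are illustrative only.

```python
import numpy as np

def illusion_effect(small_annulus, large_annulus):
    """Mean difference between the two annulus conditions for physically identical targets.

    Each argument is an array of per-trial measurements in mm: either manual size
    estimates (perception) or maximum grip apertures (action).
    """
    return np.mean(small_annulus) - np.mean(large_annulus)

# Made-up example: a sizeable perceptual effect alongside a negligible grasp effect
perception = illusion_effect(np.array([32.0, 33.5, 31.8]), np.array([29.0, 28.4, 29.6]))
grasp = illusion_effect(np.array([71.2, 70.8, 71.5]), np.array([70.9, 71.1, 70.6]))
print(f"perceptual effect: {perception:.1f} mm, grasp effect: {grasp:.1f} mm")
```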

Recent experiments by David Westwood and me have shown that grasping is also insensitive to size-contrast illusions that are present in haptics (active touch).


For more information about grasping illusions see:

Aglioti, S., DeSouza, J., & Goodale, M.A. (1995). Size-contrast illusions deceive the eyes but not the hand. Current Biology, 5, 679-685.

Haffenden, A.M., & Goodale, M.A. (2000). Independent effects of pictorial displays on perception and action. Vision Research, 40, 1597-1607.

Haffenden, A.M., Schiff, K.C., & Goodale, M.A. (2001). The dissociation between perception and action in the Ebbinghaus illusion: Nonillusory effects of pictorial cues on grasp. Current Biology, 11, 177-181.     Download pdf

Westwood, D.A., & Goodale, M.A. (2003). A haptic size-contrast illusion affects conscious size perception but not grasping. Experimental Brain Research (on-line).



Object Priming

Recognizing an object is easier if one has seen the object before -- even if one cannot recall having seen the object earlier. This facilitation of perception is called priming. In neuroimaging studies, priming is often associated with a decrease in activation in brain regions involved in object recognition. It is thought that this occurs because priming causes a sharpening of object representations which leads to more efficient processing and, consequently, a reduction in neural activity. A recent 4T-fMRI study in my laboratory (led by my former Ph.D. student, Tom James, now at Indiana University) has shown, however, that the apparent effect of priming on brain activation varies as a function of whether the neural activity is measured before or after recognition. By slowing down the process of recognition by gradually unmasking an object, Tom has been able to show that the activation peak for primed objects in the fusiform object area occurs sooner than the peak for non-primed objects. After recognition, activation declines rapidly for both primed and non-primed objects. This experiment shows that priming does not produce a general decrease in activation in the brain regions involved in object recognition but, instead, produces a shift in the time of peak activation that corresponds to the shift in time seen in the subjects' recognition performance.
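The distinction between a general decrease in activation and a shift in the time of peak activation can be illustrated with a small sketch using hypothetical BOLD time courses; the numbers below are invented purely for illustration.

```python
import numpy as np

def time_to_peak(timecourse, tr=2.0):
    """Return the time (s) and amplitude of the peak of a BOLD time course sampled every `tr` seconds."""
    i = int(np.argmax(timecourse))
    return i * tr, timecourse[i]

# Hypothetical fusiform time courses (percent signal change, one value per 2-s volume)
primed     = np.array([0.0, 0.3, 0.8, 0.6, 0.3, 0.1, 0.0])
non_primed = np.array([0.0, 0.1, 0.4, 0.8, 0.6, 0.3, 0.1])

for label, tc in [("primed", primed), ("non-primed", non_primed)]:
    t, amp = time_to_peak(tc)
    print(f"{label}: peak of {amp:.1f}% at {t:.0f} s")
# Same peak amplitude, but an earlier peak for primed objects: a timing shift, not a general decrease.
```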

More recently, we have used a priming paradigm with fMRI to show that object-related activity in the dorsal (action) stream is viewpoint-dependent whereas object-related activity in high-order regions of the ventral (perception) stream is viewpoint-independent.


For more information see:

James, T.W., Humphrey, G.K., Gati, J.S., Menon, R.S., & Goodale, M.A. (2000). The effects of visual object priming on brain activation before and after recognition. Current Biology, 10, 1017-1024.   Download pdf

James, T.W., Humphrey, G.K., Gati, J.S., Menon, R.S., & Goodale, M.A. (2002). Differential effects of viewpoint on object-driven activation in dorsal and ventral streams. Neuron, 35, 793-801.    Download pdf


Virtual Grasping

We are now using virtual reality displays, both in our own lab and in the Virtual Environment Technologies Centre (VETC) here in London, to study the perception of object dimensions and the visual control of grasping. The VR displays allow us to control the visual appearance of objects with enormous precision from trial to trial. The first virtual workbench was built in my own lab with the help of my former graduate student, Yaoping Hu. Yaoping completed her Ph.D. in Engineering and is now a faculty member in the Department of Electrical and Computer Engineering at the University of Calgary. Using the virtual workbench, Yaoping and I were able to show that, even though the judgments of object size are influenced by the presence of other objects in the visual array, the scaling of grasping movements directed at those same objects is not. This fits well with the work on grasping illusions and suggests that perception uses scene-based frames of reference and relative metrics while the visual control of grasping uses egocentric frames of reference and absolute metrics.

The reason that perception and action use different frames of reference is clear. If perception were to attempt to deliver the real metrics of all objects in the visual array, the computational load would be astronomical. The solution that perception appears to have adopted is to use world-based coordinates -- in which the real metrics of that world need not be computed. Only the relative position, orientation, size, and motion of objects are of concern to perception. Such relative frames of reference are sometimes called allocentric. The use of relative or allocentric frames of reference means that we can, for example, watch the same scene unfold on television or on a movie screen without being confused by the enormous absolute change in the coordinate frame.

As soon as we direct a motor act towards an object, an entirely different set of constraints applies. We cannot rely on the perception system's allocentric representations to control our actions. We cannot, for example, direct actions toward what we see on television, however compelling and ‘real’ the depicted scene might be. To be accurate, the actions must be tuned to the metrics of the real world and the movements we make must be computed within an egocentric frame of reference that is specific to the effector that is being employed at the time.
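A toy example, with made-up coordinates, illustrates the difference between the two frames of reference: an allocentric description captures only the relations among objects in the scene, whereas an egocentric description expresses the target's position relative to the effector and must be recomputed from current sensory input whenever the observer (or hand) moves.

```python
import numpy as np

# Made-up scene coordinates (in metres) for two objects and the observer's hand
cup = np.array([1.0, 0.5, 0.8])
saucer = np.array([1.1, 0.5, 0.8])
hand = np.array([0.3, -0.2, 0.9])

# Allocentric (scene-based, relative): the cup described relative to the saucer.
# This relation is unchanged if the observer, or the whole scene, is translated.
cup_relative_to_saucer = cup - saucer

# Egocentric (effector-based, absolute): the cup described relative to the hand.
# This vector changes every time the hand or body moves and must be kept up to date.
cup_relative_to_hand = cup - hand

print("allocentric:", cup_relative_to_saucer, "egocentric:", cup_relative_to_hand)
```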

For more information see:

Hu, Y., & Goodale, M.A. (2000). Grasping after a delay shifts size-scaling from absolute to relative metrics. Journal of Cognitive Neuroscience, 12, 856-868.
Download pdf



Active Exploration of Objects

Some years ago, Keith Humphrey and I, together with our former Ph.D. student, Karin Harman James (now at Indiana University) demonstrated that observers who actively rotated three-dimensional novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In a later experiment, we showed that compared to passive viewing, active exploration of three-dimensional object structure led to faster performance on a ‘mental rotation’ task involving the studied objects. We found that in both studies, observers spent most of their time looking at the 'side' and 'front' views of the objects, rather than the three-quarter views. This strong preference for the ‘plan’ views of an object led us to examine the possibility that restricting the studied views in active exploration to either the plan views or the canonical views would result in differential learning. We found that recognition of objects was faster after active exploration limited to plan views than after active exploration of canonical views. Taken together, these experiments demonstrate (1) that active exploration facilitates learning of the three-dimensional structure of objects, and (2) that the superior performance following active exploration may be a direct result of the opportunity to spend more time on plan views of the object.


For more information see:

Harman, K.L., Humphrey, G.K., & Goodale, M.A. (1999). Active manual control of object views facilitates visual recognition. Current Biology, 9, 1315-1318.   Download pdf

Harman, K.L., Humphrey, G.K., & Goodale, M.A. (2001). Manipulating and recognizing virtual objects: Where the action is. Canadian Journal of Experimental Psychology, 55, 111-120.   Download pdf



The Effects of Retinal Motion on Reaching

When you reach out to pick up your coffee cup, the cup is not the only thing you see. Your eye is confronted with all kinds of motion signals, some of which are caused by the movement of your eye and head as you look at the cup and some of which are caused by the movement of other objects in the scene. To accurately reach out to the cup, it has always been assumed that your visuomotor system must somehow separate information about the location of the cup from the irrelevant background motion signals on the retina. Our experiments provide evidence that, counterintuitively, the visuomotor system does not do this at all. In a recent experiment, my former postdoc, David Whitney (now at UC Berkeley), and I (together with my former postdoc, David Westwood) have shown that when observers reach to a stationary object, motion signals that are unrelated to the target (e.g., other moving objects) cause significant deviations in the hand’s trajectory. Although this seems counterproductive and contrary to our everyday experience, it might, in fact, be precisely the reason we are able to reach successfully for objects: since retinal motion is most commonly caused by eye and head movements, the visuomotor system might use these signals as a quick way of computing how our body’s position has changed with respect to the target, and how much we have to correct our hand’s trajectory to compensate for this. In other words, the fact that the visuomotor system conflates the position of the target with the background motion may be an adaptive mechanism for quickly updating visually guided actions. Such a computation could be made directly on the basis of retinal information and would require no additional input from other sensory or motor systems.
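The idea can be captured in a toy model: suppose the visuomotor system treats any background retinal motion as though it had been produced by a movement of the eye, head, or body, and shifts the ongoing reach in the direction of that motion. The function and gain value below are purely illustrative and are not estimates from our data.

```python
import numpy as np

def update_reach(planned_endpoint, background_motion, gain=0.2):
    """Nudge an in-flight reach endpoint in the direction of background retinal motion.

    planned_endpoint  : (x, y) endpoint of the current reach plan
    background_motion : (dx, dy) retinal slip of the background on this frame
    gain              : illustrative fraction of the slip applied as a correction

    If the slip really was produced by eye or head movement, the correction keeps the
    hand on target; if it was produced by an unrelated moving object, it produces the
    small trajectory deviations observed in the experiment.
    """
    return np.asarray(planned_endpoint) + gain * np.asarray(background_motion)

print(update_reach((0.40, 0.10), (0.03, 0.0)))  # endpoint nudged in the direction of motion
```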

For more information see:

Whitney, D., Westwood, D. A., & Goodale, M. A. (2003). The influence of visual motion on fast reaching movements to a stationary object. Nature, 423, 869-873.    Download pdf    or     Download pdf of supplemental material



Flexible Retinotopy

The early visual areas of the brain, such as area V1, are retinotopic. That is, they show an orderly map of the visual world. Researchers have often used this map to interpret the relationship between fMRI activation and perception. They have assumed there is a one-to-one relationship between the layout of the retinotopic map and our perception of the world. Our research team (led by my former postdoc, David Whitney) has found that this is not necessarily the case. Using a motion-dependent visual illusion, which made stationary apertures containing a moving grating appear displaced from their real positions in the direction of motion, we showed that the activation in area V1 did not correspond to this change in apparent position. That is, although subjects' perceptions changed, there was no corresponding change in the map of activity in the brain. In essence, subjects perceived the object to be in one location but their brain represented the object as being in a completely different location. In fact, activation was greatest at the trailing edge of the moving gratings used in the display (and this was true even when the moving gratings were surrounded by a hard edge and did not show an illusory shift in position). These findings have important implications for conclusions drawn from the location of fMRI activation in retinotopic visual areas and also suggest that the connection between brain activity and perceptual awareness is more complicated than was originally thought.

For more information see:

Whitney, D., Goltz, H., Thomas, C.G., Gati, J., Menon, R., & Goodale, M.A. (2003). Flexible retinotopy: Motion-dependent position coding in the visual cortex. Science, 302, 878-881.
Download pdf    or    Download pdf of supplemental material



Fast Actions are Resistant to the Hollow-face Illusion

In a recent experiment, we looked for a dissociation between conscious perception and rapid target-directed action using the large and dramatic depth reversal of the hollow face, in which a realistic hollow mask appears as a convex face. The hollow-face illusion is a knowledge-based, top-down effect, in which extensive and powerful (though implicit) knowledge of convex faces overrides the veridical hollow percept in favor of reversed depth. The hollow-face illusion is thought to arise within the ventral stream and, as a consequence, should not affect visuomotor computations in the dorsal stream. Despite the presence of a strong illusion of a protruding convex face, participants in our experiment directed rapid flicking movements of their hand to the correct position of targets affixed to the surface of the hollow mask. In other words, the visuomotor system can use bottom-up sensory inputs (e.g., vergence) to guide behavior to veridical locations of targets in the real world, even when perceived positions are influenced, or even reversed, by top-down processing.

For more information about the hollow-face illusion and other interesting illusions, you are encouraged to explore Richard Gregory's website.


For more information see:

Kroliczak, G., Heard, P., Goodale, M.A., & Gregory, R.L. (2006). Dissociation of perception and action unmasked by the hollow-face illusion. Brain Research, 1080, 9-16.   Download pdf



How we see stuff!

Virtually all studies of object recognition have focused on the geometric structure of objects. Very few have focused on the recognition of the material properties of objects from surface-based visual cues. Even when the processing of surface-based cues, such as color and texture, has been studied, it has been in the context of using these cues to reveal the geometric structure of objects. In other words, research has focused on the recognition of things rather than the stuff from which they are made. Nevertheless, knowledge about the stuff (i.e., the material properties of objects) has, by itself, profound implications for understanding what an object is. In a recent line of work, my graduate student, Jon Cant (now at Harvard), and I have been using fMRI to examine the areas in the ventral stream of visual processing that are specialized for perceiving the material properties of objects. We first showed that the processing of object form was largely localized to the lateral occipital area in the ventral stream, whereas the processing of surface properties was localized to more medial areas within the fusiform and parahippocampal regions. In later experiments, we showed that the processing of object form and the processing of material properties are largely independent of one another (that is, one could attend to the form of an object without any interference from changes in the surface properties, and vice versa).


For more information see:

Cant, J.S. & Goodale, M.A. (2007). Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cerebral Cortex, 17, 713-731.   Download pdf  

Cant, J.S. & Goodale, M.A. (2011). Scratching beneath the surface: new insights into the functional properties of the lateral occipital area and parahippocampal place area. Journal of Neuroscience, 31, 8248-8258.   Download pdf  






Last Update:   March 25, 2014