This paper is a preprint based on a published version: Cole, J., Gallagher, S., McNeill, D., Duncan, S., Furuyama, N. and McCullough, K.-E. 1998. Gestures after total deafferentation of the bodily and spatial senses. In Santi, S. et al. (eds.), Oralité et gestualité: Communication multi-modale, interaction. Paris: L'Harmattan, pp. 65-69. Please quote and cite the published version.

Gestures in a Deafferented Subject

Jonathan Cole (University of Southampton, UK),

Shaun Gallagher (Canisius College, USA)

& David McNeill (University of Chicago, USA)

 

Introduction.

An important way to look at gesture concerns the extent to which it is controlled by the cognitive/linguistic system in a way that differs from the motor control system used, for example, to pick up a glass, even though the same muscles and spinal pathways are involved. If this is the case, then the gestural movements of a deafferented subject may approach normalcy, since the linguistic/cognitive processes are unaffected by the sensory neuropathy. We describe the gestural performance of Ian Waterman (IW) who, as a young adult, suffered an infection that caused the loss of all tactile and proprioceptive feedback and spatial position sense from the neck down. All functions above the neck, including speech and cognition, were spared. The question we ask is whether gesture was spared as well.

IW relates that immediately after his neuropathy he could not move and could not gesture. However, he spent long periods of time re-learning to move, both for locomotor and daily living skills and, importantly for him, for gesture. Now, he relates, some gestures take care of themselves and are automatic. His fully automatic gestures are small movements made in a safe workspace. He has to expend cognitive effort to make larger movements, which may require him to brace the body (e.g., for an extended arm movement), or to make precise movements (e.g., using two hands or bringing a hand up to the head).

We have systematically observed IW's gestures under two conditions -- first, while narrating from memory the story of an animated color cartoon that we had just shown him, both with and without vision of his hands; second, while conversing with the experimenters, again with vision occluded.

Narration.

During the narration task, when vision of his hands was available, IW made numerous meaningful gestures well synchronized with his co-expressive speech, confirming that he had the ability to produce gestures. His gestures were essentially indistinguishable from non-neuropathic performance.

Example (note: videos of examples will be presented at the talk):

1. The bowling-ball scene

However, without vision (a blind was placed before him so as to block his view of his hands), IW did not gesture at all; his hands remained clasped in his lap.

When asked by us to make gestures as he continued his narration under the blind condition, IW was able to perform gestures similar to those seen previously, with meaningful hand movements synchronized with co-expressive speech, although the gestures were smaller and showed some slight loss of coordination between the two hands.

Indeed, speech and gesture were more closely synchronized when vision was occluded, suggesting that visual control, when he could look at his hands, may not have been as precise as control through the thought/linguistic system operating alone.

There was some standardization of IW's gestures when he could not see -- pointing (G) and open (B) hand shapes predominated, perhaps reflecting a 'repertoire' of gesture forms that IW describes drawing upon when composing gestures.

IW evidently had not lost the ability to perform gestures and integrate them with speech in the absence of vision. The spatial organization of his gestures is especially important. Under the same blind conditions, movements requiring spatial accuracy are impossible for him, but he can still use space to differentiate meanings -- e.g., to the right for one meaning, to the left for a contrasting meaning -- as is often seen in non-neuropathic gesture as well.

Space thus can be controlled by IW's linguistic/thought system even though topokinetic accuracy, which demands movement to specific external points, is degraded.

Is this a sufficient explanation of IW's ability to perform gestures? That is, is he better at gestures because they do not need to be accurately placed in relation to the external world, and not because they are controlled by a different thought/language system? This account cannot be ruled out. Nonetheless, it does not explain how IW is able to synchronize speech and co-expressive gesture without vision. As we have shown, he performs gestures with speech under occluded visual conditions. And the earlier example in which, with occluded vision, IW depicted Sylvester going down the pipe shows that, apart from synchrony and how it is explained, IW's gestures are accurate morphokinetically -- accurate in the form of the movement, as opposed to its location relative to external targets. In that example IW moved his hand downward and at the same time wriggled it at the wrist for Sylvester and the bowling ball descending the drainpipe. That is, the wriggling and the downward movement were shaped well enough that we recognize the gesture as meaning "Sylvester and the bowling ball descending the drainpipe." This morphokinetic accuracy is remarkably good in the blind condition, and it is likely that language and meaning contribute to it.

Conversation.

During our taping IW demonstrated how he performs gestures. This provides an instructive example of gesture under two control conditions -- a normal speaking style, and an 'inverted commas' style of demonstration. The conversation took place while vision was occluded; thus vision was not a controlling factor. The gestures during the conversation were all metaphoric; thus formulation of meaning was a factor. What is striking is the change in the character of the gestures as IW began to make them as part of a demonstration. The demonstration was slower and larger than IW's normal-looking gestures, and his speech was drawled to make words and gestures coincide. The series of gestures that preceded the display clearly were not deliberate performances: they were small, quick, and well synchronized with fluent, non-drawled speech. By deciding to present the meaning slowly, as a display, IW could slow down both the speech and the gesture in unison.

Also during this conversation, with vision still occluded, IW performed metaphoric gestures -- movements in space that convey non-movement, non-spatial meanings. He said, for example, "I find it very difficult ..." and, without vision, spread out both hands from a curled position as though holding and presenting a meaning in space. This is familiar as a 'conduit' metaphoric gesture. As with his other gesture performances, it was well synchronized with the semantically corresponding parts of his utterance.

The question of virtual actions.

This brings us to a hypothesis independently advanced by Jürgen Streeck (1996) and Sotaro Kita (2000), that gestures are basically virtual actions. If IW can perform gestures normally under conditions in which instrumental action would be impossible for him, namely without visual feedback, then the boundary between gesture and instrumental action would appear to widen, even though on this hypothesis gesture is itself a form of instrumental action. There is clearly a gap in the mapping of gestures onto actions if an individual who cannot carry out instrumental actions without visual control can nonetheless carry out gestures. When IW moved his hand down in the earlier example, wriggling it at the wrist for Sylvester and the bowling ball descending the drainpipe, should we say this was a virtual action, or the actualization of a thought pattern in action?

The important distinction is between (1) instrumental action (the kind of motor action that IW has problems with, including locomotion) and (2) communicative action, or action with meaning mapped onto it. The gesture-as-action hypothesis claims that gesture is a special form of instrumental action -- we might say that a gesture is a reenactment that reproduces an instrumental or motor action in a virtual space, and that this posited action stage mediates the mapping of meaning. On this view gesture is the original action once again, but in analogue form, and this time in a virtual environment; the virtual action, in turn, is the analogue of how the target is moving. IW would then be expected to have difficulty with gesture, since he has difficulty with instrumental action unless he can guide it with visual feedback. The direct mapping hypothesis, by contrast, maintains that gesture is not instrumental action but action onto which meaning is directly mapped. Gesture, on this view, is an action that helps to create the narrative space shared in the communicative situation, and as such it comes under the control of linguistic/communicative systems rather than the instrumental motor system.

Conclusions.

These findings suggest that gestures can be decoupled from visual monitoring and performed without feedback of any kind. Yet such gestures remain accurate in time and form, and can map semantic space onto concrete space. This suggests that gestural movements may be controlled by a system of cognition and language that is at least in part separate from the system used to control the same muscles in instrumental actions. A crucial matter for future work is how far an automatic gestural movement can be said to be organized differently from other automatic motor programs, such as those for walking. A further test would be to find another purely morphokinetic movement and show that it is not as well controlled as gesture; such movements, however, may be difficult to find. For the time being, our results suggest that the neural connections between gestures and the movements involved in instrumental actions are partly separable, and thus they place some limits on the hypothesis that gestures are virtual actions.

References.

Kita, S. 2000. How Representational Gestures Help Speaking. In McNeill, D. (ed.), Language and Gesture: Window into Language and Action. Cambridge: Cambridge University Press.

Streeck, J. 1996. How to do Things with Things: Objets trouvés and Symbolization. Human Studies, 19, 365-384.