Protactile Research Network
Sign-creation in the Seattle DeafBlind community
A triumphant story about the regeneration of obviousness
Terra Edwards, Saint Louis University
Gesture
Keywords: DeafBlind, protactile, Protactile American Sign Language, language and embodiment, deictic integration, sign-creation
Introduction
This article examines the social and interactional foundations of sign-creation among DeafBlind people in Seattle. Most members of the Seattle DeafBlind community are born deaf and slowly become blind. In the past, as vision changed, DeafBlind people became increasingly dependent on sighted interpreters to guide them through physical spaces and to relay utterances and information about the environment. Then, between 2006 and 2010, the protactile movement took root in Seattle, and it has since been spreading quickly across the country.
The protactile movement started under the leadership of a woman I call Adrijana, who was the first ever DeafBlind director of the DeafBlind Service Center, a non-profit organization that provides advocacy and communication-related services. When Adrijana took her post, she replaced most of the staff with people who shared her vision for the future: to establish a DeafBlind space where interpreters were not needed and where navigation, communication, and co-presence felt natural. It wasn’t clear at the outset what practices might count as protactile, but some things were clearly anti-tactile, like the habit people had of pausing, or “freezing,” when a DeafBlind person touched them. The “freezing” phenomenon, according to Adrijana, had an eerie effect. Conference rooms, offices, and hallways seemed perpetually occupied by people who were suspended in mid-air. Adrijana told me that when she was with another person, for example, eating lunch and conversing, she would take a bite, and then feel the other person’s hand or arms to see if they were still eating or not. If they weren’t, she might say something to them, and if they were, she might want to feel their hands take the food to their mouths, or maybe feel their jaw chewing (just to see what kind of chewer they were). But every time she put her hands on theirs, they would pause, awkwardly, until she removed her hands. Or if people were standing around talking in the conference room before a meeting, she would approach them, put her hands on one of them, and hope that they would continue signing, so she could tell what they were talking about. Inevitably, though, the conversation would stop. Either the people would stop moving, as if they didn’t know what to do, or they would ask her what she wanted.
She didn’t know what she wanted because she didn’t know what the possibilities were; it was impossible to initiate any kind of communication, because as she said: “Everything we touched froze!” It was also hard to gain access to anything but the hands of other people. Even then, access was restricted to contexts where linguistic signs were being produced. Hands that were busy gripping an umbrella or turning a page, for example, were entirely inaccessible. Over time, this lack of access to embodied activity led to a deterioration of sighted intuitions about the meanings of gestures and other bodily movements. Things like a shrug, raised eyebrows, hunched shoulders, or pocketed hands became cryptic, surprising, and even disturbing when they were encountered. Connections between visible bodily cues and shared activity frames were breaking down.
Prior to the protactile movement, these problems had been addressed by ramping up the compensatory mechanisms that were in play and trying to help DeafBlind people seem as sighted as possible. But the leaders of the protactile movement went a different way. They opted instead to establish conventions for reciprocal, tactile communication between DeafBlind people, which would require no compensation at all. Toward this aim, a series of 20 protactile workshops were organized by two DeafBlind instructors for 11 DeafBlind participants in the fall of 2010. No interpreters were provided and no sighted people were included (1). In the workshops, DeafBlind people stopped trying to reconstruct visual scenes
or compensate for vision loss (2). Instead, they returned to basic interactional frames and sought out new, tactile ways of realizing them. The process usually started with two or more DeafBlind people remembering types of activity they used to enjoy. They would say: Remember when we used to be able to watch other people do things? Or remember when we used to be able to listen to someone else’s conversation, and then decide that we didn’t want to join it? Interactional frames like this were recoverable, but the embodied knowledge required to enact and recognize them had been lost. Therefore, when people referred to people, things, or events in the immediate environment, reference was difficult or impossible to resolve.
This article focuses on the role of deictic reference in re-routing modes of access required to converge on common referents. I argue that as DeafBlind participants worked their way toward common pathways in their environment, language and gesture were aligned in new ways, giving rise to novel linguistic forms. At the center of this transformation is a process I call deictic integration. Deictic integration restricts the range of contextual values that the grammar can retrieve by coordinating the deictic system with patterns in activity. I argue that this process constrains how language and gesture are coordinated as new signs are created.
I start by discussing the problem of sign-creation as encountered in the sign language linguistics literature. In the following section, I introduce three key constructs for understanding processes of sign-creation in the Seattle DeafBlind community: the deictic system, the deictic field, and deictic integration. I then show how DeafBlind individuals have worked to re-structure their embodied orientation to the environment, and how orientation schemes are converging among DeafBlind people in interaction. In the section to follow, I show how pointing signs in Protactile American Sign Language are registering this shift in collective patterns of orientation. From there, I show how the same processes triggering a divergence in how deictic reference is accomplished in Protactile and Visual ASL are also affecting the organization of Protactile ASL at the sublexical level. I conclude by highlighting implications of these findings for our understanding of sign-creation.
Sign-creation
Since the inception of the field of sign language linguistics, scholars have been at pains to distinguish language from “gesture” and “pantomime”. One way this has been done has been to posit hypothetical models of sign-creation that begin with an act of reference (e.g., Boyes-Braem, 1981, p. 42; Brennan, 1990, pp. 11–36; Mandel, 1981, pp. 204–211; Taub, 2001, pp. 43–60). In these scenarios, one aspect of the object is “selected” to metonymically represent the whole. This iconic, gestural form is then analyzed into the phonological parameters of a particular signed language. The resulting form, despite its iconicity, is thereby incorporated into the arbitrary linguistic system and is further subject to morphological, syntactic, and semantic constraints.
Hypothetical models like this, particularly in the early stages of research on signed languages, were used to defend the linguistic status of signed languages against claims of inferiority. The authors who established models of selection did not claim descriptive adequacy for any diachronic or interactional process. However, in recent work, similar models have been drawn on to explain actual instances of sign-creation in emergent signed languages (e.g., Aronoff et al., 2008; Sandler et al., 2011, p. 519).
The present study contributes to this work by showing how DeafBlind people draw on novel gestural resources, emerging from patterns of activity organized along tactile, rather than visual lines. Taking a practice approach (Hanks, 2005a, 2005b; Edwards, 2014), I argue that a range of potentially iconic relations are ruled out prior to the imposition of grammatical constraints, on social and interactional grounds. More specifically, I argue that successful acts of reference require the coordination of gestural and linguistic elements, both of which are constrained by shared modes of access to the environment. Prior to the establishment of shared modes of access, the selection of one aspect of the object to represent the whole will not feel obvious or transparent across a group of language-users.
The data for this study were collected during a 10-week series of protactile workshops, which were led by two DeafBlind instructors for 11 DeafBlind participants in Seattle in the fall and winter of 2010 and 2011. The overarching aim of these workshops was to create an environment where DeafBlind people could find ways of communicating with one another directly, without the use of sighted interpreters. Toward this end, many participants chose to wear blindfolds, to prevent whatever vision they had left from interfering with this aim, and they participated in activities designed to cultivate tactile sensibilities and modes of communication. One of these activities was called “the object game.” The instructors brought a bag of objects to the workshop, which included things like a decorated metal tea strainer, a toy snake which moved in a distinctive way when picked up, a car-charger for a smart phone, and so on. Then they arranged participants in pairs. One person chose an object from the bag and explored it tactually. After putting the object back in the bag, they described it in detail to the second participant. The addressee was then given the object and asked to evaluate the description in terms of how well it prepared them for the qualities of the object. The pair went on to negotiate the description until they settled on one they both felt represented the object adequately. This paper focuses on forms that were ratified by this process, most of which were used to depict the handling of objects.
According to Dudis (2004), depiction in American Sign Language requires the signer to map aspects of conceptualized scenarios onto the body of the signer and the surrounding space, using symbolic and iconic resources that can be both linguistic and gestural. Depiction is one way that signers can generate novel expressions. Depictive qualities of novel expressions can be foregrounded and backgrounded in subsequent usage events, and this possibility is a pervasive resource for language-users. This paper contributes to our understanding of depiction and sign-creation by examining the indexical relations that are drawn on in aligning modes of access among participants, thereby making common conceptualizations of embodied experience available for interactants to depict. In order to understand this process, I focus on the interaction of the deictic system and the deictic field.
The deictic system and the deictic field
Together, the deictic system and its corresponding deictic field structure how people refer to objects and events in the immediate environment (Bühler, 2011 [1934]; Hanks, 1990). Bühler compares the deictic system to a signpost, positioned on the “pathways” of its corresponding deictic field. Like a signpost, deictic words, such as here and there, are combined with pointing gestures to create a perceptually salient sign that directs its recipient. For example, when a human “opens his mouth and begins to speak deictically, he says … there! is where the station must be, and assumes temporarily the posture of a signpost” (Bühler, 2011 [1934], p. 145). Despite its minimal semantic content, the meaning of the deictic expression is not difficult to sort out because speakers “can do nothing other than take advantage – naturally to a greater or lesser extent – of the possibilities the deictic field offers them; moreover, they can do nothing that one who knows the deictic field could not predict, or, when it turns up, classify” (Bühler, 2011 [1934], p. 145). In other words, a deictic sign is a signal to choose one path over another; it does not launch a trajectory into unstructured space.
Within a field of limited choices, the deictic sign, like the signpost, does two things: it names and it points. Its capacity to name derives from oppositions in the language (here is not there), which contribute to the definiteness of reference. The capacity of the deictic sign to point derives from patterns in the field where it is inserted. These pathways contribute to the directivity of reference. Therefore, when a deictic sign is applied in the speech situation, it must retrieve values from two distinct sources: the linguistic system and the deictic field. All deictic signs are composite in this respect, composed of both “symbols” and “signals” (Bühler, 2011 [1934], p. 99). Speaking deictically requires the coordination of values from each field in the unfolding of the utterance.
In spoken languages, the deictic system is composed of discrete, oppositional categories, which encode highly schematic semantic distinctions. There is growing evidence that pointing signs in signed languages work like this, too. They can act as determiners, demonstrative pronouns, anaphoric deictic elements, personal pronouns; they can be lexicalized as temporal deictics like yesterday and tomorrow, and these different functions correspond to differences in form (Pfau, 2003, pp. 148–151).
Those differences, which derive from the linguistic system, help signers single out a thing among other things. However, when a deictic sign is applied in the speech situation, definite meanings must be coordinated with elements that direct the addressee’s attention along pathways in the deictic field, which are shaped by memory, perception, the physical capacities of the individual, routine routes through familiar spaces, the intuitions one develops for how a city, a village, a store, or a parking lot might be organized, etc. These pathways extend out around the language-user like an orienting grid. Each person’s experience is, in some measure, unique, and their orienting grids reflect those differences. Therefore, a “reciprocity of perspectives” must be established before reference can be reliably resolved. Schutz explains that where there is reciprocity:
I take it for granted – and assume my fellow man does the same – that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa).
(Schutz, 1970, p. 183)
From Bühler’s perspective, this kind of reciprocity is assumed, since patterns in the deictic field are learned, and therefore, to some degree, shared. However, observers of interaction have noted that a great deal of work is invested to establish reciprocity, moment-to-moment, in the unfolding of communicative activity; it cannot be assumed a priori (e.g., Goodwin & Heritage, 1990; Levinson, 2006; Goffman, 1964).
In order to account for the structures that are present prior to activity, and those that are worked out in the course of an interaction, Hanks synthesizes Goffman’s “situation” and Bühler’s deictic field (Hanks, 2005b, p. 192). This yields a construct that can account for: (1) “the positions of communicative agents relative to the participant frameworks they occupy;” (2) “the position occupied by the object of reference;” and (3) “the multiple dimensions whereby agents have access to objects” (Hanks, 2005b, p. 193). These dimensions often include perceptual access, but they can also include shared knowledge, memory, imagination, or any other relation that allows signer and addressee to single out an object of reference against a horizon of potential referents. Therefore, while each individual comes to an interaction with orienting schemes of their own, the activity of referring requires those schemes to be coordinated in repeatable and expectable ways over time, as modes of access are embedded in routine and reciprocal patterns of activity. One of the ways in which embedding occurs is through the movement from “participant frameworks” to “participant frames.”
Participant frameworks are the emergent configurations that communicative agents occupy in the unfolding of an interaction, while participant frames are the repository of regularities that emerge in participant frameworks across encounters (Hanks, 1990, pp. 137–187). Interactants schematize participant frameworks in the course of communicating, and these schemes generate a subset of maximally expectable configurations within which signs are produced and received (Hanks, 1990, p. 148). These regularities, which inhere in the deictic field, are in dynamic tension with conventional categories in the language.
For example, person categories in the deictic system of a language are linked to participant roles, so the use of pronouns “tends to sustain an inventory of participant frames by focalizing them, engaging them as ground for further reference, or both” (Hanks, 1990, p. 148). In order to account for the dynamic tension between deictic categories in the language and their corresponding roles and structures in the deictic field, I would like to discuss the term deictic integration in some depth.
Deictic integration
Deictic integration accounts for the coordination of linguistic and deictic elements into tighter and more restricted configurations over time, such that (a) when a deictic sign is instantiated, the range of retrievable values in the deictic field is restricted to a small and alternating set; and (b) deictic signs are organized by contrastive opposition and subject to grammatical constraints. For example, the pronominal system of Visual American Sign Language (VASL) makes a two-way distinction between first and non-first person (Meier, 1990, p. 377). The first person pronoun is produced with a pointing sign directed toward the signer, and the non-first person pronoun is produced with a pointing sign directed away from the signer. This distinction aligns with a basic bodily participant framework occupied by signer and addressee.
Hanks argues that basic participant frameworks should correspond to the way participants perceive interaction, and therefore should be relatively simple, since participants do not generally struggle as they move between participant roles. Some clues as to how participant frameworks are perceived can be found in conventional and commonly used labels, as well as the frames that are treated by participants as usual or expectable (Hanks, 1990, p. 152). Citation forms in VASL are consistently recorded by a camera placed at eye-level a few feet away from the signer. This is evidence that VASL signers treat that bodily configuration as basic. The pronominal system of VASL registers that fact by mapping a single distinction onto that configuration. In other words, the pronominal system is aligned with, and retrieves values from, the basic participant frame inhabited by VASL users and the result is that the language-user is forced to choose one of two highly restricted values: first or non-first person.
In contrast to pronouns in VASL, non-linguistic pointing gestures are responsive to a wide range of contextual dynamics. This is the difference between a pointing gesture and a pronoun: the former can retrieve a wide range of values from the deictic field, while the latter is set to retrieve one of a restricted set. If the pointing gesture is momentarily altered as it is brought into relation with some dimension of context, linguistic and deictic elements are merely coordinated. If there is a restricted set of values (e.g., person values), and one of those values must be selected in order to produce a grammatical utterance, linguistic and deictic elements are integrated. The process whereby the deictic system and the deictic field are coordinated into tighter and more restricted configurations is what I call deictic integration (Edwards, 2014, pp. 27–61, 159–190).
Recent work on nascent signed languages suggests that deictic integration plays an important role in language-emergence. In Nicaragua, for example, the emergence of a new signed language has been associated with the creation of spatially modulated verbs (Senghas, 2000; Senghas & Coppola, 2001; Kegl et al., 1999). Early on in the development of Nicaraguan Sign Language, verbs like “speaking-to (a person)” were expressed in the following way: the signer would point to a person in the immediate environment, produce the verb, and then sweep the finger from one person to another to indicate who was speaking to whom. Later on, signers moved the verb from one location to another, incorporating the sweeping pointing gesture into a single, verbal sign (Kegl et al., 1999; Senghas, 2000). This is like syntactic agreement in the sense that relations are being established between a verb and entities that can be represented by nominal signs. However, the referents are not represented by nominal signs. Instead, they are linked directly to the verb via a deictic gesture.
These elements, which behave in part as one would expect deictic gestures to behave, and in part as one would expect grammatically marked nouns to behave, suggest that a process of deictic integration is underway in Nicaraguan Sign Language. Recall that deictic integration draws linguistic and deictic elements into tighter and more restricted configurations as the language develops (Edwards, 2014, pp. 27–61, 159–190). On the one hand, deictic relations are increasingly caught up in and organized by grammatical relations, but the reverse is also true: the language becomes increasingly dependent on the deictic field to express grammatical relations. We know that as signed languages emerge, they tend to develop a class of verbs known as “directional verbs”, which rely on this kind of mutual dependency (Meier & Lillo-Martin, 2012). Directional verbs integrate anaphoric deictic elements and linguistic elements to mark grammatical relations (Rathmann & Mathur, 2002). Here, a highly restricted set of alternating values (i.e., person and number values) is retrieved from the anaphoric deictic field. However, once they are retrieved, those values act like arguments of the verb, as opposed to referents. This ambiguity between arguments and referents is a result of deictic integration, and is associated with the transition from gestural communication system to signed language (Edwards, 2014, pp. 27–61).
Deictic integration has also been important in the emergence of Al-Sayyid Bedouin Sign Language (ABSL). For example, early on in its development, ABSL developed a productive morphological process whereby one deictic sign and one characterizing sign are compounded to produce place names (Aronoff et al., 2008). As these connections have become increasingly integrated, the order of the compounded elements has become fixed; the deictic component is word-final (Aronoff et al., 2008, p. 146). In other words, deictic elements have become increasingly caught up in, and coordinated by, grammatical relations, and the inverse is also true: as the grammar became more tightly coordinated with elements and relations in the deictic field, semiosis became more language-like. Deictic integration is a resource for sign-creation because it aligns the language with modes of access and orientation that are reciprocal among users of that language. This is only possible because, as Karl Bühler points out, linguistic elements are not related to the fields in which they are inserted as matter is related to form (2011 [1934], p. 17). Instead, linguistic and non-linguistic fields form a gestalt, foregrounding and backgrounding inserted elements. Objects are represented indirectly via the juxtaposition of interlocking fields, each one introducing some arbitrariness of its own. As you move further out from the core mediating systems of a language, you arrive at the world, where you find what Bühler calls “differences in world view” (2011 [1934], p. 171) or what Schutz calls “differences in perspective”. At the outer perimeter of the language is the deictic system, reaching on one side toward the grammar, and on the other, toward the deictic field. Through patterns of retrieval and integration, the language is aligned with the world as it is perceived by the users of that language, and those processes echo in arbitrary ways as they move from the perimeter to the core of the grammar.
In what follows, I show how deictic integration is affecting processes of sign- creation among DeafBlind people in Seattle. I begin with the reconfiguration of sensory orientation schemes; I describe some of the practices that contributed to this process and I show how it has triggered a reconfiguration of the deictic field. Next, I show how patterns in deictic retrieval are bringing the linguistic system in line with the deictic field. I argue that iconic relations become reciprocally avail- able across a group of language-users as a result of this process.
Reconfiguration of orientation schemes among DeafBlind people
In 2010 and 2011, as part of a 12-month period of sustained anthropological fieldwork, I accompanied two DeafBlind people during Orientation and Mobility (O&M) training sessions, where they learned to use a cane and other mobility equipment to navigate public, urban spaces. I took detailed notes and drew sketches of the spaces we moved through. As the DeafBlind person applied what they were taught, the O&M instructor, Marcus, narrated. On one of our first outings, Marcus explained to me that the first task is to develop “tactile awareness” around materials – brick, concrete, gravel – the differences between them and patterns in sequencing. The second task is to apply this awareness to travel, so when the cane encounters a texture, it is incorporated seamlessly into the rhythm of the forward-moving traveler.
For example, a DeafBlind man who I call Allan was learning the route from his home to a particular bus stop. Blocks are easy to count thanks to the textures where the sidewalk meets the road on the corner. However, bus stops are not on corners; they are always some ambiguous distance away. To cope with this fact, Allan had to learn to attend in a different way to patterns in how the city is organized. He started with facts he already knew, for example: in cities there are many doorways. From there, he noticed things he didn’t already know, for example, that the material on the ground in the entryway sometimes has a different texture than the main sidewalk, which can sometimes be detected by the cane. He learned that sometimes entryways are set back from the rest of the wall, forming a negative space that is detectable with the cane, or with the “mini-guide,” a small, handheld device that bounces sonar off of surfaces, returning different intensities of vibration depending on how close the object or surface is. When Allan shifted his attention to these aspects of his environment, he could easily grasp that the bus stop was two doorways from the corner, along the “shoreline” (an orienting line – in this case, the line where the fronts of the businesses on that block come in contact with the sidewalk).
Orientation and Mobility training is one way to re-organize trajectories, pathways, and grids around new modes of access. This training requires the traveler to cultivate modes of receptivity and responsiveness to the material qualities of things. Material fragments are concretely incorporated into a trajectory and a rhythm. A doorway becomes a tactile silence in the rhythm, which is preceded by a hard tap against the brick-sided building and is followed by the same. The sequence of material cues is incorporated into the pathway between the street corner and the bus stop along with other material clues, all of which guide the traveler. The patterns in the pathway accrue to the traveler as embodied schemes for orientation. These schemes, insofar as they are reciprocal across a group of language-users, contribute to the indexical ground of reference, affecting the language-user’s ideas about what will be relevant, accessible, and detectable in their environment (3). When DeafBlind people started communicating directly with one another, they struggled to resolve reference. As a result, they had to adjust their orientation schemes in ways that highlighted tactually accessible figure/ground relations. For example, if you have lived your life as a sighted person, then the path in Figure 1 will feel intuitive to you. This is because you have traveled to many doorways, and you have orientation schemes that have been built up around those experiences, so that lines like this extend out around you, suggesting possible routes of travel, and possible relations between materials, according to which objects can be identified.
If I were to ask you where the door is, you might extend your arm out along one of these lines, and although my vision might be a little better or worse than yours, your vestibular system might be a little off, and so on, there is enough reciprocity in our perspectives that these grids we’re embedded in converge, allowing us to resolve reference with no extraordinary effort.
Figure 1. Visual orientation scheme
If you are a person who has lost your vision, then you are a visual person who can’t see. You do not automatically acquire new orientation schemes. However, if you begin to attend to your environment in new ways, as Allan did, the path to the door in Figure 1 will no longer feel intuitive. Instead, an alternate route would be far more likely. First, some kind of shoreline would have to be identified, as in Figure 2. From there, the door can be identified as negative space as the traveler moves from their location against the wall to the door. Over time, intuitions grow stronger about how and where potential lines of travel intersect, and where spaces, protrusions, and patterns in the sequencing of materials will emerge. The DeafBlind traveler is subsumed by these patterns, and an orienting grid of overlapping coordinate systems extends out around them. Sensory systems converge on, and are elaborated by, this grid, but they are not by any means identical with it. In order for reference to objects like the door in Figure 2 to be resolved, orienting grids must become reciprocal across the group of language-users, and those grids must fit seamlessly with sensory capacities that are common to all (4).
Figure 2. Tactile orientation scheme
Prior to the protactile workshops, DeafBlind individuals were at various stages in this process of developing new tactile orientation schemes. The schemes therefore were not reciprocal and there was no interactional process putting pressure on that fact, since communication was mediated by sighted interpreters. This state of affairs caused the deictic system to disarticulate from the deictic field, so when pointing signs were instantiated in the speech situation, they had no directivity. DeafBlind people would say to their interpreters: “When you point like that, it’s all air to me,” but most interpreters didn’t know how to do anything else. When DeafBlind people began communicating directly with one another, they addressed these problems in ways that sighted people would never have imagined.
Convergence of orientation schemes in interaction
Prior to the protactile movement, DeafBlind people received instructions from interpreters to respond, or to wait, or to comment, or to turn their body toward the addressee. In protactile frameworks, DeafBlind people responded to one another, directly in configurations like the one in Figure 3.
Figure 3. Protactile configuration
When DeafBlind people started communicating directly with one another, communicative signals were exchanged and patterns were established. Slow tapping on the knee became a signal of agreement; contact without movement became a sign of attention; fingertips moving quickly up and down became a way of expressing amusement, and so on. The signer could tell that the addressee was distracted if their responses were not timed correctly, and if the addressee’s body heated up, an inquiry might follow – Is this stressful? Am I being unclear? The exchange of signals like these opened up the possibility of carefully timed, mutual adjustment. As a result, DeafBlind people began to converge on reciprocal orientation schemes, and their language registered this shift.
Pointing in VASL and PTASL
Pointing signs were the first, but not the only, signs to register changes in the configuration of the deictic field. Prior to the protactile movement, pointing signs were produced as would be expected in Visual American Sign Language, and received via tactile reception (Figure 4). In protactile configurations, DeafBlind people realized that pointing signs like this are not effective for resolving reference because (a) they project a vector against a visually perceptible ground, and (b) they articulate to a field organized around visual modes of access.
Figure 4. VASL pointing sign
Figure 5. PTASL pointing sign
In order to address these problems, DeafBlind signers systematically altered pointing signs along two dimensions. First, they transposed the pointing handshape to the body of the addressee. I call this process signal transposition (Edwards, 2014, pp. 173–176). Second, they altered the directional lines of the sign, so they articulate to a field organized by tactile geometries like the one in Figure 5, rather than visual geometries, like the one in Figure 4. I call this process sign calibration (Edwards, 2014, pp. 173–179). Notice that the lines traced on the palm of the addressee map onto pathways in the environment. In other words, the sign is motivated by an iconic relation. However, that relation only becomes available to participants once the deictic system and the deictic field are brought into alignment with one another and reciprocal modes of access have been established.
Sign-creation in the Seattle DeafBlind community
I have argued that in order to align the deictic system and the deictic field, DeafBlind signers systematically altered pointing signs in two ways. First, they transposed the handshape of the pointing sign onto the body of the addressee. Second, the directional lines of the sign were calibrated to a grid of intelligibility organized around tactile modes of access. Neither of these processes is linguistic; however, they were part of a broader divergence at the sublexical level in the structure of Visual and Protactile ASL(5), which affects the way that novel signs are created. In addition, aspects of objects are beginning to be foregrounded, or “selected”, in depictive expressions, which would not be expected given visual modes of access and orientation.
Below, I consider several new PTASL signs, which were created in the context of a longer interaction between three DeafBlind people in the protactile workshops. In these examples, Lee is describing a phone charger like the one in Figure 6. In order to describe the cord, she recruits the hand of the addressee. First, she manipulates the addressee’s hand into a partially open fist. Then she runs her pinky finger into the center of the fist, tracing a tight spiral pattern on the addressee’s palm, moving outward (Figure 7, represented schematically in Figure 8a and Figure 8b).
Figure 6. The phone charger
Figure 7. Sign representing the cord
Figure 8. Schematic representation of sign depicting cord
I encourage the reader to place their pinky finger inside of their partially-closed fist, and in a spiral motion, move from the center to the outside of the fist. If you have a spiral cord, like the one shown in Figure 6, pull it slowly through your partially closed hand and move your hand over it. If you have done this, you will notice a tactile resemblance between the sign and its referent. However, in order for this resemblance to appear, you must turn your attention to the tactile qualities of the object and the tactile dimensions of the representation, and you must assume that your interlocutor will do the same. From there, you must apply an orientation scheme, which extends beneath linguistic and non-linguistic processes and adheres to the following criterion: each foregrounded element must be presented against an accessible ground. This criterion renders the places of articulation made available by the grammar of VASL inadequate; neither the body of the signer, nor the space in front of the signer, is accessible as a ground against which signs can be articulated. To resolve this problem, the signal has been transposed onto the body of the addressee. As the signer continues, this pattern of adjustment in figure-ground relations continues to organize the creation of signs.
In Figure 10, Lee describes the button at the top of the charger in Figure 9 by grasping the index, middle, and ring finger of the addressee. She presses on the tip of the middle finger several times, as in Figure 11. Again, imagine yourself exploring the object tactually. As you run your fingers over the body of the charger and up toward the tip, you encounter a small piece of metal, which gives way to your touch. The most salient thing about this part of the charger, as you explore it tactually, is the fact that it moves when pressed on, while the rest of the charger remains stationary. The sign representing the button is iconic, but the salience of that element over others is structured by modes of access, which accrue to the indexical ground of reference.
Figure 9. The button
Figure 10. Sign representing the button
Figure 11. Sketch of the sign representing the button
Another part of the phone charger that is salient from a tactile perspective, but perhaps not from a visual perspective, is the metal springs on either side of the shaft, which hold it in place when it is plugged in (Figure 12). In order to describe this portion of the charger, Lee isolates the index and middle fingers of the addressee and then pushes and releases several times, as in the sketch in Figure 13.
Figure 12. The metal springs on the charger
Figure 13. Sketch of sign representing metal springs on the charger
I encourage the reader to produce this sign on their own hand or, even better, on someone else’s hand. You will notice a feeling that is tactually similar to pressing on small, metal springs. Once again, however, the assumption that the addressee will have tactile, rather than visual, knowledge of the object follows from reciprocal modes of access.
No user of Visual American Sign Language would spontaneously create these signs. The most obvious reason is that they violate restrictions on where signs can be produced in VASL. For example, Stokoe (2005 [1960]) observed that the “zero tab” (the space in front of the torso of the signer) is constrained by motor capacity as well as economy. While it is physically possible to produce signs in other regions around the body of the signer, this restricted space allows for the greatest ease of articulation (2005 [1960], p. 25). Within this restricted signing space, there are also arbitrary constraints, which come into view in a cross-linguistic frame. For example, the back of the head and the underarm are never recruited as places of articulation in VASL, but in other signed languages they are (Mandel, 1981, p. 11). The use of a range of locations on the body of the addressee, which are not drawn on by the VASL system, suggests a divergence in underlying constraints on sign production. In Figure 14, the shaded regions represent locations on the addressee’s body where PTASL signs are produced(6).
Figure 14. Attested places of articulation in PTASL
While any location on the body of the addressee would satisfy the figure/ground requirements of a tactile language, the locations that are thus far attested allow for ease of articulation. They do not, however, allow for ease of articulation in visual signed languages – not because of any motoric or cognitive constraints imposed on the signer alone – but because the basic participant frameworks that Deaf, sighted people occupy differ from basic participant frameworks occupied by protactile DeafBlind people.
Recall that the physical relation of one body to another in interaction is organized by participant frames. Access to the object is grounded in the bodily configurations through which participant frames are realized. Therefore, objects are objectified against a background which includes the embodied orientation scheme occupied by speaker and addressee. Principles of economy can only be applied given stable patterns in interaction. There is a shift in perspective that is necessary for grasping this fact. Rather than viewing the body as a producer and receiver of signs, it must be viewed as part of the indexical ground of communicative activity. The body that appears under this perspective interacts with the body that appears under a linguistic perspective, but it is not identical with it, and must be distinguished, analytically (i.e., one would not want to analyze deictic reference as though it were a phonetic phenomenon). In practice, though, there is only one body; therefore, when relations between the body and objects in the immediate environment snap to a new set of coordinates, organized by new modes of access, the linguistic system is affected. DeafBlind signers began to generate new orientation schemes, which extend, in practice, across several analytically distinct domains. These orientation schemes structure processes of sign-creation in unpredictable ways, as they are filtered through interlocking linguistic and interactional fields. This process did not begin with the language, but rather, with changes in how touch is evaluated socially.
Activity frames that DeafBlind people had previously realized in sighted ways had to find new expression via kinesthetic, olfactory, tactile, and thermal channels, which, at the outset, felt inappropriate to DeafBlind participants. Pressing my chest against your back, reaching my arms over your arms, placing my hands on your hands, and attending to the subtle movements of your fingers on clay is like standing a few feet from you, fixing my gaze on your hands, noticing the way you manipulate clay. These practices are the same; we can call them both “watching”, and we will refrain from evaluating the tactile version as inappropriate. Co-presence is also organized by schemes that can be re-routed via tactile and thermal channels: if I put my hand on your thigh or shoulder, you have access to my body’s fluctuating temperature. These signals reflect changes in my mood, exertions of effort, and my physical responses to the environment. Attending to the temperature of my skin is the same as looking at me from a location nearby (nothing inappropriate has transpired). For Deaf, sighted people, these practices are foreign, unfamiliar, or else assigned to different frames of activity. For DeafBlind people, they have become new ways of expressing old interactional patterns. Protactile practices, therefore, are generating coherence for DeafBlind people, while simultaneously driving a divergence between Tactile and Visual American Sign Language.
Conclusion
Models of sign-creation in the signed language linguistics literature posit an iconic, gestural approximation of a conceptual representation, which is analyzed to the phonological parameters of a particular signed language (Boyes-Braem, 1981, p. 42; Brennan, 1990, pp. 11–36; Mandel, 1981, pp. 204–211; Taub, 2001, pp. 43–60; Sandler et al., 2011). The process I have just described contributes to our understanding of sign-creation by examining some of the social and interactional mechanisms that restrict input to that process. In the Seattle DeafBlind community, changes in the social evaluation of touch led to changes in structures of participation. This led, in turn, to changes in the way deictic reference was resolved. Over time, patterns in deictic retrieval triggered a process of deictic integration, which affected the way that language-users converge on common perspectives, including the figure/ground relations that structure processes of sign-creation.
Footnotes
1 The only sighted people present were part of our video crew. Participants were instructed not to interact with us.
2 Oliver Sacks writes about a man, John Hull, who slowly lost his sight. He describes how Hull became increasingly alienated from visual imagery and memory and, eventually, how he lost all connection to the visual world. This became evident when the faces of loved ones could no longer be conjured; deictic words, such as “here” and “there”, lost meaning; and objects were no longer imbued with visual characteristics of any kind (2003, pp. 48–49). Instead of fighting the process, Hull gave himself over to it, entering into a world of rich acoustic experience. For example, the sound of rain, which was once experienced as a background sound, took on a crucial role in delineating entire landscapes. “Rain”, Hull writes, “has a way of bringing out the contours of everything … Instead of an intermittent and thus fragmented world, the steadily falling rain creates continuity of acoustic experience…[it] gives a sense of perspective and the actual relationships of one part of the world to another” (Sacks, 2003, p. 49). According to Sacks, when Hull let go of visuality, the gaps in his experience were filled in, and a new kind of perceptual coherence was achieved. However, there are also people who respond to vision loss by devoting all of their attention to developing what Sacks calls their “inner eye”, a strategy that involves constructing visual scenes around available sensory input, or reconstructing them from memory. These scenes can reportedly be as vivid and detailed as visual perception, or else more vivid – like a hallucination or a dream. People who can no longer see remember “sky-blue buses”, “egg-yellow trams”, and “beache[s] of crystallized salt shimmering like snow under an evening sun” (Sacks, 2003, p. 52). In the Seattle DeafBlind community, the “inner eye” proved to be unreliable, and those who decided to abandon it joined the protactile movement, choosing to fill in the gaps in their experience in new tactile, olfactory, and kinesthetic ways.
3 While Orientation and Mobility training has been available to DeafBlind people in Seattle for many years, the protactile movement made this kind of training more appealing. The process was also replicated in unofficial ways in other venues.
4 Reciprocity is not a descriptive term. It is a principle interactants orient to. Participants assume that their interlocutors can access particular olfactory, thermal, and tactile aspects of the environment, and that they cannot access visual or auditory dimensions, even though some people have partial vision. Deaf communities work this way too: While auditory capacities vary widely among Deaf people, in signing environments, visual, kinesthetic, and olfactory modes of access are assumed. One does not ask, “How much do you hear? Do you need me to sign?” before signing. Regardless of how much anyone can hear, it is assumed that Visual ASL is the appropriate choice for communication.
5 Sublexical constraints on the formation of signs, including constraints on symmetry, complexity, place of articulation, and “weak-drop”, are either relaxed or outright violated (Edwards, 2014, pp. 192–244).
6 This was the case as of 2011. However, in 2015, during a short field trip to Seattle, the author noted that these regions had grown. Signs were being produced in regions of the addressee’s body that were expected to be excluded on social grounds, including the chest and the belly of the addressee (regardless of gender).
Acknowledgments
Thank you to the Wenner-Gren Foundation for Anthropological Research (Grant #8110) and the Diebold Foundation for Linguistic Anthropological Research for funding this research, as well as the Office for Research Support and International Affairs at Gallaudet University for supporting the writing phase. The argument presented in this paper greatly benefited from conversations with Robert T. Sirvage and members of his protactile design class at Gallaudet, Jelica Nuccio, ajgranda, Paul Dudis, Mara Green, Eve Sweetser, Elisabeth Wehling, and the participants of the Berkeley Gesture Pragmatics conference.
References
Aronoff, Mark, Irit Meir, Carol Padden, & Wendy Sandler (2008). The roots of linguistic organization in a new language. Interaction Studies, 9 (1), 133–153. doi: 10.1075/is.9.1.10aro
Boyes-Braem, Penny Kaye (1981). Features of the handshape in American Sign Language. PhD Dissertation, University of California, Berkeley.
Brennan, Mary (1990). Word formation in BSL. Doctoral Dissertation, University of Stockholm.
Bühler, Karl (2011 [1934]). Theory of language: The representational function of language. Amsterdam & Philadelphia: John Benjamins.
Edwards, Terra (2014). Language emergence in the Seattle DeafBlind community. PhD Dissertation, University of California, Berkeley.
Goffman, Erving (1964). The neglected situation. American Anthropologist, 66, 133–136. doi: 10.1525/aa.1964.66.suppl_3.02a00090
Goodwin, Charles & John Heritage (1990). Conversation analysis. The Annual Review of Anthropology, 19, 283–307. doi: 10.1146/annurev.an.19.100190.001435
Hanks, William F. (1990). Referential practice: Language and lived space among the Maya. Chicago: University of Chicago Press.
Hanks, William F. (2005a). Pierre Bourdieu and the practices of language. Annual Review of Anthropology, 34, 67–83. doi: 10.1146/annurev.anthro.33.070203.143907
Hanks, William F. (2005b). Explorations in the deictic field. Current Anthropology, 46, 191–220. doi: 10.1086/427120
Kegl, Judy, Ann Senghas, & Marie Coppola (1999). Creation through contact: Sign language emergence and sign language change in Nicaragua. In Michel DeGraff (Ed.), Language creation and Language change: Creolization, diachrony, and development (pp. 179–237). Cambridge, MA & London: MIT Press.
Levinson, Stephen C. (2006). On the human interaction engine. In Stephen C. Levinson & Nicholas J. Enfield (Eds.), The roots of human sociality: Culture, cognition, and interaction. London: Berg.
Mandel, Mark Alan (1981). Phonotactics and morphophonology in American Sign Language. PhD Dissertation, University of California, Berkeley.
Meier, Richard P. (1990). Person deixis in American Sign Language. In Susan D. Fischer & Patricia Siple (Eds.), Theoretical issues in sign language research, Vol. 1: Linguistics (pp. 175–190). Chicago: University of Chicago Press.
Meier, Richard P. & Diane Lillo-Martin (2012). Response: The apparent reorganization of gesture in the evolution of verb agreement in signed languages. Theoretical Linguistics, 38, 153–157. doi: 10.1515/tl-2012-0009
Rathmann, Christian & Gaurav Mathur (2002). Is verb agreement the same crossmodally? In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (Eds.), Modality and structure in signed and spoken languages (pp. 370–404). Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511486777.018
Sacks, Oliver (2003). The mind’s eye: What the Blind see. The New Yorker, July 28, 48–59.
Sandler, Wendy, Mark Aronoff, Irit Meir, & Carol Padden (2011). The gradual emergence of phonological form in a new language. Natural Language and Linguistic Theory, 29, 503–543. doi: 10.1007/s11049-011-9128-2
Schutz, Alfred (1970). On phenomenology and social relations. Chicago & London: University of Chicago Press.
Senghas, Ann (2000). The development of early spatial morphology in Nicaraguan Sign Language. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th Annual Boston University Conference on Language Development, Vol. 2 (pp. 696–707). Somerville, MA: Cascadilla Press.
Senghas, Ann & Marie Coppola (2001). Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science, 12, 323–328. doi: 10.1111/1467-9280.00359
Stokoe, William C. (2005 [1960]). Sign language structure: An outline of the visual communication systems of the American Deaf. Journal of Deaf Studies and Deaf Education, 10, 3–37. doi: 10.1093/deafed/eni001
Taub, Sarah F. (2001). Language from the body: Metaphor and iconicity in American Sign Language. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511509629
Biographical notes
Terra Edwards is Assistant Professor of Anthropology at Saint Louis University. Her research examines the social and interactional foundations of a grammatical divergence between Visual American Sign Language and Protactile American Sign Language in the Seattle DeafBlind community. She has published several articles about DeafBlind language and communication and has been involved in the Seattle DeafBlind community in a range of capacities for more than 20 years.
She didn’t know what she wanted because she didn’t know what the possibilities were; it was impossible to initiate any kind of communication, because as she said: “Everything we touched froze!” It was also hard to gain access to anything but the hands of other people. Even then, access was restricted to contexts where linguistic signs were being produced. Hands that were busy gripping an umbrella or turning a page, for example, were entirely inaccessible. Over time, this lack of access to embodied activity led to a deterioration of sighted intuitions about the meanings of gestures and other bodily movements. Things like a shrug, raised eyebrows, hunched shoulders, or pocketed hands, became cryptic, surprising, and even disturbing when they were encountered. Connections between visible bodily cues and shared activity frames were breaking down.
Prior to the protactile movement, these problems had been addressed by ramping up the compensatory mechanisms that were in play and trying to help DeafBlind people seem as sighted as possible. But the leaders of the protactile movement went a different way. They opted instead to establish conventions for reciprocal, tactile communication between DeafBlind people, which would require no compensation at all. Toward this aim, a series of 20 protactile workshops were organized by two DeafBlind instructors for 11 DeafBlind participants in the fall of 2010. No interpreters were provided and no sighted people were included(1). In the workshops, DeafBlind people stopped trying to reconstruct visual scenes or compensate for vision loss(2). Instead, they returned to basic interactional frames and sought out new, tactile ways of realizing them. The process usually started with two or more DeafBlind people remembering types of activity they used to enjoy. They would say: Remember when we used to be able to watch other people do things? Or remember when we used to be able to listen to someone else’s conversation, and then decide that we didn’t want to join it? Interactional frames like this were recoverable, but the embodied knowledge required to enact and recognize them had been lost. Therefore, when people referred to people, things, or events in the immediate environment, reference was difficult or impossible to resolve.
This article focuses on the role of deictic reference in re-routing modes of access required to converge on common referents. I argue that as DeafBlind participants worked their way toward common pathways in their environment, language and gesture were aligned in new ways, giving rise to novel linguistic forms. At the center of this transformation is a process I call deictic integration. Deictic integration restricts the range of contextual values that the grammar can retrieve by coordinating the deictic system with patterns in activity. I argue that this process constrains how language and gesture are coordinated as new signs are created.
I start by discussing the problem of sign-creation as encountered in the sign language linguistics literature. In the following section, I introduce three key constructs for understanding processes of sign-creation in the Seattle DeafBlind community: the deictic system, the deictic field, and deictic integration. I then show how DeafBlind individuals have worked to re-structure their embodied orientation to the environment, and how orientation schemes are converging among DeafBlind people in interaction. In the section to follow, I show how pointing signs in Protactile American Sign Language are registering this shift in collective patterns of orientation. From there, I show how the same processes triggering a divergence in how deictic reference is accomplished in Protactile and Visual ASL are also affecting the organization of Protactile ASL at the sublexical level. I conclude by highlighting implications of these findings for our understanding of sign-creation.
Sign-creation
Since the inception of the field of sign language linguistics, scholars have been at pains to distinguish language from “gesture” and “pantomime”. One way this has been done has been to posit hypothetical models of sign-creation that begin with an act of reference (e.g., Boyes-Braem, 1981, p. 42; Brennan, 1990, pp. 11–36; Mandel, 1981, pp. 204–211; Taub, 2001, pp. 43–60). In these scenarios, one aspect of the object is “selected” to metonymically represent the whole. This iconic, gestural form is then analyzed to the phonological parameters of a particular signed language. The resulting form, despite its iconicity, is thereby incorporated into the arbitrary linguistic system and is further subject to morphological, syntactic, and semantic constraints.
Hypothetical models like this, particularly in the early stages of research on signed languages, were used to defend the linguistic status of signed languages against claims of inferiority. The authors who established models of selection did not claim descriptive adequacy for any diachronic or interactional process. However, in recent work, similar models have been drawn on to explain actual instances of sign-creation in emergent signed languages (e.g., Aronoff et al., 2008; Sandler et al., 2011, p. 519).
The present study contributes to this work by showing how DeafBlind people draw on novel gestural resources, emerging from patterns of activity organized along tactile, rather than visual, lines. Taking a practice approach (Hanks, 2005a, 2005b; Edwards, 2014), I argue that a range of potentially iconic relations are ruled out, on social and interactional grounds, prior to the imposition of grammatical constraints. More specifically, I argue that successful acts of reference require the coordination of gestural and linguistic elements, both of which are constrained by shared modes of access to the environment. Prior to the establishment of shared modes of access, the selection of one aspect of the object to represent the whole will not feel obvious or transparent across a group of language-users.
The data for this study were collected during a 10 week series of protactile workshops, which were led by two DeafBlind instructors for 11 DeafBlind participants in Seattle in the fall and winter of 2010 and 2011. The overarching aim of these workshops was to create an environment where DeafBlind people could find ways of communicating with one another directly, without the use of sighted interpreters. In order to do this, many participants chose to wear blindfolds in order to prevent whatever vision they had left from interfering with this aim, and they participated in activities designed to cultivate tactile sensibilities and modes of communication. One of these activities was called “the object game.” The instructors brought a bag of objects to the workshop, which included things like a decorated metal tea strainer, a toy snake which moved in a distinctive way when picked up, a car-charger for a smart phone, and so on. Then they arranged participants in pairs. One person chose an object from the bag and explored it tactually. After putting the object back in the bag, they described it in detail to the second participant. The addressee was then given the object and asked to evaluate the description in terms of how well it prepared them for the qualities of the object. The pair went on to negotiate the description until they settled on one they both felt represented the object adequately. This paper focuses on forms that were ratified by this process, most of which were used to depict the handling of objects.
According to Dudis (2004), depiction in American Sign Language requires the signer to map aspects of conceptualized scenarios onto the body of the signer and the surrounding space, using symbolic and iconic resources that can be both linguistic and gestural. Depiction is one way that signers can generate novel expressions. Depictive qualities of novel expressions can be foregrounded and backgrounded in subsequent usage events, and this possibility is a pervasive resource for language-users. This paper contributes to our understanding of depiction and sign-creation by examining the indexical relations that are drawn on in aligning modes of access among participants, thereby making common conceptualizations of embodied experience available for interactants to depict. In order to understand this process, I focus on the interaction of the deictic system and the deictic field.
The deictic system and the deictic field
Together, the deictic system and its corresponding deictic field structure how people refer to objects and events in the immediate environment (Bühler, 2011 [1934]; Hanks, 1990). Bühler compares the deictic system to a signpost, positioned on the “pathways” of its corresponding deictic field. Like a signpost, deictic words such as here and there are combined with pointing gestures to create a perceptually salient sign that directs its recipient. For example, when a human “opens his mouth and begins to speak deictically, he says … there! is where the station must be, and assumes temporarily the posture of a signpost” (Bühler, 2011 [1934], p. 145). Despite its minimal semantic content, the meaning of the deictic expression is not difficult to sort out because speakers “can do nothing other than take advantage – naturally to a greater or lesser extent – of the possibilities the deictic field offers them; moreover, they can do nothing that one who knows the deictic field could not predict, or, when it turns up, classify” (Bühler, 2011 [1934], p. 145). In other words, a deictic sign is a signal to choose one path over another; it does not launch a trajectory into unstructured space.
Within a field of limited choices, the deictic sign, like the signpost, does two things: it names and it points. Its capacity to name derives from oppositions in the language (here is not there), which contribute to the definiteness of reference. The capacity of the deictic sign to point derives from patterns in the field where it is inserted. These pathways contribute to the directivity of reference. Therefore, when a deictic sign is applied in the speech situation, it must retrieve values from two distinct sources: the linguistic system and the deictic field. All deictic signs are composite in this respect, composed of both “symbols” and “signals” (Bühler, 2011 [1934], p. 99). Speaking deictically requires the coordination of values from each field in the unfolding of the utterance.
In spoken languages, the deictic system is composed of discrete, oppositional categories, which encode highly schematic semantic distinctions. There is growing evidence that pointing signs in signed languages work like this, too. They can act as determiners, demonstrative pronouns, anaphoric deictic elements, personal pronouns; they can be lexicalized as temporal deictics like yesterday and tomorrow, and these different functions correspond to differences in form (Pfau, 2003, pp. 148–151).
Those differences, which derive from the linguistic system, help signers single out a thing among other things. However, when a deictic sign is applied in the speech situation, definite meanings must be coordinated with elements that direct the addressee’s attention along pathways in the deictic field, which are shaped by memory, perception, the physical capacities of the individual, routine routes through familiar spaces, the intuitions one develops for how a city, a village, a store, or a parking lot might be organized, etc. These pathways extend out around the language-user like an orienting grid. Each person’s experience is, in some measure, unique, and their orienting grids reflect those differences. Therefore, a “reciprocity of perspectives” must be established before reference can be reliably resolved. Schutz explains that where there is reciprocity:
I take it for granted – and assume my fellow man does the same – that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa).
(Schutz, 1970, p. 183)
From Bühler’s perspective, this kind of reciprocity is assumed, since patterns in the deictic field are learned, and therefore, to some degree, shared. However, observers of interaction have noted that a great deal of work is invested to establish reciprocity, moment-to-moment, in the unfolding of communicative activity; it cannot be assumed a priori (e.g., Goodwin & Heritage, 1990; Levinson, 2006; Goffman, 1964).
In order to account for the structures that are present prior to activity, and those that are worked out in the course of an interaction, Hanks synthesizes Goffman’s “situation” and Bühler’s deictic field (Hanks, 2005b, p. 192). This yields a construct that can account for: (1) “the positions of communicative agents relative to the participant frameworks they occupy;” (2) “the position occupied by the object of reference;” and (3) “the multiple dimensions whereby agents have access to objects” (Hanks, 2005b, p. 193). These dimensions often include perceptual access, but they can also include shared knowledge, memory, imagination, or any other relation that allows signer and addressee to single out an object of reference against a horizon of potential referents. Therefore, while each individual comes to an interaction with orienting schemes of their own, the activity of referring requires those schemes to be coordinated in repeatable and expectable ways over time, as modes of access are embedded in routine and reciprocal patterns of activity. One of the ways in which embedding occurs is through the movement from “participant frameworks” to “participant frames.”
Participant frameworks are the emergent configurations that communicative agents occupy in the unfolding of an interaction, while participant frames are the repository of regularities that emerge in participant frameworks across encounters (Hanks, 1990, pp. 137–187). Interactants schematize participant frameworks in the course of communicating, and these schemes generate a subset of maximally expectable configurations within which signs are produced and received (Hanks, 1990, p. 148). These regularities, which inhere in the deictic field, are in dynamic tension with conventional categories in the language.
For example, person categories in the deictic system of a language are linked to participant roles, so the use of pronouns “tends to sustain an inventory of participant frames by focalizing them, engaging them as ground for further reference, or both” (Hanks, 1990, p. 148). In order to account for the dynamic tension between deictic categories in the language and their corresponding roles and structures in the deictic field, I would like to discuss the term deictic integration in some depth.
Deictic integration
Deictic integration accounts for coordination of linguistic and deictic elements into tighter and more restricted configurations over time such that (a) when a deictic sign is instantiated, the range of retrievable values in the deictic field is restricted to a small and alternating set; and (b) deictic signs are organized by contrastive opposition and subject to grammatical constraints. For example, the pronominal system of Visual American Sign Language (VASL) makes a two-way distinction between first and non-first person (Meier, 1990, p. 377). The first person pronoun is produced with a pointing sign directed toward the signer and the non-first person pronoun is produced with a pointing sign directed away from the signer. This distinction aligns with a basic bodily participant framework occupied by signer and addressee.
Hanks argues that basic participant frameworks should correspond to the way participants perceive interaction, and therefore should be relatively simple, since participants do not generally struggle as they move between participant roles. Some clues as to how participant frameworks are perceived can be found in conventional and commonly used labels, as well as the frames that are treated by participants as usual or expectable (Hanks, 1990, p. 152). Citation forms in VASL are consistently recorded by a camera placed at eye-level a few feet away from the signer. This is evidence that VASL signers treat that bodily configuration as basic. The pronominal system of VASL registers that fact by mapping a single distinction onto that configuration. In other words, the pronominal system is aligned with, and retrieves values from, the basic participant frame inhabited by VASL users and the result is that the language-user is forced to choose one of two highly restricted values: first or non-first person.
In contrast to pronouns in VASL, non-linguistic pointing gestures are responsive to a wide range of contextual dynamics. This is the difference between a pointing gesture and a pronoun: the former can retrieve a wide range of values from the deictic field, while the latter is set to retrieve one of a restricted set. If the pointing gesture is momentarily altered as it is brought into relation with some dimension of context, linguistic and deictic elements are merely coordinated. If there is a restricted set of values (e.g., person values), and one of those values must be selected in order to produce a grammatical utterance, linguistic and deictic elements are integrated. The process whereby the deictic system and the deictic field are coordinated into tighter and more restricted configurations is what I call deictic integration (Edwards, 2014, pp. 27–61, 159–190).
Recent work on nascent signed languages suggests that deictic integration plays an important role in language-emergence. In Nicaragua, for example, the emergence of a new signed language has been associated with the creation of spatially modulated verbs (Senghas, 2000; Senghas & Coppola, 2001; Kegl et al., 1999). Early on in the development of Nicaraguan Sign Language, verbs like “speaking-to (a person)” were expressed in the following way: the signer would point to a person in the immediate environment, produce the verb, and then sweep the finger from one person to another to indicate who was speaking to whom. Later on, signers moved the verb from one location to another, incorporating the sweeping pointing gesture into a single, verbal sign (Kegl et al., 1999; Senghas, 2000). This is like syntactic agreement in the sense that relations are being established between a verb and entities that can be represented by nominal signs. However, the referents are not represented by nominal signs. Instead, they are linked directly to the verb via a deictic gesture.
These elements, which behave in part as one would expect deictic gestures to behave, and in part, as one would expect grammatically marked nouns to behave, suggest that a process of deictic integration is underway in Nicaraguan Sign Language. Recall that deictic integration draws linguistic and deictic elements into tighter and more restricted configurations as the language develops (Edwards, 2014, pp. 27–61, 159–190). On the one hand, deictic relations are increasingly caught up in and organized by grammatical relations, but the reverse is also true: the language becomes increasingly dependent on the deictic field to express grammatical relations. We know that as signed languages emerge, they tend to develop a class of verbs known as “directional verbs”, which rely on this kind of mutual dependency (Meier & Lillo-Martin, 2012). Directional verbs integrate anaphoric deictic elements and linguistic elements to mark grammatical relations (Rathmann & Mathur, 2002). Here, a highly restricted set of alternating values (i.e., person and number values) are retrieved from the anaphoric deictic field. However, once they are retrieved, they act like arguments of the verb, as opposed to referents. This ambiguity between arguments and referents is a result of deictic integration, and is associated with the transition from gestural communication system to signed language (Edwards, 2014, pp. 27–61).
Deictic integration has also been important in the emergence of Al-Sayyid Bedouin Sign Language (ABSL). For example, early on in its development, ABSL developed a productive morphological process whereby one deictic sign and one characterizing sign are compounded to produce place names (Aronoff et al., 2008). As these connections have become increasingly integrated, the order of the compounded elements has become fixed; the deictic component is word-final (Aronoff et al., 2008, p. 146). In other words, deictic elements have become increasingly caught up in, and coordinated by, grammatical relations, and the inverse is also true: as the grammar became more tightly coordinated with elements and relations in the deictic field, semiosis became more language-like. Deictic integration is a resource for sign-creation because it aligns the language with modes of access and orientation that are reciprocal among users of that language. This is only possible because, as Karl Bühler points out, linguistic elements are not related to the fields in which they are inserted as matter is related to form (2011 [1934], p. 17). Instead, linguistic and non-linguistic fields form a gestalt, foregrounding and backgrounding inserted elements. Objects are represented indirectly via the juxtaposition of interlocking fields, each one introducing some arbitrariness of its own. As you go further out from the core mediating systems of a language, you arrive at the world, where you find what Bühler calls “differences in world view” (2011 [1934], p. 171) or what Schutz calls “differences in perspective”. At the outer perimeter of the language is the deictic system, reaching on one side toward the grammar, and on the other, toward the deictic field. Through patterns of retrieval and integration, the language is aligned with the world as it is perceived by the users of that language, and those processes echo in arbitrary ways as they move from the perimeter to the core of the grammar.
In what follows, I show how deictic integration is affecting processes of sign-creation among DeafBlind people in Seattle. I begin with the reconfiguration of sensory orientation schemes; I describe some of the practices that contributed to this process and I show how it has triggered a reconfiguration of the deictic field. Next, I show how patterns in deictic retrieval are bringing the linguistic system in line with the deictic field. I argue that iconic relations become reciprocally available across a group of language-users as a result of this process.
Reconfiguration of orientation schemes among DeafBlind people
In 2010 and 2011, as part of a 12-month period of sustained anthropological fieldwork, I accompanied two DeafBlind people during Orientation and Mobility (O&M) training sessions, where they learned to use a cane and other mobility equipment to navigate public, urban spaces. I took detailed notes and drew sketches of the spaces we moved through. As the DeafBlind person applied what they were taught, the O&M instructor, Marcus, narrated. On one of our first outings, Marcus explained to me that the first task is to develop “tactile awareness” around materials – brick, concrete, gravel – the differences between them and patterns in sequencing. The second task is to apply this awareness to travel, so when the cane encounters a texture, it is incorporated seamlessly into the rhythm of the forward-moving traveler.
For example, a DeafBlind man whom I call Allan was learning the route from his home to a particular bus stop. Blocks are easy to count thanks to the textures where the sidewalk meets the road on the corner. However, bus stops are not on corners; they are always some ambiguous distance away. To cope with this fact, Allan had to learn to attend in a different way to patterns in how the city is organized. He started with facts he already knew, for example: in cities there are many doorways. From there, he noticed things he didn’t already know, for example, that the material on the ground in the entryway sometimes has a different texture than the main sidewalk, which can sometimes be detected by the cane. He learned that sometimes, entryways are set back from the rest of the wall, and form a negative space that is detectable with the cane, or with the “mini-guide,” a small, handheld device that bounces sonar off of surfaces, returning different intensities of vibration, depending on how close the object or surface is. When Allan shifted his attention to these aspects of his environment, he could easily grasp that the bus stop was two doorways from the corner, along the “shoreline” (or an orienting line – in this case, the line where the fronts of the businesses on that block come in contact with the sidewalk).
Orientation and Mobility training is one way to re-organize trajectories, pathways, and grids, around new modes of access. This training requires the traveler to cultivate modes of receptivity and responsiveness to the material qualities of things. Material fragments are concretely incorporated into a trajectory and a rhythm. A doorway becomes a tactile silence in the rhythm, which is preceded by a hard tap against the brick-sided building and is followed by the same. The sequence of material cues is incorporated into the pathway between the street corner and the bus stop along with other material clues, all of which guide the traveler. The patterns in the pathway accrue to the traveler as embodied schemes for orientation. These schemes, insofar as they are reciprocal across a group of language-users, contribute to the indexical ground of reference, affecting the language-user’s ideas about what will be relevant, accessible, and detectable in their environment(3). When DeafBlind people started communicating directly with one another, they struggled to resolve reference. As a result, they had to adjust their orientation schemes in ways that highlighted tactually accessible figure/ground relations. For example, if you have lived your life as a sighted person, then the path in Figure 1 will feel intuitive to you. This is because you have traveled to many doorways, and you have orientation schemes that have been built up around those experiences so that lines like this extend out around you, suggesting possible routes of travel, and possible relations between materials, according to which objects can be identified.
If I were to ask you where the door is, you might extend your arm out along one of these lines, and although my vision might be a little worse or better than yours, your vestibular system might be a little off, and so on, there is enough reciprocity in our perspectives that the grids we’re embedded in converge, allowing us to resolve reference with no extraordinary effort.
Figure 1. Visual orientation scheme
If you are a person who has lost your vision, then you are a visual person who can’t see. You do not automatically acquire new orientation schemes. However, if you begin to attend to your environment in new ways, as Allan did, the path to the door in Figure 1 will no longer feel intuitive. Instead, an alternate route would be far more likely. First, some kind of shoreline would have to be identified, as in Figure 2. From there, the door can be identified as negative space as the traveler moves from their location against the wall to the door. Over time, intuitions grow stronger about how and where potential lines of travel intersect, and where spaces, protrusions, and patterns in the sequencing of materials will emerge. The DeafBlind traveler is subsumed by these patterns and an orienting grid of overlapping coordinate systems extends out around them. Sensory systems converge on, and are elaborated by, this grid, but they are not by any means identical with it. In order for reference to objects like the door in Figure 2 to be resolved, orienting grids must become reciprocal across the group of language-users, and those grids must fit seamlessly with sensory capacities that are common to all(4).
Figure 2. Tactile orientation scheme
Prior to the protactile workshops, DeafBlind individuals were at various stages in this process of developing new tactile orientation schemes. The schemes therefore were not reciprocal and there was no interactional process putting pressure on that fact, since communication was mediated by sighted interpreters. This state of affairs caused the deictic system to disarticulate from the deictic field, so when pointing signs were instantiated in the speech situation, they had no directivity. DeafBlind people would say to their interpreters: “When you point like that, it’s all air to me,” but most interpreters didn’t know how to do anything else. When DeafBlind people began communicating directly with one another, they addressed these problems in ways that sighted people would never have imagined.
Convergence of orientation schemes in interaction
Prior to the protactile movement, DeafBlind people received instructions from interpreters to respond, or to wait, or to comment, or to turn their body toward the addressee. In protactile frameworks, DeafBlind people responded to one another directly, in configurations like the one in Figure 3.
Figure 3. Protactile configuration
When DeafBlind people started communicating directly with one another, communicative signals were exchanged and patterns were established. Slow tapping on the knee became a signal of agreement; contact without movement became a sign of attention; fingertips moving quickly up and down became a way of expressing amusement, and so on. The signer could tell that the addressee was distracted if their responses were not timed correctly, and if the addressee’s body heated up, an inquiry might follow – Is this stressful? Am I being unclear? The exchange of signals like these opened up the possibility of carefully timed, mutual adjustment. As a result, DeafBlind people began to converge on reciprocal orientation schemes, and their language registered this shift.
Pointing in VASL and PTASL
Pointing signs were the first, but not the only, signs to register changes in the configuration of the deictic field. Prior to the protactile movement, pointing signs were produced as would be expected in Visual American Sign Language, and received via tactile reception (Figure 4). In protactile configurations, DeafBlind people realized that pointing signs like this are not effective for resolving reference because (a) they project a vector against a visually perceptible ground, and (b), they articulate to a field organized around visual modes of access.
Figure 4. VASL pointing sign
Figure 5. PTASL pointing sign
In order to address these problems, DeafBlind signers systematically altered pointing signs along two dimensions. First, they transposed the pointing handshape to the body of the addressee. I call this process signal transposition (Edwards, 2014, pp. 173–176). Second, they altered the directional lines of the sign, so they articulate to a field organized by tactile geometries like the one in Figure 5, rather than visual geometries, like the one in Figure 4. I call this process sign calibration (Edwards, 2014, pp. 173–179). Notice that the lines traced on the palm of the addressee map onto pathways in the environment. In other words, the sign is motivated by an iconic relation. However, that relation only becomes available to participants once the deictic system and the deictic field are brought into alignment with one another and reciprocal modes of access have been established.
Sign-creation in the Seattle DeafBlind community
I have argued that in order to align the deictic system and the deictic field, DeafBlind signers systematically altered pointing signs in two ways. First, they transposed the handshape of the pointing sign onto the body of the addressee. Second, the directional lines of the sign were calibrated to a grid of intelligibility organized around tactile modes of access. Neither of these processes is linguistic; however, they were part of a broader divergence at the sublexical level in the structure of Visual and Protactile ASL(5), which affects the way that novel signs are created. In addition, aspects of objects are beginning to be foregrounded, or “selected”, in depictive expressions, which would not be expected given visual modes of access and orientation.
Below, I consider several new PTASL signs, which were created in the context of a longer interaction between three DeafBlind people in the protactile workshops. In these examples, Lee is describing a phone charger like the one in Figure 6. In order to describe the cord, she recruits the hand of the addressee. First she manipulates the addressee’s hand into a partially open fist. Then she runs her pinky finger into the center of the fist, tracing a tight spiral pattern on the addressee’s palm, moving outward (Figure 7, represented schematically in Figure 8a and Figure 8b).
Figure 6. The phone charger
Figure 7. Sign representing the cord
Figure 8. Schematic representation of sign depicting cord
I encourage the reader to place their pinky finger inside of their partially-closed fist, and in a spiral motion, move from the center to the outside of the fist. If you have a spiral cord, like the one shown in Figure 6, pull it slowly through your partially closed hand and move your hand over it. If you have done this, you will notice a tactile resemblance between the sign and its referent. However, in order for this resemblance to appear, you must turn your attention to the tactile qualities of the object and the tactile dimensions of the representation, and you must assume that your interlocutor will do the same. From there, you must apply an orientation scheme, which extends beneath linguistic and non-linguistic processes and adheres to the following criterion: each foregrounded element must be presented against an accessible ground. This criterion renders the possible places of articulation given by the grammar of VASL inadequate; neither the body of the signer, nor the space in front of the signer, is accessible as a ground against which signs can be articulated. To resolve this problem, the signal has been transposed onto the body of the addressee. As the signer continues, this pattern of adjustment in figure/ground relations continues to organize the creation of signs.
In Figure 10, Lee describes the button at the top of the charger in Figure 9 by grasping the index, middle, and ring finger of the addressee. She presses on the tip of the middle finger several times as in Figure 11. Again, imagine yourself exploring the object tactually. As you run your fingers over the body of the charger and up toward the tip, you encounter a small piece of metal, which gives way to your touch. The most salient thing about this part of the charger, as you explore it tactually, is the fact that it moves when pressed on, while the rest of the charger remains stationary. The sign representing the button is iconic, but the salience of that element over others, is structured by modes of access, which accrue to the indexical ground of reference.
Figure 9. The button
Figure 10. Sign representing the button
Figure 11. Sketch of the sign representing the button
Another part of the phone charger that is salient from a tactile perspective, but perhaps not from a visual perspective, is the metal springs on either side of the shaft, which hold it in place when it is plugged in (Figure 12). In order to describe this portion of the charger, Lee isolates the index and middle fingers of the addressee and then pushes and releases several times, as in the sketch in Figure 13.
Figure 12. The metal springs on the charger
Figure 13. Sketch of sign representing metal springs on the charger
I encourage the reader to produce this sign on your own hand, or even better, someone else’s hand. You will notice a feeling that is tactually similar to pressing on small, metal springs. Once again, however, the assumption that the addressee will have tactile, rather than visual knowledge of the object follows from reciprocal modes of access.
No user of Visual American Sign Language would spontaneously create these signs. The most obvious reason is that they violate restrictions on where signs can be produced in VASL. For example, Stokoe (1960) observed that the “zero tab” (the space in front of the torso of the signer) is constrained by motor capacity as well as economy. While it is physically possible to produce signs in other regions around the body of the signer, this restricted space allows for the greatest ease of articulation (2005 [1960], p. 25). Within this restricted signing space, there are also arbitrary constraints, which come into view in a cross-linguistic frame. For example, the back of the head and the underarm are never recruited as places of articulation in VASL, but in other signed languages they are (Mandel, 1981, p. 11). The use of a range of locations on the body of the addressee, which are not drawn on by the VASL system, suggests a divergence in underlying constraints on sign production. In Figure 14, the shaded regions represent locations on the addressee’s body where PTASL signs are produced(6).
Figure 14. Attested places of articulation in PTASL
While any location on the body of the addressee would satisfy the figure/ground requirements of a tactile language, the locations that are thus far attested allow for ease of articulation. They do not, however, allow for ease of articulation in visual signed languages – not because of any motoric or cognitive constraints imposed on the signer alone – but because the basic participant frameworks that Deaf, sighted people occupy differ from basic participant frameworks occupied by protactile DeafBlind people.
Recall that the physical relation of one body to another in interaction is organized by participant frames. Access to the object is grounded in the bodily configurations through which participant frames are realized. Therefore, objects are objectified against a background which includes the embodied orientation scheme occupied by speaker and addressee. Principles of economy can only be applied given stable patterns in interaction. There is a shift in perspective that is necessary for grasping this fact. Rather than viewing the body as a producer and receiver of signs, it must be viewed as part of the indexical ground of communicative activity. The body that appears under this perspective interacts with the body that appears under a linguistic perspective, but it is not identical with it, and must be distinguished, analytically (i.e., one would not want to analyze deictic reference as though it were a phonetic phenomenon). In practice, though, there is only one body; therefore, when relations between the body and objects in the immediate environment snap to a new set of coordinates, organized by new modes of access, the linguistic system is affected. DeafBlind signers began to generate new orientation schemes, which extend in practice, across several analytically distinct domains. These orientation schemes structure processes of sign-creation in unpredictable ways, as they are filtered through interlocking linguistic and interactional fields. This process did not begin with the language, but rather, with changes in how touch is evaluated socially.
Activity frames that DeafBlind people had previously realized in sighted ways had to find new expression via kinesthetic, olfactory, tactile, and thermal channels, which at the outset felt inappropriate to DeafBlind participants. Pressing my chest against your back, reaching my arms over your arms, placing my hands on your hands, and attending to the subtle movements of your fingers on clay is like standing a few feet from you, fixing my gaze on your hands, noticing the way you manipulate clay. These practices are the same: we can call them both “watching”, and we will refrain from evaluating the tactile version as inappropriate. Co-presence is also organized by schemes that can be re-routed via tactile and thermal channels: if I put my hand on your thigh or shoulder, you have access to my body’s fluctuating temperature. These signals reflect changes in my mood, exertions of effort, and my physical responses to the environment. Attending to the temperature of my skin is the same as looking at me from a location nearby (nothing inappropriate has transpired). For Deaf, sighted people, these practices are foreign, unfamiliar, or else assigned to different frames of activity. For DeafBlind people, they have become new ways of expressing old interactional patterns. Protactile practices, therefore, are generating coherence for DeafBlind people, while simultaneously driving a divergence between Tactile and Visual American Sign Language.
Conclusion
Models of sign-creation in the signed language linguistics literature posit an iconic, gestural approximation of a conceptual representation, which is analyzed to the phonological parameters of a particular signed language (Boyes-Braem, 1981, p. 42; Brennan, 1990, pp. 11–36; Mandel, 1981, pp. 204–211; Taub, 2001, pp. 43–60; Sandler et al., 2011). The account I have given here contributes to our understanding of this process by examining some of the social and interactional mechanisms that restrict its input. In the Seattle DeafBlind community, changes in the social evaluation of touch led to changes in structures of participation. This led, in turn, to changes in the way deictic reference was resolved. Over time, patterns in deictic retrieval triggered a process of deictic integration, which affected the way that language-users converge on common perspectives, including the figure/ground relations that structure processes of sign-creation.
Footnotes
1 The only sighted people present were part of our video crew. Participants were instructed not to interact with us.
2 Oliver Sacks writes about a man, John Hull, who slowly lost his sight. He describes how Hull became increasingly alienated from visual imagery and memory and eventually, how he lost all connection to the visual world. This became evident when the faces of loved ones could no longer be conjured; deictic words, such as “here” and “there”, lost meaning; and objects were no longer imbued with visual characteristics of any kind (2003, pp. 48–49). Instead of fighting the process, Hull gave himself over to it, entering into a world of rich acoustic experience. For example, the sound of rain, which was once experienced as a background sound, took on a crucial role in delineating entire landscapes. “Rain”, Hull writes, “has a way of bringing out the contours of everything … Instead of an intermittent and thus fragmented world, the steadily falling rain creates continuity of acoustic experience…[it] gives a sense of perspective and the actual relationships of one part of the world to another.” (Sacks, 2003, p. 49). According to Sacks, when Hull let go of visuality, the gaps in his experience were filled in, and a new kind of perceptual coherence was achieved. However, there are also people who respond to vision loss by devoting all of their attention to developing what Sacks calls their “inner eye”, a strategy that involves constructing visual scenes around available sensory input, or reconstructing them from memory. These scenes can reportedly be as vivid and detailed as visual perception, or else more vivid – like a hallucination or a dream. People who can no longer see remember “sky-blue buses”, “egg-yellow trams”, and “beache[s] of crystallized salt shimmering like snow under an evening sun” (Sacks, 2003, p. 52). In the Seattle DeafBlind community, the “inner eye” proved to be unreliable, and those who decided to abandon it joined the protactile movement, choosing to fill in the gaps in their experience in new tactile, olfactory, and kinesthetic ways.
3 While Orientation and Mobility training has been available to DeafBlind people in Seattle for many years, the protactile movement made this kind of training more appealing. The process was also replicated in unofficial ways in other venues.
4 Reciprocity is not a descriptive term. It is a principle interactants orient to. Participants assume that their interlocutors can access particular olfactory, thermal, and tactile aspects of the environment, and that they cannot access visual or auditory dimensions, even though some people have partial vision. Deaf communities work this way too: While auditory capacities vary widely among Deaf people, in signing environments, visual, kinesthetic, and olfactory modes of access are assumed. One does not ask, “How much do you hear? Do you need me to sign?” before signing. Regardless of how much anyone can hear, it is assumed that Visual ASL is the appropriate choice for communication.
5 Sublexical constraints on the formation of signs, including constraints on symmetry, complexity, place of articulation, and “weak-drop”, are either relaxed or outright violated (Edwards, 2014, pp. 192–244).
6 This was the case as of 2011. However, in 2015, during a short field trip to Seattle, the author noted that these regions had grown. Signs were being produced in regions of the addressee’s body that were expected to be excluded on social grounds, including the chest and the belly of the addressee (regardless of gender).
Acknowledgments
Thank you to the Wenner-Gren Foundation for Anthropological Research (Grant #8110) and the Diebold Foundation for Linguistic Anthropological Research for funding this research, as well as the Office for Research Support and International Affairs at Gallaudet University for supporting the writing phase. The argument presented in this paper greatly benefited from conversations with Robert T. Sirvage and members of his protactile design class at Gallaudet, Jelica Nuccio, ajgranda, Paul Dudis, Mara Green, Eve Sweetser, Elisabeth Wehling, and the participants of the Berkeley Gesture Pragmatics conference.
References
Aronoff, Mark, Irit Meir, Carol Padden, & Wendy Sandler (2008). The roots of linguistic organization in a new language. Interaction Studies, 9 (1), 133–153. doi: 10.1075/is.9.1.10aro
Boyes-Braem, Penny Kaye (1981). Features of the handshape in American Sign Language. PhD Dissertation, University of California, Berkeley.
Brennan, Mary (1990). Word formation in BSL. Doctoral Dissertation, University of Stockholm.
Bühler, Karl (2001 [1934]). Theory of language: The representational function of language. Amsterdam & Philadelphia: John Benjamins.
Edwards, Terra (2014). Language emergence in the Seattle DeafBlind community. PhD Dissertation, University of California, Berkeley.
Goffman, Erving (1964). The neglected situation. American Anthropologist, 66, 133–136. doi: 10.1525/aa.1964.66.suppl_3.02a00090
Goodwin, Charles & John Heritage (1990). Conversation analysis. The Annual Review of Anthropology, 19, 283–307. doi: 10.1146/annurev.an.19.100190.001435
Hanks, William F. (1990). Referential practice: Language and lived space among the Maya. Chicago: University of Chicago Press.
Hanks, William F. (2005a). Pierre Bourdieu and the practices of language. Annual Review of Anthropology, 34, 67–83. doi: 10.1146/annurev.anthro.33.070203.143907
Hanks, William F. (2005b). Explorations in the deictic field. Current Anthropology, 46, 191–220. doi: 10.1086/427120
Kegl, Judy, Ann Senghas, & Marie Coppola (1999). Creation through contact: Sign language emergence and sign language change in Nicaragua. In Michel DeGraff (Ed.), Language creation and language change: Creolization, diachrony, and development (pp. 179–237). Cambridge, MA & London: MIT Press.
Levinson, Stephen C. (2006). On the human interaction engine. In Stephen C. Levinson & Nicholas J. Enfield (Eds.), The roots of human sociality: Culture, cognition, and interaction. London: Berg.
Mandel, Mark Alan (1981). Phonotactics and morphophonology in American Sign Language. PhD Dissertation, University of California, Berkeley.
Meier, Richard P. (1990). Person deixis in American Sign Language. In Susan D. Fischer & Patricia Siple (Eds.), Theoretical issues in sign language research, Vol. 1: Linguistics (pp. 175– 190). Chicago: University of Chicago Press.
Meier, Richard P. & Diane Lillo-Martin (2012). Response: The apparent reorganization of gesture in the evolution of verb agreement in signed languages. Theoretical Linguistics, 38, 153–157. doi: 10.1515/tl-2012-0009
Rathmann, Christian & Gaurav Mathur (2002). Is verb agreement the same crossmodally? In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (Eds.), Modality and structure in signed and spoken languages (pp. 370–404). Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511486777.018
Sacks, Oliver (2003). The mind’s eye: What the Blind see. The New Yorker, July 28, 48–59.
Sandler, Wendy, Mark Aronoff, Irit Meir, & Carol Padden (2011). The gradual emergence of phonological form in a new language. Natural Language and Linguistic Theory, 29, 503–543. doi: 10.1007/s11049-011-9128-2
Schutz, Alfred (1970). On phenomenology and social relations. Chicago & London: University of Chicago Press.
Senghas, Ann (2000). The development of early spatial morphology in Nicaraguan Sign Language. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th Annual Boston University Conference on Language Development, Vol. 2 (pp. 696–707). Somerville, MA: Cascadilla Press.
Senghas, Ann & Marie Coppola (2001). Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science, 12, 323–328. doi: 10.1111/1467-9280.00359
Stokoe, William C. (2005 [1960]). Sign language structure: An outline of the visual communication systems of the American Deaf. Journal of Deaf Studies and Deaf Education, 10, 3–37. doi: 10.1093/deafed/eni001
Taub, Sarah F. (2001). Language from the body: Metaphor and iconicity in American Sign Language. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511509629
Biographical notes
Terra Edwards is Assistant Professor of Anthropology at Saint Louis University. Her research examines the social and interactional foundations of a grammatical divergence between Visual American Sign Language and Protactile American Sign Language in the Seattle DeafBlind community. She has published several articles about DeafBlind language and communication and has been involved in the Seattle DeafBlind community in a range of capacities for more than 20 years.