From Compensation to Integration:
Effects of the pro-tactile movement on the sublexical structure of Tactile American Sign Language
Terra Edwards
Journal of Pragmatics
Keywords: Complexity; DeafBlind; Integration; Language emergence; Sublexical; Tactile American Sign Language
1. Introduction
This article examines a recent divergence in the sublexical[i] structure of Visual American Sign Language (VASL) and Tactile American Sign Language (TASL). My central claim is that TASL is a language, not just a relay for VASL. In order to make that case, I show how changes in the structure of interaction, driven by the aims of the “pro-tactile” social movement, contributed to a redistribution of complexity across grammatical sub-systems. I argue that these changes constitute a departure from the structure of VASL and the emergence of a new, tactile language. In doing so, I apprehend language emergence not as a “liberation” from context (Sandler et al. 2005:2664-5), but as a process of “integration”[ii], through which forms and their associated meanings undergo “reshaping,” “conversion,” and “transformation” as they are instantiated[iii] (Hanks 2005a:194, Edwards 2012:61-3).
Recent research on language use among DeafBlind people in the United States[iv] has described differences in production and reception of signs as “accommodations” and “adjustments” (Collins and Petronio 1998; Collins 2004; Petronio and Dively 2006). Collins states that “Tactile ASL is a clear example of a dialect in a signed language” (2004:23), and Petronio and Dively concur, defining it as “a variety of ASL used in the deaf-blind community in the United States[v]” (2006:57). Crucially, this research was conducted prior to the pro-tactile movement, when DeafBlind people communicated primarily via sighted interpreters. Since then, DeafBlind people in Seattle have established conventions for direct and reciprocal tactile communication. I am arguing that these changes, triggered by the pro-tactile movement, are leading to a more radical divergence, and ultimately to the emergence of a new language.
Most members of the Seattle DeafBlind community are born deaf and, due to a genetic condition, lose their vision over the course of many years. While they grow up participating in Deaf social networks, visual environments eventually become untenable, and they are drawn to Seattle where jobs and communication resources are available. Since the 1970s, the community has grown and has developed new communication conventions. However, until recently, these conventions were aimed at helping DeafBlind people maintain access to visual fields of engagement. Greater forms of authority accrued to sighted social roles and legitimacy accrued to visual modalities. Therefore, DeafBlind people attempted to use visual reception long after it had become ineffective.
In 2006, Adrijana[vi] became the first-ever DeafBlind director of the DeafBlind Service Center (DBSC), a non-profit organization that provides communication and advocacy services. She and her staff traced the many problems facing the community to a single cause: DeafBlind people did not have enough direct, tactile contact with their environment and others in it. In an effort to address this problem, a series of 20 pro-tactile workshops was organized for 11 DeafBlind participants in the spring of 2011. No interpreters were provided, and everyone—no matter how much they could still see—was required to communicate tactually. In these workshops, new interactional conventions were established, triggering a grammatical divergence between TASL and VASL. In analyzing this process, I draw on video recordings collected during the workshops as well as one year of dissertation fieldwork and 14 years of involvement with this community in a range of capacities, including interpreting. I begin in Section 2 with some conceptual preliminaries developed in recent work on language and practice theory. Section 3 describes DeafBlind communication prior to the pro-tactile movement, and Section 4 examines pro-tactile communication, including new participant frameworks and their effect on the production and reception of signs. Section 5 concludes by arguing that these changes constitute the emergence of a new language.
My argument relies on the assumption that “a language” can be delimited and compared to other languages. From a strictly linguistic point of view, this is a difficult, but not impossible, claim to make. For example, Comrie argues that Anglo-Saxon and Modern English are now distinct languages due to a history of radical change in morphological typology (from synthetic to analytic, plus reduction in fusion) and word order typology (development of strict SVO order). Along these dimensions, he says, “it is hard to imagine two languages more different from one another than Anglo-Saxon and Modern English” ([1981]1999:203). Likewise, Sandler et al. take the distinct word order of ABSL relative to the surrounding languages as evidence of a clear boundary between systems (2005:2664). Along different typological dimensions, similarities might outweigh differences, and it is not clear how many similarities or differences would be necessary. Therefore, socio-political considerations often become decisive, especially where “standards” and “variants” are in play (e.g. Milroy 2001, Silverstein 1996, Trudgill 2008). As objects of metalinguistic reflection, languages are just one part of broader schemes of valuation and inequality, and claims about language-boundaries are often caught up in those dynamics.
The pro-tactile movement is not driven by metalinguistic reflection or valuation, but rather, by a shared desire for immediacy and co-presence (Edwards forthcoming). DeafBlind people have reflected upon and changed interaction conventions in order to establish tactile modes of co-presence. The emergence of new grammatical subsystems is an unintended consequence of those efforts[vii]. Therefore, social and political dynamics do affect the development of the language, but not via language-planning, shifts in language ideology, or other forms of metalinguistic discourse.
In arguing that TASL is emerging as a distinct language, I am making two claims. First, several grammatical subsystems are currently diverging from VASL in ways that foreshadow typologically divergent patterns (Edwards, forthcoming). Second, the grammar of TASL is being reconfigured as it articulates to new, historically emergent interactional and social fields. I am therefore claiming that a language is a configuration of grammatical subsystems embedded in historically and interactionally constituted fields of activity. In other words, a language is not strictly linguistic. However, it cannot be reduced to ideologies about language or meaning-effects that emerge out of interaction, either. Rather, a language as a whole must be grasped in the relations of embedding that cohere between social, interactional, and linguistic phenomena. This article focuses specifically on the sublexical structure of TASL as it is transformed by the sub-type of embedding I am calling “integration.” To understand this transformation, I appeal to practice theory, adapted for the study of language (Bourdieu 1990, Giddens 1979, Hanks 2005a, 2005b, 2009, Edwards 2012).
2. A Practice Approach to Language Emergence
DeafBlind people in Seattle were once sighted, and as vision deteriorated, they continued to orient to their environment as sighted people do. However, starting in 2007, under the influence of the pro-tactile movement, they began to cultivate tactile sensibilities. In order to account for these shifts in orientation and attention, I draw on Bourdieu’s notion of “habitus”.
2.1 Habitus
The habitus is shaped by socially and historically specific patterns of perception, thought, and action weighed against notions of appropriateness and politeness. It is formed through socialization in childhood and continues to solidify throughout life (Bourdieu 1990[1980]:53). We learn, as children, to recognize immediate and urgent triggers to speak and act in particular ways. This trigger-response loop operates below the level of awareness, making it possible for acquired patterns and schemes, which predispose us to respond to stimuli in particular ways, to reproduce the systems and regularities which created them (ibid.:55). This circularity yields a ground of “reasonable” and “common-sense” ideas (ibid.:58). Children are socialized to accept common sense as such, thereby naturalizing historical effects.
Bourdieu’s notion of habitus is influenced by Panofsky (among others), who identified broad, underlying cultural logics that derive from homologies between philosophical thought and the thought of cultural producers of a given period (Hanks 2005a:70). However, under the influence of Merleau-Ponty, Bourdieu argued that “the body, not the mind, was the ‘site’ of habitus” (ibid.:71). Merleau-Ponty conceives of the body as the site of a particular kind of knowledge or “grasp” that social actors have of being a body—a “corporeal schema”, which is transmitted by the habitus at the level of motoric habituation (Hanks 1996:69). Habitus exists only in dynamic tension with “field”.
2.2 Field
Hanks distinguishes between three kinds of fields: semantic, deictic, and social. A semantic field is “any structured set of terms that jointly subdivide a coherent space of meaning” (Hanks 2005b:192). A single term characterizes aspects of setting, but it also analyzes them according to contrasts with other terms in the same domain (ibid.:200). The deictic field includes: (1) “the positions of communicative agents relative to the participant frameworks they occupy”; (2) “The position occupied by the object of reference”; and (3) “The multiple dimensions whereby agents have access to objects” (ibid.:192-3). Lastly, Bourdieu’s social field is summarized as follows:
(a) A form of social organization with two main aspects: a configuration of social roles, agent positions, and the structures they fit into and (b) the historical processes in which those positions are actually taken up [and] occupied by actors (individual or collective) (Hanks 2005b:72).
Following Bourdieu, Hanks understands discourse production as a way of taking positions in the social field. In position-taking, “habitus and field articulate: social positions give rise to embodied dispositions. To sustain engagement in a field is to be shaped, at least potentially by the positions one occupies” (1996:73). This is why, when we engage power structures, we tend to reproduce them rather than change them, regardless of intent. This process of social reproduction is linked to language-use by means of legitimation and authorization. Legitimation accrues to styles and genres, and constraints on who has access to legitimate styles and genres limit access to power, reinforcing unequal power relations (ibid.:76). Authorization, on the other hand, accrues to the positions social actors occupy. Legitimation and authorization jointly constrain position-taking in the social field[viii].
While habitus and field are crucial for understanding shifts in sensory orientation among DeafBlind people, Bourdieu’s social actor is not quite reflective enough to account for the role that DeafBlind people are playing in the process. Giddens (1979) and Kockelman (2007) (via Peirce (1955/1940 [1893-1910])) provide useful alternatives by breaking the consciousness of the actor into three planes. Giddens’ categories include practical consciousness, discursive consciousness, and the unconscious (1979:2). He recognizes a kind of tacit, embodied knowledge like the kind transmitted by the habitus, but he argues that all social actors also “have some degree of discursive penetration of the social systems to whose constitution they contribute” (ibid.:5).
Kockelman also divides the consciousness of the actor into three by appealing to three different types of “interpretant,” or sign-effect: affective, energetic, and representational (Peirce 1955/1940 [1893-1910]:378). Affective interpretants involve a change in body state, like blushing or sweating; energetic interpretants involve a physical response that requires some effort, but not necessarily intention, such as a flinch or a glance; and representational interpretants have propositional content, for example, an assertion such as “That was loud!” Each type has a double, or “ultimate interpretant,” which accumulates patterns (ibid.:378-9). For example, an ultimate affective interpretant is a “disposition for one’s bodily state to change,” as opposed to an instance of one’s bodily state changing (ibid.:378). Ultimate affective and energetic interpretants are similar to the habitus, since both involve a disposition to respond to sensory stimuli in particular ways. Ultimate interpretants are dissimilar from the habitus in that they have no correlate to one of its core components—the Aristotelian notion of hexis, or the meeting of a desire to act with judgments of that desire against frames of social value (Hanks 1996:69). In addition, while ultimate representational interpretants account for more reflective and discursive modes of consciousness, it is not clear whether ultimate relations can obtain across categories. Can there be an ultimate representational interpretant which accumulates affective and energetic patterns? This would be necessary to account for representational modes of reflection about co-presence and immediacy among DeafBlind people.
Kockelman’s framework diverges from the one developed here in another (and perhaps not unrelated) way: his object is abstract. It is a “correspondence-preserving projection from all interpretants of a sign” (ibid.:378). The object in the present analysis is, instead, an input to processes of embedding, and more specifically, integration. If the object were (primarily) a semiotic projection, the language would not be under such pressure to change; it is a problem of directionality. As will be discussed in the final section of the article, novel modes of perceptual access to the material qualities of objects, apart from thematization or characterization, exert pressure on the grammar via selective integration of linguistic and non-linguistic forms. While projection is certainly involved, effects are also moving in the other direction. The bi-directionality of integration, and embedding more generally, attributes a kind of concreteness to the object that is not found in a semiotically projected world (also see Edwards 2012:39). Excessive abstraction can also mask the importance of sensory capacity and orientation, which intervene in the sign-object relation in consequential ways via the body. In the present framework, the body, like the object, is relatively concrete.
2.3. Three Perspectives on the Body
There are three general perspectives on the body that are necessary for understanding the emergence of TASL: first, as a producer and receiver of signs; second, as an object of description and evaluation; and finally, as part of the indexical ground against which activity unfolds. Constraints on the production and perception of signs are what sign language linguists call “phonetic” constraints. With respect to VASL, Battison observes that from the addressee’s perspective, the body appears symmetrical—two eyes, two arms, two hands, and so on. However, from the signer’s perspective, the body is bilaterally asymmetrical (1978:26). One side is always more dominant than the other. The opposition between visual symmetry and the motoric asymmetry of the signer “creates a dynamic tension of great importance for the formational organization of signs…” (ibid.). A sign can be neither too complex to perceive, nor too complex to produce. That is to say, the physical production of the sign cannot involve motoric tasks that are difficult to execute (e.g. patting your head while rubbing your stomach) or perceptual tasks that are too demanding (e.g. producing movements that are too small to perceive easily). This type of constraint constitutes the first reduction in what is possible in VASL at the sublexical level.
In a practice framework, additional constraints are imposed by non-linguistic factors, which inhere in social and deictic fields. In the former, sign production and reception can be constrained by frames of social value that arise in part through talk about the body. If it is socially unacceptable to touch the addressee’s body, for example, a tactile language will not emerge. In order to negotiate norms like this, the body must be treated as “an object of evaluation through reference, description, and categorization” (Hanks 1996:248). In the deictic field, the body is part of the indexical ground against which activity unfolds (Hanks 1996:254-7, 2005b). If a DeafBlind person tells another DeafBlind person, “Here it is,” resolution of reference will require shared access to the environment (e.g. a grasp of where they are in space, reciprocal sensory access to the object, and any other relation that is relevant to both speaker and addressee). Here, the body is neither objectified, nor is its primary role to produce and receive signs. Rather, it is part of the background against which communicative activity becomes legible. While phonetic constraints inhere in the linguistic system, social and deictic constraints do not[ix]. However, they act indirectly on the language via “embedding.”
2.4. Embedding
Embedding describes a process whereby schematic form-meaning correspondences undergo “reshaping,” “conversion,” and “transformation” as values are retrieved from deictic and social fields (Hanks 2005a:194). Patterns of retrieval align the linguistic system with its contexts of use so that, as Bühler says, language is not “taken by surprise” when it encounters the world (Bühler 2001 [1934]:197). Rather, the linguistic system acts like a network of receptors, which have been shaped by these patterns and are therefore set to receive certain field-values and not others. At the same time, retrieval tends to echo across grammatical subsystems in arbitrary ways as the language develops.
Four mechanisms of embedding have been proposed: practical equivalences, counterparts, rules of thumb (Hanks 2005b) and integration (Edwards 2012). Practical equivalences are correspondences between “modes of access that interactants have to objects” (Hanks 2005b:202). For example, in Yucatec Maya, there are two enclitics, a’ and o’, which when combined with one of four bases, produce a proximal/distal distinction (ibid.:198-9). However, in practice, the o’ form can be used to refer to denotata that are “off-scene” (ibid.:201). In order to use the “distal” deictic this way, a “practical equivalence” must be established between “off-scene” and “distal”.
Counterparts establish relations of identity between objects (Hanks 2005b:202). For example, the proximal deictic can be used by a shaman to refer to a child who is off-scene if there is a visual trace of that child in his divining crystal. This is possible because the visual trace of the child is construed as the counterpart of the actual child (ibid.:201). The shaman is authorized to establish this relation by virtue of his social position, just as the radiologist’s position authorizes him to interpret x-rays (ibid.). Therefore, counterparts establish relations between: (1) schematic form-meaning correspondences (e.g. a’/o’=proximal/distal); (2) the deictic field, where access to the referent is established, and (3) the social field, where authorized speakers establish relations between (1) and (2) by using legitimate styles and genres of language use.
Rules of thumb guide speakers in responding to commonly occurring, or “stereotypical” situations (Hanks 2005b:206). For example, in Yucatec Maya, a stereotypical greeting includes a question-response sequence like the following (ibid.:206[x]):
Speaker A: “Where ‘ya goin’?”
Speaker B: “Just over here.”
This exchange “tells A nothing about where B is going or how far away it is, only that he is heading there” (ibid.). Therefore, the proximal form, translated as “here”, is not associated with proximity at all, but rather with a routine situation. Each of these principles of embedding involves the instantiation and subsequent re-shaping of a form-meaning correspondence.
Embedding may, at first, appear indistinguishable from neighboring concepts such as “contextualization” (Gumperz 1992) and “keying” (Goffman 1974:40-82). Contextualization is an inferential process (i.e. Sperber and Wilson 1986, Levinson 1983), which involves “hypothesis-like tentative assessments of communicative intent” (Gumperz 1992:230). Similarly, keying involves a change in frame through which an activity is understood, for example, when playful, “bitinglike behavior” turns to biting (Goffman 1974:41-4). Both concepts work well for analyzing changes in meaning that correspond to changes in interactional context, signaled by things like facial expressions, bodily cues, prosody, code choice, etc. While embedding accounts for changes like this, it also requires a third analytic step that links interactional phenomena to broader and more lasting transformations such as those associated with colonization, missionization, and large-scale religious conversion (e.g. Hanks 2010). These processes operate on historical and institutional scales. Practice theory distinguishes between interactional and social scales in order to relate them in principled ways. In Giddens, for example, historical and interactional scales are linked via the “layering” of social structures (1979:65), which is similar to the notion of social embedding developed here. However, Giddens is concerned with social and interactional structures, while embedding draws attention to relations between social, interactional, and, crucially, linguistic structures.
Contextualization and keying both unfold, primarily, in the give and take, or back and forth of face-to-face interaction. Embedding in the social field shifts attention to the socio-political projects people pursue or are caught up in. Under this perspective, actors interact, but in doing so, they also fight for recognition and resources, intervene in discursive loops and demand new framings of their actions, encounter limits in the institutional roles made available to them by prior historical activity, and apply their common-sense reasoning in ways that often reproduce those limits. Embedding accounts for processes like this, which are more peripheral in interaction-based concepts[xi].
2.4.1 Integration as Embedding
Practical equivalences, counterparts, and rules of thumb all involve a shift or substitution in meaning with respect to a stable linguistic form. For example, when a “distal” deictic is used to refer to an off-scene denotatum in Yucatec Maya, the meaning is converted, but the form remains constant. In contrast, “integration” accounts for cases where both form and meaning are converted (Edwards 2012:61-3). In cognitive science, integration implies a partial projection of elements from two domains into a third, which manifests a structure that is not present in either of its inputs (Fauconnier and Turner 1998:133). The term is used here to describe the emergence of new linguistic forms, not present in the input. However, it takes into account a range of inputs that cannot be understood exclusively in terms of cognition, including social, deictic, and linguistic phenomena.
Effects of integration can be transient or perduring. For example, if two sighted users of VASL are communicating across a football field, they will extend the space within which signs are conventionally produced to increase visual salience. As a result, “location” and “movement” parameters of the sign will change. This is an effect of embedding in a deictic field where participants momentarily have reduced visual access to signs[xii]. Insofar as communicating across football fields constitutes a marked interactional context, this change in production is not relevant to our understanding of the structure of VASL. If, on the other hand, limited visual access is a permanent circumstance among a group of language users, and if this circumstance leads to historical shifts in sensory orientation and social organization, then integration will have more lasting effects.
Modes of access like this are also made feasible (or not) by broader processes of authorization and legitimation. For example, if the use of signed languages in public thrusts the signer into a subordinated social position, they are less likely to sign in public (e.g. Nakamura 2006:5). Therefore, while authorization and legitimation constrain position-taking, these processes can also restrict the feasibility of logically possible linguistic forms on social grounds. As new forms of authority accrued to DeafBlind social roles and the tactile modality was legitimized, a wider range of tactile linguistic forms became feasible for the language.
2.4.2. Embedding and Language Emergence
Emergent languages do not have fully formed linguistic systems as input (if they did, they would be the product of language change). However, where full-fledged languages emerge, there is always some kind of pre-existing semiotic input, such as the gestural “home sign systems” developed by deaf children in small kin-networks where no signed language is available. These systems exhibit certain language-like properties, but do not constitute full-fledged languages (e.g. Kegl et al. 2001, Sandler et al. 2005, Goldin-Meadow 2010). In the Seattle DeafBlind community, pre-existing systems include reduced, or simplified, versions of VASL. DeafBlind people using VASL have limited perceptual access to signs[xiii]. As vision is lost, context becomes increasingly important for distinguishing signs from one another, tracking referents, linking deictic signs to language-external objects, etc. However, vision loss also restricts access to context. The convergence of these circumstances leads to a splintering of the language into simplified and idiosyncratic versions (Edwards forthcoming). When simplified idiolects were instantiated in a tactile field, they were transformed, and a new language began to emerge. This process began not with the language, but with the reconfiguration of the habitus[xiv].
3. The DeafBlind Habitus in a Visual Field
Prior to the pro-tactile movement, DeafBlind people approximated visual modes of participation by working with sighted interpreters. For example, in Figure 1, the DeafBlind man on the right is standing on stage giving a presentation to an audience of DeafBlind people. The interpreter next to him relays visual cues, such as a raised hand, from the audience.
Figure 1: DeafBlind presenter (right) with sighted interpreter (left)
The audience is filled with dyads composed of one DeafBlind person and one interpreter. For example, in Figure 2, the man on the left is DeafBlind and the woman on the right is a sighted interpreter. The interpreter copies the presenter’s signs, so they can be received tactually by the DeafBlind person.
Figure 2: DeafBlind audience member (left) with sighted interpreter (right)
Each DeafBlind audience member using tactile reception must have at least one interpreter dedicated to them. Therefore, if there are 10 DeafBlind people present, there will be at least 10 interpreters working at any given time. Heavy mediation like this generates forms of distance between DeafBlind people and their environment that are detrimental to the visual habitus.
3.1 Degradation of the Visual Habitus
In interpreter-mediated participation frameworks, DeafBlind people do not have direct access to one another. Instead, utterances are channeled through several relays before reaching the intended addressee(s). This was the norm prior to the pro-tactile movement. If the original author produces an utterance with a grin or flushed cheeks, the only option for the interpreter is to add emoticon-like signs to the end of utterances (e.g. smile to represent a smile, or h-a for laughter). But these signs are not sensitive to the qualities of a particular smile, or the intensity of a particular gaze. For DeafBlind people who received descriptions like this for years, people started to feel like types of people, interactions started to feel like examples of interaction, and places started to feel like representations of places. This had adverse effects, which were talked about as a lack of feeling, connection, emotion, and depth.
Lee, one of the leaders of the pro-tactile movement, explained that over time, this kind of distance leads to characteristically strange behavior. For example, sighted people living in Seattle are familiar with downtown hotels. They expect to find automatic, sliding glass doors at the entrance. They anticipate the slightly squishy floor mat as they pass through the threshold. If they are holding a paper coffee cup, only a half-glance will be necessary to confirm the existence of a cylindrical silver trash can into which they can dispose of their cup. “It’s always the same!” Lee said. However, DeafBlind people have, until recently, relied on sighted interpreters to navigate public spaces, preventing them from cultivating tactile sensibilities. As a result, Lee says, scenes like the following are likely to unfold[xv]:
A DeafBlind person walks into a [hotel], and runs into the garbage can turning the corner. They look shocked and tell the person they’re with that the placement of the trash can is not safe!
Outbursts like this seem out of place, since to everyone else, the placement of the trash can is utterly expectable and would become expectable for the DeafBlind person if they were using a cane and taking note of such regularities in their daily lives. However, prior to the pro-tactile movement it was common for DeafBlind people to avoid this kind of transition in their sensory orientation, since it would mean engaging less legitimate modes of communication. I encountered this problem often in routine interactions.
For example, one day, I entered a coffee shop with a DeafBlind man[xvi]. I told him there were several people in line ahead of us. He responded by repeatedly adjusting his footing, saying “Sorry. Sorry.” He clenched his fists and cringed, as if bracing for a collision. This sort of thing happened all the time: I would give a DeafBlind person a piece of information, and they would yell, “I’m sorry!” “I didn’t know!” “No one told me!” or “I’m blind!” When I was living in Seattle and working as an interpreter—before I went to graduate school, and before the pro-tactile movement took hold—these events felt quirky to me, but not unusual. As a researcher, and after being influenced by the pro-tactile movement, I began to see them as symptoms of a serious and alarming problem, a sign that a process of social degeneration had begun—something like the degeneration of the habitus.
When the habitus is intact, we respond to immediate triggers to act in socially recognizable ways. However, our ability to do so depends on access to the immediate environment. The jumpiness observed among some DeafBlind people comes from the presence of triggers to act, minus particularities in the environment needed to guide specific action. You may know that addressing a sighted interlocutor requires particular posture or orientation, but after years of limited access to the bodies of others, you forget how to enact them, and your actions fail to snap to a common grid of intelligibility.
Over time, these failures accrue to the individual as the habitus degenerates. A person without a habitus has no common sense. They run into “perfectly ordinary” objects and complain loudly about their placement. “Out of nowhere” they brace for a collision. They respond to “routine” questions by yelling, “I’m blind!” These events thrust DeafBlind people into devalued social positions. They come to be viewed as developmentally delayed or incompetent. They are described as slow learners or as socially isolated and suffering. Younger DeafBlind people are horrified when they see these small dramas unfold, and they wonder if this is their fate. Sighted people rush in to provide more visual information, which triggers further confusion. Over several decades, the DeafBlind person drifts away from any legible position in the social order. They become “eccentric” or “odd”. There are stories about how so-and-so used to be really attractive—“I know it’s hard to believe now,” they say, “but women were lining up after him.” And then there is the mystery surrounding his decline: What happened? Why do you think he’s like that now?
Leaders of the pro-tactile movement saw these problems as rooted not in the failures of the individual, but in naturalized interactional structures. Their hypothesis was that DeafBlind people behave in non-normative ways because they don’t have enough direct, tactile access to their environment. Representations only make sense if they conjure experience, and too much reliance on interpreters had opened up a chasm between the two. In the terms employed here, they saw, via a “reflexive monitoring of conduct” (Giddens 1979:25) that habitus must articulate with field. Rather than attempting to prop up the visual habitus, they changed it and the fields it articulates to.
4. Generating a Tactile Habitus
Habitus and field were transformed in the pro-tactile workshops as DeafBlind people established new communication conventions, built around tactile modes of access and orientation. Early on, these changes were confusing. A bid for a turn was misunderstood as a sexual advance. An attempt at co-presence was misunderstood as a bid for a turn. Fairly quickly, though, possibilities were narrowed as patterns in interaction began to settle and social boundaries around touch were redrawn. Within new limits, a range of possible and expectable behaviors cohered and began to be evaluated against new frames of social value. There were new ways of being inappropriate and politeness quickly became a common sense matter— a new habitus began to emerge. In the following sections, I show how these changes, which are fundamentally social, are leading to changes in the linguistic system, and specifically, to a redistribution of sublexical complexity[xvii].
4.1 The Redistribution of Sub-Lexical Complexity in a Tactile Field
Comparing sublexical organization in spoken and signed languages, Battison (1978) begins with the unrestricted human vocal apparatus. The human body, he says, can make a wide range of sounds of which only a small portion can be recruited for speech (ibid.:20). Sublexical constraints act on this limited range of sound to produce a finite set of units. These units are combined in rule-governed ways to yield the allowable morphemes of a specific language, including their alternations when they occur in utterances (ibid.). By analogic extension, the human body can make a wide range of gestures. Sublexical constraints in signed languages act on some sub-set of physically possible gestures to produce a finite set of units, which when combined in rule-governed ways, produce the allowable morphemes in a signed language (as well as their alternations) (ibid.). Finite units are composed of contrastive handshapes, movements, and locations, which are not, themselves, meaningful, but combine to form signs that are systematically distinguishable from one another (ibid.:21-3).
In the case of both spoken and signed languages there is a series of reductions enacted as increasingly demanding constraints are imposed on the capacities of the body. At the outer phonetic limits, capacity is primary. That is to say—there will be no gestural or sonic material admitted into the language that cannot be produced or perceived. However, in order to identify the limits of capacity, an unmarked, or “basic” interactional context must be established[xviii]. Participants must not be in unusual, or unconventional configurations (e.g. at unusual distances or in unusually low lighting). Therefore, at the most fundamental level, the phonetic structure of a language is dually constrained by the body’s capacity to produce and receive signs, on the one hand, and the habitual, sensory orientations that accrue to conventional participant frameworks on the other. During the pro-tactile workshops, conventional configurations of participant roles changed, yielding two competing, underlying participant frames (“speaker-addressee” and “speaker-addressees”). These frames were most commonly realized via two and three-person frameworks like those below.
4.1.1 Two-Person Configurations
In Figure 3, Adrijana (left) is listening to Collin (right) using her left hand. Adrijana uses her right hand to provide tactile backchanneling cues and maintain co-presence. During the workshops, the passive hand was increasingly recruited for these purposes, putting pressure on the addressee to use one-handed reception. In three-person configurations, additional pressures were exerted, not only on the reception of signs, but also on their production.
Figure 3: Two-person configuration
4.1.2 Three-Person Configurations
In Figure 4 Adrijana is signing no to two interlocutors[xix]. In a three-person configuration like this, all signs are duplicated, so there is one copy for each addressee (Figure 5). Insofar as three-person configurations are a realization of a basic participant frame, this change has implications for the sub-lexical structure of TASL.
Figure 4-5: Duplicated one-handed sign: “no”
In visual signed languages, there are two manual articulators. The interaction of the articulators is constrained at the sublexical level (e.g. van der Hulst 1996, Sandler 1993, Eccarius and Brentari 2007, Morgan and Mayberry 2012, Stokoe 1960, Battison 1978, Channon 2004, Napoli and Wu 2003). New, and importantly, conventional, participant frameworks among DeafBlind people are exerting pressure on the way the manual articulators interact, and therefore on this level of grammatical organization. In particular, the role of the non-dominant hand is changing in three-person configurations. While in VASL, the hands work in tandem to produce two-handed signs, in TASL, each hand must produce an independently meaningful sign: one for each addressee. One-handed signs like no are straightforward; however, duplication of two-handed signs is more complicated, and is leading to more consequential changes in the production of lexical signs.
4.2 Changes in Constraints on Two-Handed Lexical Signs
There are three types of two-handed signs in VASL, which differ in the degree of symmetry that obtains between the two hands. First, there are signs in which the hands are maximally symmetrical such as which[xx] (Figure 6).
Figure 6: VASL sign “which”
Here, the hands perform identical motor acts with either synchronous, or alternating movement (Battison 1978:28-9). Next, there are two-handed signs that are specified for the same handshape, but one hand is active and the other is passive, as in name (Figure 7), where the dominant hand moves and the non-dominant hand remains stationary. These signs are less symmetrical, since the hands do not perform identical motor acts.
Figure 7: The VASL sign “name”
Finally, there are two-handed signs that are maximally asymmetrical, such as discuss (Figure 8).
Figure 8: The VASL sign “discuss”
These signs present the greatest challenge for DeafBlind signers in three-person configurations. In order to see how this problem was addressed, I collected tokens produced by people with different levels of exposure to pro-tactile practices. Group 1 includes 8 DeafBlind people who were in their first 2.5 weeks of the workshops, and therefore had very limited exposure. The second group is composed of 6 signers who had been participating in the workshops for at least 2.5 weeks. The third group includes the instructors—Adrijana and Lee, who had been developing pro-tactile practices for about 4 years at the time of the workshops[xxi].
For maximally asymmetrical signs, three strategies were applied. Early on, signers simply failed to duplicate the sign. For discuss, this resulted in the VASL sign in Figure 8. This strategy (or rather, lack thereof) was used most often by signers with very little exposure to pro-tactile practices (Group 1). In groups 2 and 3, exposure to pro-tactile practices increased and this strategy was used less often. In Figure 9, the Y-axis shows the percentage of tokens that were produced as one would expect in VASL. Moving from left to right on the X-axis, exposure to pro-tactile practices increases.
Figure 9: Percentage of signs produced like VASL signs
Early on, asymmetrical two-handed signs were produced as they would be in VASL more than 40% of the time. In later weeks of the workshops, the frequency of this (non) strategy dropped to just over 20%[xxii]. After several years of exposure, 0% of these signs were produced as would be expected in VASL[xxiii]. Instead, two alternate strategies were applied in three-person configurations. After a few weeks of exposure, signers tended to alternate the active hand sequentially so the dominant hand took the active role first, and then the non-dominant hand. This provided sequential access for both addressees via distinct channels. I call this “sequential alternation”. For discuss, the resulting sign is depicted in Figure 10.
Figure 10: active role assumed first by right hand, then by left.
Figure 11: Percentage of signs alternated sequentially
Early on in the workshops, sequential alternation was used about 25% of the time. In the later weeks of the workshops, this strategy more than doubled in frequency. After several years of exposure, there was a slight decline (Figure 11). The third strategy for duplicating maximally asymmetrical signs involved dropping the non-dominant hand entirely. In the resulting sign, both hands play simultaneous, active roles (Figure 12).
Figure 12: discuss (non-dominant hand drops)
For non-instructors in the workshops, the non-dominant hand was dropped about 20-30% of the time. Among the instructors, who had been developing pro-tactile practices for 4 years, this strategy increased in frequency to about 50%. The other 50% of signs were sequentially alternated (Figure 13).
Figure 13: Percentage of signs with non-dominant hand dropped
For moderately symmetrical two-handed signs (e.g. name in Figure 7), similar patterns were observed (Edwards, forthcoming). In addition, maximally symmetrical two-handed VASL signs (e.g. which in Figure 6), tended to become even more symmetrical. Recall that in this type of sign, the hands perform identical motor acts. However, the movement can be alternating or synchronous. In TASL, there is a preference for synchronous movement instead[xxiv]. The resulting sign for which is depicted in Figure 14.
Figure 14: which with synchronous movement
This shows that in TASL, more demanding constraints on symmetry are emerging in the production of two-handed signs in three-person configurations. One may ask, however, whether this constitutes a change in the sublexical structure of the language, or is, rather, a pragmatic adjustment for three-person configurations. In other words, is the two-person frame more basic than the three-person frame, or vice-versa? In order to address this question, I collected tokens of one-handed signs produced in two-person configurations. In this context, there are two logical possibilities. Signs can be produced as in VASL, or they can be duplicated as in three-person configurations. I found that duplication of one-handed signs in two-person configurations increased from 0% among those with little exposure to pro-tactile practices[xxv] to 18% among those with several years of exposure[xxvi]. This suggests that the motoric patterns shaped by three-person configurations are influencing the production of signs in two-person configurations, even though there is no pragmatic pressure to do so. A continuation of this trend is expectable, since languages do not generally reconfigure the articulatory apparatus according to the number of addressees present. In its current state of development, this is leading to a minimization of sublexical complexity in TASL.
4.3 Minimization of Sublexical Complexity
Constraints on symmetry are becoming more demanding in TASL as a result of communication pressures. While there are several lexical sign-types in VASL, TASL has one: maximally symmetrical two-handed signs. For all lexical signs in TASL, the hands must perform identical or symmetrical motor acts in every respect. As a result, features that mark phonological distinctions in VASL are becoming imperceptible in TASL. For example, without access to the face or the second hand, discuss (in Figure 10 above) is indistinguishable from the VASL sign argue (Figure 15).
Figure 15: VASL sign argue
In the early stages of the workshops, ambiguities like this were common, and not easily resolved. The solution was not to make small adjustments to the visual system, but rather, to alter the motoric patterns that constrain the system. Here we see habitus and sublexical structure converge. A tactile habitus is shaped by the cultivation of tactile sensibilities, which run counter to the normative visual world. Stigmas were confronted, triggers to act were defused and re-set, selves were lost and remade. New pro-tactile people began to apply strategies to communicative problems that involved more radical solutions than ever before. When faced with three-person configurations, they did not stop at surface level modifications. They went so far as to change the bilateral asymmetry of the body, becoming more ambidextrous than their previous, visual selves. This allowed them to produce two-handed signs that were maximally redundant, thereby enabling them to address two people at the same time.
In Battison’s terms, this constitutes a minimization of formational “complexity”, which Morgan and Mayberry succinctly capture: “A two-handed sign that shares all phonological aspects is the most redundant and therefore least complex […]. Increasing mismatches (departures from symmetry between the two hands) in each of these aspects create more complexity…” (2012:148). Insofar as complexity in lexical signs decreases further and ambiguity increases as a result, distinctive features can be expected to be redistributed as the system develops. Indeed, this is already occurring.
4.4. Increased Complexity in Classifier Constructions
In signed languages, lexical signs can be contrasted with “classifier constructions”, which are named for their similarity to classifiers in spoken languages. A verbal classifier in a spoken language, affixed to the verb, classifies a nominal argument according to semantic criteria, as in the following example from Cayuga (Grinevald 2000:67):
ohon'atatke: ak-hon'at-a:k
it-potato-rotten past/I-CL(potato)-eat
'I ate a rotten potato'
In visual signed languages, there are constructions that work like this. For example, in VASL, the B-handshape (Figure 16) can be incorporated into the verb complex to classify one of its nominal arguments as rectangular and flat:
paper flat-rectangular-thing-lay-on table.
“The piece of paper is lying on the table”
Figure 16: VASL classifier for flat, rectangular objects
However, there are also differences between classifier constructions in the two modalities. Departing from canonical understandings of classifiers in spoken languages, Edwards (2012) analyzes classifiers in VASL as forms that integrate linguistic and deictic elements. Linguistic elements are expressed by conventional handshapes that derive from the linguistic system (such as the B-handshape in Figure 16), and deictic elements are expressed by pairing those handshapes with gestures that respond to patterns in the deictic field (ibid.:43-49). While the B-handshape is a conventional form associated with the meaning “flat-rectangular-thing”, its placement (or location) and the movement necessary to convey that placement are not specified phonologically. Rather, they are guided by modes of access that speaker and addressee have to the referent (e.g. perceptual access, memory, shared knowledge, etc.). Therefore, while the handshape, location, and movement parameters of lexical signs can be accounted for with strictly linguistic analytics, this is only partially true for classifier constructions.
Despite these semiotic differences, it has been shown that visual sign language classifiers adhere to the same formational constraints that obtain for lexical signs, given the following, very general formulation of those constraints: “Maximize symmetry and restrict complexity in the handshape features of the two hands.” (Eccarius and Brentari 2007:1198). In TASL, the formational complexity of lexical signs has decreased as constraints on symmetry have grown more demanding. However, this is complemented by an increase of complexity in classifier constructions. For example, in Figure 18, Lee is describing the switch on a measuring tape like the one in the circle in Figure 17.
Figure 17: The Switch on the measuring tape
Lee addresses the DeafBlind man on the right, while the woman on the left observes. Using her addressee’s hand as a place of articulation, Lee traces the rectangular shape of the switch (Figure 18).
Figure 18: Lee describes the shape of the switch
The square in Figure 19 (a) represents the path of the signer’s fingers on the addressee’s hand. Figures 19 (b) and (c) represent the handshapes used by the signer to trace the edges of the rectangle.
(a) (b) (c)
Figure 19: path traced on addressee’s hand by signer
Lee then goes on to describe the way the measuring tape is handled. Figure 20 represents the hand of the signer moving up and down on the hand of the addressee. While possible places of articulation in VASL are limited to the body of the signer and a restricted space around it, TASL allows locations on the addressee’s body to be incorporated.
Figure 20: path of signer’s hand (right) on addressee’s hand (left)
What’s more, places of articulation on the body of the addressee are not limited to the hands. In the pro-tactile workshops, it became common to recruit the knees, the front of the thighs, the chest, arms, hands, face, head, and back of the addressee as well. However, articulation was not performed on the groin area, the area below the knees, the inner portion and backs of the thighs, the feet, or the front of the neck of the addressee. This shift constitutes a violation of constraints on location in VASL and suggests the emergence of new constraints in TASL.
In addition, when the hands of the addressee are incorporated into the sign, motoric constraints on bi-manual coordination are distributed over the dyad, resulting in increased articulatory complexity. For example, in Figure 21 Lee is describing the movement of a snake’s body. She grips Manuel’s arm just below the armpit and holds onto his wrist. Then she moves each point of contact alternately to produce a snake-like motion in Manuel’s arm. In Figure 21 (a), she moves Manuel’s arm away from her body, and in Figure 21 (b) she moves it back again.
(a) (b)
Figure 21: Lee (left) manipulates Manuel’s (right) arm into a snake-like motion
This requires motor coordination between signer and addressee. In addition, there are three, rather than two, articulators involved, each one with a distinct motor task. In other words, articulatory complexity is greater in classifier constructions than in lexical signs, and in ways that violate constraints on complexity in VASL. In sum, formational complexity is being redistributed in TASL, and with further conventionalization, new constraints are expected to emerge, which will restrict articulatory complexity in new ways. In combination with changes in constraints on lexical signs, this constitutes a divergence in the structure of TASL and VASL.
5. Conclusion
Prior to the pro-tactile movement, greater forms of authority accrued to sighted social roles and legitimacy accrued to visual modalities. In an attempt to maintain legitimacy, DeafBlind people used VASL long after its communicative efficacy had been compromised. As a result of the pro-tactile movement, forms of authority accrued to pro-tactile social roles and legitimacy accrued to tactile modalities. This transition involved redrawing the boundaries around touch, thereby enabling the emergence of reciprocal, tactile participation frameworks. Within these frameworks, the DeafBlind body schema shifted, and with it, the indexical ground of reference, description, and categorization. One of the many effects of this was to exert pressure on the way signs were produced and received, eventually leading to a redistribution of formational complexity across grammatical sub-systems.
Focusing exclusively on the channel through which signs are produced and received, DeafBlind communication prior to the pro-tactile movement might appear to constitute a “tactile language”. However, when contrasted with the phenomena described here, it becomes clear that tactile reception of VASL is a compensatory strategy. It is a way of receiving signs tactually that were meant to be seen, just as lip-reading is a way of receiving signs visually that were meant to be heard. It was not until the language was thoroughly integrated with the social and physical environment that it became a “tactile language”. This view diverges from those that locate language emergence in a moment when the linguistic system is “liberated” from its contexts of use[xxvii] (e.g. Sandler et al. 2005:2664-5). What we are observing in the case of TASL is not the emergence of a new language as it is cut away from context, but the opposite—a process whereby the linguistic system is integrated into the fields with which it articulates. This distinction between abstraction and integration is at the core of current debates about language emergence.
For example, recent approaches to language emergence that emphasize abstraction tend to focus on the innate capacities of the human mind, as distinct from those of other primates (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985, A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). Innate structures are, by definition, present prior to activity. Therefore, in order to discern their nature and organization, context must be factored out to the greatest degree possible. For example, Sandler et al. report that Al-Sayyid Bedouin Sign Language (ABSL) developed a consistent word order in the space of two generations (2005). They argue that word order functions syntactically to signal relations between a verb and its arguments, and they conclude with the following reflection:
Of greater significance to us than any particular word order is the discovery that, very early in the life history of a language, a conventionalized pattern emerges for relating actions and events to the entities that perform and are affected by them, a pattern rooted in the basic syntactic notions of subject, object, and verb or predicate. Such conventionalization has the effect of liberating the language from its context or from relying on the semantic relations between a verb and its arguments (Sandler et al. 2005:2664-5).
A question was immediately raised in response to these claims: whether word order patterns in ABSL are driven by an emergent syntactic system or by patterns in discourse[xxviii]. This is a fundamental question because if patterns in word order are driven by discourse, their emergence cannot be attributed to the innate capacities of the mind alone.
The underlying issue is not unique to language emergence, and it is not new. It can be found, for example, in the problematic interaction of Saussure’s principles of arbitrariness and linearity (1972 [1915]:66-70) and has resurfaced repeatedly as the field of linguistics has developed (e.g. Chomsky 1965, Fillmore 1968, Searle 1974, Sadock 1985, Jackendoff 1990, 2002, Yuasa and Sadock 2002, McCawley 1976, Jakobson 1971, Haiman 1985). Jakobson, for example, highlighted these problems when he argued that the order in which words are organized is not entirely arbitrary with respect to the phenomena they refer to since “the temporal order of speech events tends to mirror the order of narrated events in time or in rank” (1971:27). In order to address this problem, the semiotician Charles Morris argued that the “syntactical dimension” of language is constituted in the relations of sign vehicles to sign vehicles, and yet syntax also provides a set of rules through which interpreters respond to objects (1971 [1938]:26). Morris locates syntax, then, in the tension between “conventionalism” and “empiricism”, which together account for “the dual control of linguistic structure” (ibid.:12-13). However, in later responses to the problem, this kind of duality became unacceptable. For example, the construct that accounts for “competence” (Chomsky 1985) is above all else, autonomous (Newmeyer 1983:4). And yet, autonomy is always being breached in one way or another (ibid.:27).
In turning to integration, over and against abstraction or “liberation,” the partial autonomy of the linguistic system is not a problem to be denied, but an opening to be explored via historical, ethnographic, and interactional modes of analysis. By looking at the linguistic system not as a perfectly bounded system, nor as an infinitely synthetic effect of Peircian rhetoric, we find ourselves with Wittgenstein, asking:
An indefinite sense—that would really not be a sense at all.—This is like: An indefinite boundary is not really a boundary at all. Here one thinks perhaps: if I say, ‘I have locked the man up fast in the room—there is only one door left open’—then I simply haven’t locked him in at all; his being locked in is a sham. One would be inclined to say here: ‘You haven’t done anything at all’. An enclosure with a hole in it is as good as none—but is that true? (2001 [1958, 1953] §99)
Our answer is this: the holes in the enclosure are like receptors, set to receive values that do not come from the language itself, but rather, from the social and deictic fields where the language has grown up (Bühler 2011 [1934]:99). When values are retrieved, there are effects that echo in the grammar in arbitrary ways. Emergent signed languages allow us to glimpse the mechanisms underlying this process in actual, historical time. In this article, I have proposed that integration, as a relation of embedding, accounts for crucial dimensions of this process. In doing so, I have also sketched a new, anthropological approach to the study of emergent signed languages, which I am calling the practice approach to language emergence.
Acknowledgements
Thank you to the members of the Seattle DeafBlind community who participated in and contributed to this research. The argument has benefited from support and feedback at earlier stages, especially from E. Mara Green, Gaurav Mathur, Peter Graif, Nick Enfield, Tom Porcello, Shaylih Muehlmann, Charles Goodwin, Frank Bechter, Eve Sweetser, Len Talmy, Dan Slobin, Kensy Cooperrider, Kamala Russell, Chiho Sunakawa, Xochtil Marsili Vargas, Diane Brentari, Sachiko Ide, Bill Hanks, Jack Sidnell, Bianca Dahl, Alejandro Paz, Wendy Sandler, James Fox, Miyako Inoue, Hope Morgan, Deniz İlkbaşaran, Carol Padden, Jelica Nuccio, aj granda, Theresa B. Smith, Vince Nuccio, Isaac Waisberg, and Nitzan Waisberg. Thank you, also, to two anonymous reviewers who provided exceedingly helpful comments and to the Wenner-Gren Foundation (Grant # 8110) and the Department of Anthropology at the University of California, Berkeley, for funding this research.
References:
Ahearn, Laura M. (2001). Language and Agency. Annual Review of Anthropology 30: 109-137.
Battison, Robbin (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstock Press.
Bourdieu, Pierre (1990 [1980]). The Logic of Practice. Stanford: Stanford University Press.
Bühler, Karl (2011 [1934]). The Deictic Field of Language and Deictic Words. Theory of Language: the representational function of language. Amsterdam/Philadelphia: John Benjamins. 93-163.
Channon, Rachel (2004). The Symmetry and Dominance Conditions Reconsidered. Chicago Linguistic Society. Chicago. 44-57.
Chomsky, Noam (1965). Aspects of the Theory of Syntax. Cambridge: MIT Press.
Chomsky, Noam (1985 [1965]). Methodological Preliminaries. In Katz, J. (Ed.), The Philosophy of Linguistics. Oxford: Oxford University Press. 80-125.
Collins, Steven & Petronio, Karen (1998). What Happens in Tactile ASL? In Lucas, C. (Ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Washington, D.C.: Gallaudet University Press. 18-37.
Collins, Steven Douglas (2004). Adverbial Morphemes in Tactile American Sign Language. Doctoral Dissertation. Graduate College of Union Institute and University.
Comrie, Bernard (1989 [1981]). Language Universals and Linguistic Typology. Chicago: The University of Chicago Press.
Crasborn, Onno (2011). The other hand in sign language phonology. In Oostendorp, M. v., Ewen, C. J., Hume, E. & Rice, K. (Eds.), The Blackwell companion to phonology. Oxford: Wiley-Blackwell. 223-240.
DeGraff, Michel (2001[1999]). Creolization, Language Change, and Language Acquisition. In DeGraff, M. (Ed.), Language Creation and Language Change: Creolization, Diachrony, Development. Cambridge, Massachusetts: MIT Press. 1-46.
Duranti, Alessandro (1994). From Grammar to Politics: Linguistic Anthropology in a Western Samoan Village. Berkeley and Los Angeles: University of California Press.
Eccarius, Petra & Brentari, Diane (2007). Symmetry and Dominance: A cross-linguistic study of signs and classifier constructions. Lingua 117: 1169-1201.
Edwards, Terra (2012). Sensing the Rhythms of Everyday Life: temporal integration and tactile translation in the Seattle Deaf-Blind Community. Language In Society 41(1).
Edwards, Terra (forthcoming). Language Emergence in the Seattle DeafBlind Community. Unpublished PhD Dissertation., The University of California, Berkeley.
Enfield, Nick J. (2009). Composite Utterances. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Fauconnier, Gilles & Turner, Mark (1998). Conceptual Integration Networks. Cognitive Science 22(2): 133-187.
Fillmore, Charles J. (1968). The Case for Case. In Back, E. & Harms, R. T. (Eds.), Universals in Linguistic Theory. New York: Holt, Rinehart and Winston. 1-90.
Fusellier-Souza, I. (2006). Emergence and development of sign languages: from a semiogenetic point of view. Sign Language Studies 7(1): 30-56.
Giddens, Anthony (1979). Central Problems in Social Theory: Action, Structure and Contradiction in Social Analysis. Berkeley and Los Angeles: University of California Press.
Goffman, Erving (1974). Frame Analysis: An Essay on the Organization of Experience. Boston: Northeastern University Press.
Goldin-Meadow, Susan (2003). The Resilience of Language. New York: Psychology Press.
Goldin-Meadow, Susan (2010). Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua. Human Development 53: 303-311.
Goldin-Meadow, Susan & Feldman, Heidi (1977). The Development of Language-Like Communication Without a Language Model. Science 197( 4301): 22-24.
Goldin-Meadow, Susan & Morford, Marolyn (1985). Gesture in Early Child Language: Studies in Deaf and Hearing Children. Merrill-Palmer Quarterly 31(2): 145-176.
Goldin-Meadow, Susan & Mylander, Carolyn (1983). Gestural Communication in Deaf Children: Noneffect of Parental Input on Language Development. Science 221(4608): 372-374.
Grinevald, Colette (2000). A morphosyntactic typology of classifiers. In Senft, G. (Ed.), Systems of nominal classification. Cambridge: Cambridge University Press.
Gumperz, John J. (1992). Contextualization and Understanding. In Duranti, A. & Goodwin, C. (Eds.), Rethinking Context. Cambridge: Cambridge University Press. 229-252.
Haiman, John (1985). Introduction. Natural Syntax: Iconicity and Erosion. Cambridge: Cambridge University Press. 1-18.
Hanks, William F. (1990). Referential Practice: Language and Lived Space among the Maya. Chicago: The University of Chicago Press.
Hanks, William F. (1996). Language and Communicative Practice. Boulder: Westview Press.
Hanks, William F. (2005b). Pierre Bourdieu and the Practices of Language. Annual Review of Anthropology 34.
Hanks, William F. (2005b). Explorations in the Deictic Field. Current Anthropology 46(2): 191-220.
Hanks, William F. (2009). Fieldwork on Deixis. Journal of Pragmatics 41: 10-24.
Hanks, William F. (2010). Converting Words: Maya in the Age of the Cross. Berkeley: University of California Press.
Heller, Monica (2014). Gumperz and Social Justice. Journal of Linguistic Anthropology 23(3): 192-198.
Hill, Jane H. & Irvine, Judith T. (Eds.) (1992). Responsibility and Evidence in Oral Discourse. Cambridge: Cambridge University Press.
Hulst, Harry van der (1996). On the other hand. Lingua 98: 121-143.
Jackendoff, Ray (1990). Semantic Structures. Cambridge: MIT Press.
Jackendoff, Ray (2002). Foundations of Language: Brain, Meaning, Grammar, Evolution. New York: Oxford University Press.
Jakobson, Roman (1971 [1939]). Signe Zero. The Collected Writings of Roman Jakobson. 211-219.
Keating, Elizabeth & Mirus, Gene (2004). Signing in the car: Some issues in language and context. Deaf Worlds 20: 264-273.
Kegl, Judy, Senghas, Ann & Coppola, Marie (2001). Creation through Contact: sign language emergence and sign language change in Nicaragua. In DeGraff, M. (Ed.), Language Creation and Language Change: creolization, diachrony, and development. London: MIT Press.
Kockelman, Paul (2007). Agency: The Relation between Meaning, Power, and Knowledge. Current Anthropology 48(3): 375-401.
Kooij, Els van der (2002). Reducing phonological categories in Sign Language of The Netherlands: phonetic implementation and iconic motivation. Doctoral Dissertation. Leiden University.
Levinson, Stephen C. (1983). Pragmatics. Cambridge: Cambridge University Press.
McCawley, James D. (1976). Syntax and Semantics 7: Notes from the linguistic underground. New York: Academic Press.
Milroy, James (2001). Language ideologies and the consequences of standardization. Journal of Sociolinguistics 5(4): 530-555.
Morgan, Hope E. & Mayberry, Rachel I. (2012). Complexity in two-handed signs in Kenyan Sign Language. Sign Language & Linguistics 15(1): 147-174.
Morris, Charles (1971 [1938]). Foundations of the Theory of Signs. Chicago: University of Chicago Press.
Nakamura, Karen (2006). Deaf in Japan. Ithaca, NY: Cornell University Press.
Napoli, Donna Jo & Wu, Jeff (2003). Morpheme structure constraints on two-handed signs in American Sign Language: notions of symmetry. Sign Language & Linguistics 6(2): 123-205.
Newmeyer, Fredrick J. (1983). Grammatical Theory: its limits and its possibilities. Chicago: University of Chicago Press.
Newport, Elissa (2001[1999]). Reduced Input in the Acquisition of Signed Languages: Contributions to the Study of Creolization. In DeGraff, M. (Ed.), Language Creation and Language Change: Creolization, Diachrony, Development. Cambridge, Massachusetts: MIT Press. 161-178.
Nonaka, Angela M. (2007). Emergence of an Indigenous Sign Language and a Speech/Sign Community in Ban Khor, Thailand. Doctoral Dissertation. University of California, Los Angeles.
Peirce, Charles Sanders (1955/1940 [1893-1910]). Logic as Semiotic: The Theory of Signs. In Buchler, J. (Ed.), Philosophical Writings of Peirce. New York: Dover.
Petronio, Karen & Dively, Valeria (2006). YES, #NO, Visibility, and Variation in ASL and Tactile ASL. Sign Language Studies 7(1).
Quinto-Pozos, David (2002). Deictic points in the visual-gestural and tactile-gestural modalities. In Meier, R. P., Cormier, K. & Quinto-Pozos, D. (Eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press. 442-467.
Reed, Charlotte M., Delhorne, Lorraine A., Durlach, Nathaniel I. & Fischer, Susan D. (1995). A study of the tactual reception of Sign Language. Journal of Speech and Hearing Research 38: 477-489.
Sadock, Jerrold M. (1985). Autolexical Syntax: a proposal for the treatment of noun incorporation and similar phenomena. Natural Language and Linguistic Theory 3: 379-439.
Sandler, Wendy (1993). Hand in hand: The roles of the nondominant hand in Sign Language Phonology. The Linguistic Review 10(4): 337-390.
Sandler, Wendy, Meir, Irit, Padden, Carol & Aronoff, Mark (2005). The Emergence of Grammar: Systematic Structure in a New Language. Proceedings of the National Academy of Sciences of the United States of America 102(7): 2661-2665.
Saussure, Ferdinand de (1972 [1915]). Course in General Linguistics. New York: McGraw Hill.
Searle, John (1982 [1974]). Chomsky's Revolution in Linguistics. In Harman, G. (Ed.), On Noam Chomsky: critical essays. Amherst: The University of Massachusetts Press.
Senghas, Ann (1999). The Development of Early Spatial Morphology in Nicaraguan Sign Language. In Howell, S. C., Fish, S. A. & Keith-Lucas, T. (Eds.), The Proceedings of the Boston University Conference on Language Development. Boston: Cascadilla Press.
Senghas, Ann & Coppola, Marie (2001). Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar. Psychological Science 12(4).
Senghas, Richard (2003). New Ways to be Deaf in Nicaragua: Changes in Language, Personhood, and Community. In Monaghan, L., Nkamura, K., Schmaling, C. & Turner, G. H. (Eds.), Many Ways to be Deaf: International, Linguistic, and Sociocultural Variation. Washington D.C.: Gallaudet University Press. 260-282.
Sidnell, Jack & Enfield, Nick J. (2012). Language Diversity and Social Action. Current Anthropology 53: 302-333.
Silverstein, Michael (1996). Monoglot "Standard" in America: Standardization and Metaphors of Linguistic Hegemony. In Brenneis, D. (Ed.), The Matrix of Language: Contemporary Linguistic Anthropology. Boulder, CO: Westview.
Sperber, Dan & Wilson, Deirdre (1986). Relevance: Communication and Cognition. Cambridge: Harvard University Press.
Stokoe, William C. (2005 [1960]). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Journal of Deaf Studies and Deaf Education 10(1).
Thomason, Sarah Grey. Contact-induced typological change. In Haspelmath, M. (Ed.), Language Typology and Language Universals: an international handbook. Berlin: Walter de Gruyter. 1640-1648.
Tomasello, Michael (2008). Origins of Human Communication. Cambridge: MIT Press.
Trudgill, Peter (2008). Colonial dialect contact in the history of European languages: On the irrelevance of identity to new-dialect formation. Language in Society 37: 241-280.
Wittgenstein, Ludwig (2001 [1958, 1953]). Philosophical Investigations. Oxford: Blackwell.
Yuasa, Etsuyo & Sadock, Jerry M. (2002). Pseudo-subordination: a mismatch between syntax and semantics. Journal of Linguistics 38: 87-111.
Zeshan, Ulrike & Vos, Connie de (Eds.) (2012). Sign Languages in Village Communities. Boston/Berlin: De Gruyter.
Terra Edwards is a PhD Candidate in the Department of Anthropology at the University of California, Berkeley. Her dissertation, Language Emergence in the Seattle DeafBlind Community (forthcoming), examines the social, historical, and interactional foundations of the emergence and development of Tactile American Sign Language. She has also published on the topic of DeafBlind interpreting (Edwards 2012).
Footnotes
[i] In both spoken and signed languages, morphemes can be broken down into repeatable, meaningless elements, and combinations of those elements are constrained in arbitrary, language-specific ways. I use the term “sublexical” to refer to these meaningless elements, which combine in rule-governed ways to form lexical signs. In signed languages, lexical signs can be broken down into contrastive handshapes, locations, and movements, and these elements are combined in arbitrary, rule-governed ways, which differ cross-linguistically.
[ii] See Fusellier-Souza (2006), Nonaka (2007), R.J. Senghas (2003), and Zeshan and de Vos (2012) for complementary perspectives on the social foundations of language emergence and subsequent endangerment.
[iii] In a Peircian framework, this divergence could be analyzed as a “rhetorical” process, whereby sign-chains trigger elaborations in interpretants as the semiotic ground shifts from visual to tactile (Peirce 1955/1940 [1893-1910]:99). However, this perspective is less helpful in distinguishing semiotic systems that serve as input from the system that is created. I am concerned here not only with elaboration, but also the boundary between VASL and TASL grounded in typologically significant structural differences (see Thomason 2011:8) and differences in relations of embedding (discussed in the body of the text).
[iv] Including Seattle, but also Boston, Washington, D.C. and elsewhere.
[v] Also see Quinto-Pozos (2002) for a study of tactile communication systems used by three deaf-blind individuals.
[vi] All names are pseudonyms.
[vii] This is precisely the opposite of what Sidnell and Enfield call “collateral effects” (2012:313). It is an effect of interaction on grammar rather than an effect of grammar on interaction.
[viii] This is one of many ways of understanding constraints on agency and intentionality with respect to language use (e.g. Ahearn 2001, Duranti 1994, Hill and Irvine 1992, Giddens 1979, Kockelman 2007).
[ix] Following Bühler (2001 [1934]), I assume a distinction between the deictic field and the deictic system. Deictic signs name and point. Prior to embedding, their meanings are highly schematic (Hanks 2005). When they are applied in the speech situation, they receive specific and determinate “field values” (Bühler 2001 [1934]:99). Their symbolic meaning derives from oppositions in the language (Here is not there; I am not you), which accounts for definiteness of reference. Their indexical meaning derives from the deictic field, which accounts for directivity of reference. Bühler compares the deictic field to pathways and deictic signs to signposts on those pathways. For example, when a human “opens his mouth and begins to speak deictically, he says “ . . . there! is where the station must be,” and assumes temporarily the posture of a signpost” (ibid.:145). Construal of the deictic is not difficult because speakers and signposts “can do nothing other than take advantage—naturally to a greater or lesser extent—of the possibilities the deictic field offers them” (ibid.). In other words, pointing, like a signpost, merely clarifies potential ambiguities between, for instance, branches in a pathway. Therefore, the efficacy of a deictic sign is primarily attributable to the pathways, not the language (also see Hanks 2005:193-196).
[x] This is a selective reproduction of Hanks’ example.
[xi] See also Heller (2014).
[xii] For example, Keating and Mirus (2004) present an interesting discussion of momentary constraints imposed by a car on signs produced by its occupants.
[xiii] In a study of the tactile reception of sign language, Reed et al. (1995) found that deaf-blind individuals (not residing in the Seattle community) received VASL signs with 60-85% accuracy.
[xiv] This is comparable to processes of creolization, where reciprocal access to a shared code is compromised and, as a result, new grammatical sub-systems emerge that are typologically distinct from the source-languages (DeGraff 2001 [1999]; Thomason 2011:8). Prior to the pro-tactile movement, VASL had splintered into reduced and idiosyncratic communication systems, which is evidenced in part by the fact that once DeafBlind people have lost enough vision, the most effective communicators are those who know them personally and can draw on extensive shared knowledge. These idiosyncratic systems served as input to TASL. The effects of reduced input on language acquisition have been examined among deaf sighted children. For example, some deaf children are exposed to a grammatically simplified system produced by non-competent signers, and these systems have been compared to pidgins (Newport 2001 [1999]). In the acquisition process, the pidgin is elaborated, eventually yielding a creole-like system (ibid.). Where these processes unfold in a community of signers, idiosyncratic homesign systems contribute to the emergence of full-fledged languages (Goldin-Meadow 2003, Senghas 1999, Sandler et al. 2005). Unlike the children involved in these studies, DeafBlind people are adult language-users who acquired VASL as children and are trying to recover lost functionality. It is this process of (interactional) reconstruction that is acting on the grammar, not a process of language acquisition and transmission. Nevertheless, the process of building a new language out of idiosyncratic and reduced semiotic inputs grounds the comparison.
[xv] Taken from a videorecorded interview conducted by the author, which was later transcribed and translated into English.
[xvi] From fieldnotes recorded during dissertation fieldwork in 2010.
[xvii] Tomasello posits a social “infrastructure” through which, and against which, communicative intent can be inferred, communication conventions can be established, and languages can emerge (2008:1-12). The case of TASL supports this perspective, since its structure is being shaped by biological and interactional pressures in a specific cultural-historical context (ibid.:10-11). However, the focus on communicative intent is tempered by tensions endemic to the habitus-field relation.
[xviii] See Hanks (1990:148-152) for more on basic level participant frames and Edwards (forthcoming) for more on their relation to phonetics.
[xix] The fourth person in the frame had not yet joined the conversation. Participants are wearing blindfolds because residual vision can impinge on attempts to cultivate tactile sensibilities.
[xx] These images were taken from an online ASL dictionary (www.lifeprint.com) and in some cases modified for clarity.
[xxi] See Edwards (forthcoming) for a history of the pro-tactile movement and practices shaped by it.
[xxii] Out of 61 tokens produced by Group 1 signers, 46% were produced as one would expect in VASL. Out of 51 tokens produced by Group 2 signers, 25% were produced as one would expect in VASL.
[xxiii] Out of 39 tokens, 0% were produced as one would expect in VASL.
[xxiv] See Edwards (forthcoming) for a detailed analysis of changes in this sign type.
[xxv] Out of 87 tokens.
[xxvi] Out of 85 tokens.
[xxvii] See Edwards (forthcoming).
[xxviii] Stephen Anderson, David Perlmutter, and Maria Polinsky posed these questions in person.
Most members of the Seattle DeafBlind community are born deaf and, due to a genetic condition, lose their vision over the course of many years. While they grow up participating in Deaf social networks, visual environments eventually become untenable, and they are drawn to Seattle, where jobs and communication resources are available. Since the 1970s, the community has grown and developed new communication conventions. However, until recently, these conventions were aimed at helping DeafBlind people maintain access to visual fields of engagement. Greater forms of authority accrued to sighted social roles, and legitimacy accrued to visual modalities. Therefore, DeafBlind people attempted to use visual reception long after it had become ineffective.
In 2006, Adrijana[vi] became the first-ever DeafBlind director of the DeafBlind Service Center (DBSC), a non-profit organization that provides communication and advocacy services. She and her staff traced the many problems facing the community to a single cause: DeafBlind people did not have enough direct, tactile contact with their environment and others in it. In an effort to address this problem, a series of 20 pro-tactile workshops was organized for 11 DeafBlind participants in the spring of 2011. No interpreters were provided, and everyone—no matter how much they could still see—was required to communicate tactually. In these workshops, new interactional conventions were established, triggering a grammatical divergence between TASL and VASL. In analyzing this process, I draw on videorecordings collected during the workshops, as well as one year of dissertation fieldwork and 14 years of involvement with this community in a range of capacities, including interpreting. I begin in Section 2 with some conceptual preliminaries developed in recent work on language and practice theory. Section 3 describes DeafBlind communication prior to the pro-tactile movement; Section 4 examines pro-tactile communication, including new participant frameworks and their effect on the production and reception of signs. Section 5 concludes by arguing that these changes constitute the emergence of a new language.
My argument relies on the assumption that “a language” can be delimited and compared to other languages. From a strictly linguistic point of view, this is a difficult, but not impossible, claim. For example, Comrie argues that Anglo-Saxon and Modern English are now distinct languages due to a history of radical change in morphological typology (from synthetic to analytic, plus reduction in fusion) and word order typology (development of strict SVO order). Along these dimensions, he says, “it is hard to imagine two languages more different from one another than Anglo-Saxon and Modern English” (1999 [1981]:203). Likewise, Sandler et al. take distinct word order in ABSL (Al-Sayyid Bedouin Sign Language) and the surrounding languages as evidence of a clear boundary between systems (2005:2664). Along different typological dimensions, similarities might outweigh differences, and it is not clear how many similarities or differences would be necessary. Therefore, socio-political considerations often become decisive, especially where “standards” and “variants” are in play (e.g. Milroy 2001, Silverstein 1996, Trudgill 2008). As objects of metalinguistic reflection, languages are just one part of broader schemes of valuation and inequality, and claims about language-boundaries are often caught up in those dynamics.
The pro-tactile movement is not driven by metalinguistic reflection or valuation, but rather, by a shared desire for immediacy and co-presence (Edwards forthcoming). DeafBlind people have reflected upon and changed interaction conventions in order to establish tactile modes of co-presence. The emergence of new grammatical subsystems is an unintended consequence of those efforts[vii]. Therefore, social and political dynamics do affect the development of the language, but not via language-planning, shifts in language ideology, or other forms of metalinguistic discourse.
In arguing that TASL is emerging as a distinct language, I am making two claims. First, several grammatical subsystems are currently diverging from VASL in ways that foreshadow typologically divergent patterns (Edwards, forthcoming). Second, the grammar of TASL is being reconfigured as it articulates to new, historically emergent interactional and social fields. I am therefore claiming that a language is a configuration of grammatical subsystems embedded in historically and interactionally constituted fields of activity. In other words, a language is not strictly linguistic. However, it cannot be reduced to ideologies about language or meaning-effects that emerge out of interaction, either. Rather, a language as a whole must be grasped in the relations of embedding that cohere between social, interactional, and linguistic phenomena. This article focuses specifically on the sublexical structure of TASL as it is transformed by the sub-type of embedding I am calling “integration.” To understand this transformation, I appeal to practice theory, adapted for the study of language (Bourdieu 1990, Giddens 1979, Hanks 2005a, 2005b, 2009, Edwards 2012).
2. A Practice Approach to Language Emergence
DeafBlind people in Seattle were once sighted, and as vision deteriorated, they continued to orient to their environment as sighted people do. However, starting in 2007, under the influence of the pro-tactile movement, they began to cultivate tactile sensibilities. In order to account for these shifts in orientation and attention, I draw on Bourdieu’s notion of “habitus”.
2.1 Habitus
The habitus is shaped by socially and historically specific patterns of perception, thought, and action weighed against notions of appropriateness and politeness. It is formed through socialization in childhood and continues to solidify throughout life (Bourdieu 1990[1980]:53). We learn, as children, to recognize immediate and urgent triggers to speak and act in particular ways. This trigger-response loop operates below the level of awareness, making it possible for acquired patterns and schemes, which predispose us to respond to stimuli in particular ways, to reproduce the systems and regularities which created them (ibid.:55). This circularity yields a ground of “reasonable” and “common-sense” ideas (ibid.:58). Children are socialized to accept common sense as such, thereby naturalizing historical effects.
Bourdieu’s notion of habitus is influenced by Panofsky (among others), who identified broad, underlying cultural logics that derive from homologies between philosophical thought and the thought of cultural producers of a given period (Hanks 2005a:70). However, under the influence of Merleau-Ponty, Bourdieu argued that “the body, not the mind, was the ‘site’ of habitus” (ibid.:71). Merleau-Ponty conceives of the body as the site of a particular kind of knowledge or “grasp” that social actors have of being a body—a “corporeal schema”, which is transmitted by the habitus at the level of motoric habituation (Hanks 1996:69). Habitus exists only in dynamic tension with “field”.
2.2 Field
Hanks distinguishes between three kinds of fields: semantic, deictic, and social. A semantic field is “any structured set of terms that jointly subdivide a coherent space of meaning” (Hanks 2005b:192). A single term characterizes aspects of setting, but it also analyzes them according to contrasts with other terms in the same domain (ibid.:200). The deictic field includes: (1) “the positions of communicative agents relative to the participant frameworks they occupy”; (2) “The position occupied by the object of reference”; and (3) “The multiple dimensions whereby agents have access to objects” (ibid.:192-3). Lastly, Bourdieu’s social field is summarized as follows:
A form of social organization with two main aspects: (a) a configuration of social roles, agent positions, and the structures they fit into, and (b) the historical processes in which those positions are actually taken up [and] occupied by actors (individual or collective) (Hanks 2005a:72).
Following Bourdieu, Hanks understands discourse production as a way of taking positions in the social field. In position-taking, “habitus and field articulate: social positions give rise to embodied dispositions. To sustain engagement in a field is to be shaped, at least potentially by the positions one occupies” (1996:73). This is why, when we engage power structures, we tend to reproduce them, rather than change them, regardless of intent. This process of social reproduction is linked to language-use by means of legitimation and authorization. Legitimation accrues to styles and genres, and constraints on who has access to legitimate styles and genres limit access to power, reinforcing unequal power relations (ibid.:76). Authorization, on the other hand, accrues to the positions social actors occupy. Legitimation and authorization jointly constrain position-taking in the social field[viii].
While habitus and field are crucial for understanding shifts in sensory orientation among DeafBlind people, Bourdieu’s social actor is not quite reflective enough to account for the role that DeafBlind people are playing in the process. Giddens (1979) and Kockelman (2007) (via Peirce (1955/1940 [1893-1910])) provide useful alternatives by breaking the consciousness of the actor into three planes. Giddens’ categories include practical consciousness, discursive consciousness, and the unconscious (1979:2). He recognizes a kind of tacit, embodied knowledge like the kind transmitted by the habitus, but he argues that all social actors also “have some degree of discursive penetration of the social systems to whose constitution they contribute” (ibid.:5).
Kockelman also breaks the consciousness of the actor into three by appealing to three different types of “interpretant,” or sign-effect: affective, energetic, and representational (Peirce 1955/1940 [1893-1910]:378). Affective interpretants involve a change in body state, like blushing or sweating; energetic interpretants involve a physical response that requires some effort, but not necessarily intention, such as a flinch or a glance; and representational interpretants have propositional content, for example, an assertion such as “That was loud!” Each type has a double, or “ultimate interpretant”, which accumulates patterns (ibid.:378-9). For example, an ultimate affective interpretant is a “disposition for one’s bodily state to change” as opposed to an instance of one’s bodily state changing (ibid.:378). Ultimate affective and energetic interpretants are similar to the habitus, since both involve a disposition to respond to sensory stimuli in particular ways. Ultimate interpretants are dissimilar from the habitus in that there is no correlate to one of its core components—the Aristotelian notion of hexis, or the meeting of a desire to act with judgments of that desire against frames of social value (Hanks 1996:69). In addition, while ultimate representational interpretants account for more reflective and discursive modes of consciousness, it is not clear whether ultimate relations can obtain across categories. Can there be an ultimate representational interpretant, which accumulates affective and energetic patterns? This would be necessary in accounting for representational modes of reflection about co-presence and immediacy among DeafBlind people.
Kockelman’s framework diverges from the one developed here in another (and perhaps not unrelated) way: his object is abstract. It is a “correspondence-preserving projection from all interpretants of a sign” (ibid.:378). The object in the present analysis is, instead, an input to processes of embedding, and more specifically, integration. If the object were (primarily) a semiotic projection, the language would not be under such pressure to change; it is a problem of directionality. As will be discussed in the final section of the article, novel modes of perceptual access to the material qualities of objects, apart from thematization or characterization, exert pressure on the grammar via selective integration of linguistic and non-linguistic forms. While projection is certainly involved, effects are also moving in the other direction. The bi-directionality of integration, and embedding more generally, attributes a kind of concreteness to the object that is not found in a semiotically projected world (also see Edwards 2012:39). Excessive abstraction can also mask the importance of sensory capacity and orientation, which intervene in the sign-object relation in consequential ways via the body. In the present framework, the body, like the object, is relatively concrete.
2.3 Three Perspectives on the Body
There are three general perspectives on the body that are necessary for understanding the emergence of TASL: first, as a producer and receiver of signs; second, as an object of description and evaluation; and finally, as part of the indexical ground against which activity unfolds. Constraints on the production and perception of signs are what sign language linguists call “phonetic” constraints. With respect to VASL, Battison observes that from the addressee’s perspective, the body appears symmetrical—two eyes, two arms, two hands, and so on. However, from the signer’s perspective, the body is bilaterally asymmetrical (1978:26). One side is always more dominant than the other. The opposition between visual symmetry and the motoric asymmetry of the signer “creates a dynamic tension of great importance for the formational organization of signs…” (ibid.). A sign can be neither too complex to perceive, nor too complex to produce. That is to say, the physical production of the sign cannot involve motoric tasks that are difficult to execute (e.g. patting your head while rubbing your stomach) or perceptual tasks that are too demanding (e.g. producing movements that are too small to perceive easily). This type of constraint constitutes the first reduction in what is possible in VASL at the sublexical level.
In a practice framework, additional constraints are imposed by non-linguistic factors, which inhere in social and deictic fields. In the former, sign production and reception can be constrained by frames of social value that arise in part through talk about the body. If it is socially unacceptable to touch the addressee’s body, for example, a tactile language will not emerge. In order to negotiate norms like this, the body must be treated as “an object of evaluation through reference, description, and categorization” (Hanks 1996:248). In the deictic field, the body is part of the indexical ground against which activity unfolds (Hanks 1996:254-7, 2005b). If a DeafBlind person tells another DeafBlind person, “Here it is,” resolution of reference will require shared access to the environment (e.g. a grasp of where they are in space, reciprocal sensory access to the object, and any other relation that is relevant to both speaker and addressee). Here, the body is neither objectified, nor is its primary role to produce and receive signs. Rather, it is part of the background against which communicative activity becomes legible. While phonetic constraints inhere in the linguistic system, social and deictic constraints do not[ix]. However, they act indirectly on the language via “embedding.”
2.4 Embedding
Embedding describes a process whereby schematic form-meaning correspondences undergo “reshaping,” “conversion,” and “transformation” as values are retrieved from deictic and social fields (Hanks 2005b:194). Patterns of retrieval align the linguistic system with its contexts of use so that, as Bühler says, language is not “taken by surprise” when it encounters the world (Bühler 2001 [1934]:197). Rather, the linguistic system acts like a network of receptors, which have been shaped by these patterns and are therefore set to receive certain field-values and not others. At the same time, retrieval tends to echo across grammatical subsystems in arbitrary ways as the language develops.
Four mechanisms of embedding have been proposed: practical equivalences, counterparts, rules of thumb (Hanks 2005b) and integration (Edwards 2012). Practical equivalences are correspondences between “modes of access that interactants have to objects” (Hanks 2005b:202). For example, in Yucatec Maya, there are two enclitics, a’ and o’, which when combined with one of four bases, produce a proximal/distal distinction (ibid.:198-9). However, in practice, the o’ form can be used to refer to denotata that are “off-scene” (ibid.:201). In order to use the “distal” deictic this way, a “practical equivalence” must be established between “off-scene” and “distal”.
Counterparts establish relations of identity between objects (Hanks 2005b:202). For example, the proximal deictic can be used by a shaman to refer to a child who is off-scene if there is a visual trace of that child in his divining crystal. This is possible because the visual trace of the child is construed as the counterpart of the actual child (ibid.:201). The shaman is authorized to establish this relation by virtue of his social position, just as the radiologist’s position authorizes him to interpret x-rays (ibid.). Therefore, counterparts establish relations between: (1) schematic form-meaning correspondences (e.g. a’/o’=proximal/distal); (2) the deictic field, where access to the referent is established, and (3) the social field, where authorized speakers establish relations between (1) and (2) by using legitimate styles and genres of language use.
Rules of thumb guide speakers in responding to commonly occurring, or “stereotypical” situations (Hanks 2005b:206). For example, in Yucatec Maya, a stereotypical greeting includes a question-response sequence like the following (ibid.:206[x]):
Speaker A: “Where ‘ya goin’?”
Speaker B: “Just over here.”
This exchange “tells A nothing about where B is going or how far away it is, only that he is heading there.” (ibid.) Therefore, the proximal form, translated as “here”, is not associated with proximity at all, but rather, a routine situation. Each of these principles of embedding involves the instantiation and subsequent re-shaping of a form-meaning correspondence.
Embedding may, at first, appear indistinguishable from neighboring concepts such as “contextualization” (Gumperz 1992) and “keying” (Goffman 1974:40-82). Contextualization is an inferential process (i.e. Sperber and Wilson 1986, Levinson 1983), which involves “hypothesis–like tentative assessments of communicative intent” (Gumperz 1992:230). Similarly, keying involves a change in frame through which an activity is understood, for example, when playful, “bitinglike behavior” turns to biting (Goffman 1974:41-4). Both concepts work well for analyzing changes in meaning that correspond to changes in interactional context, signaled by things like facial expressions, bodily cues, prosody, code choice, etc. While embedding accounts for changes like this, it also requires a third analytic step that links interactional phenomena to broader and more lasting transformations such as those associated with colonization, missionization, and large-scale religious conversion (e.g. Hanks 2010). These processes operate on historical and institutional scales. Practice theory distinguishes between interactional and social scales in order to relate them in principled ways. In Giddens, for example, historical and interactional scales are linked via the “layering” of social structures (1979:65), which is similar to the notion of social embedding developed here. However, Giddens is concerned with social and interactional structures, while embedding draws attention to relations between social, interactional, and, crucially, linguistic structures.
Contextualization and keying both unfold, primarily, in the give and take, or back and forth of face-to-face interaction. Embedding in the social field shifts attention to the socio-political projects people pursue or are caught up in. Under this perspective, actors interact, but in doing so, they also fight for recognition and resources, intervene in discursive loops and demand new framings of their actions, encounter limits in the institutional roles made available to them by prior historical activity, and apply their common-sense reasoning in ways that often reproduce those limits. Embedding accounts for processes like this, which are more peripheral in interaction-based concepts[xi].
2.4.1 Integration as Embedding
Practical equivalences, counterparts, and rules of thumb all involve a shift or substitution in meaning with respect to a stable linguistic form. For example, when a “distal” deictic is used to refer to an off-scene denotatum in Yucatec Maya, the meaning is converted, but the form remains constant. In contrast, “integration” accounts for cases where both form and meaning are converted (Edwards 2012:61-3). In cognitive science, integration implies a partial projection of elements from two domains into a third, which manifests a structure that is not present in either of its inputs (Fauconnier and Turner 1998:133). The term is used here to describe the emergence of new linguistic forms, not present in the input. However, it takes into account a range of inputs that cannot be understood exclusively in terms of cognition, including social, deictic, and linguistic phenomena.
Effects of integration can be transient or perduring. For example, if two sighted users of VASL are communicating across a football field, they will extend the space within which signs are conventionally produced to increase visual salience. As a result, “location” and “movement” parameters of the sign will change. This is an effect of embedding in a deictic field where participants momentarily have reduced visual access to signs[xii]. Insofar as communicating across football fields constitutes a marked interactional context, this change in production is not relevant to our understanding of the structure of VASL. If, on the other hand, limited visual access is a permanent circumstance among a group of language users, and if this circumstance leads to historical shifts in sensory orientation and social organization, then integration will have more lasting effects.
Modes of access like this are also made feasible (or not) by broader processes of authorization and legitimation. For example, if the use of signed languages in public thrusts the signer into a subordinated social position, they are less likely to sign in public (e.g. Nakamura 2006:5). Therefore, while authorization and legitimation constrain position-taking, these processes can also restrict the feasibility of logically possible linguistic forms on social grounds. As new forms of authority accrued to DeafBlind social roles and the tactile modality was legitimized, a wider range of tactile linguistic forms became feasible for the language.
2.4.2. Embedding and Language Emergence
Emergent languages do not have fully formed linguistic systems as input (if they did, they would be the product of language change). However, where full-fledged languages emerge, there is always some kind of pre-existing semiotic input, such as gestural “home sign systems,” which are developed by deaf children or in small kin-networks where no signed language is available. These systems exhibit certain language-like properties, but do not constitute full-fledged languages (e.g. Kegl et al. 2001, Sandler et al. 2005, Goldin-Meadow 2010). In the Seattle DeafBlind community, pre-existing systems include reduced, or simplified, versions of VASL. DeafBlind people using VASL have limited perceptual access to signs[xiii]. As vision is lost, context becomes increasingly important for distinguishing signs from one another, tracking referents, linking deictic signs to language-external objects, etc. However, vision loss also restricts access to context. The convergence of these circumstances leads to a splintering of the language into simplified and idiosyncratic versions (Edwards Forthcoming). When simplified idiolects were instantiated in a tactile field, they were transformed, and a new language began to emerge. This process began not with the language, but with the reconfiguration of the habitus[xiv].
3. The DeafBlind Habitus in a Visual Field
Prior to the pro-tactile movement, DeafBlind people approximated visual modes of participation by working with sighted interpreters. For example, in Figure 1, the DeafBlind man on the right is standing on stage giving a presentation to an audience of DeafBlind people. The interpreter next to him relays visual cues, such as a raised hand, from the audience.
Figure 1: DeafBlind presenter (right) with sighted interpreter (left)
The audience is filled with dyads composed of one DeafBlind person and one interpreter. For example, in Figure 2, the man on the left is DeafBlind and the woman on the right is a sighted interpreter. The interpreter copies the presenter’s signs, so they can be received tactually by the DeafBlind person.
Figure 2: DeafBlind audience member (left) with sighted interpreter (right)
Each DeafBlind audience member using tactile reception must have at least one interpreter dedicated to them. Therefore, if there are 10 DeafBlind people present, there will be at least 10 interpreters working at any given time. Heavy mediation like this generates forms of distance between DeafBlind people and their environment that are detrimental to the visual habitus.
3.1 Degradation of the Visual Habitus
In interpreter-mediated participation frameworks, DeafBlind people do not have direct access to one another. Instead, utterances are channeled through several relays before reaching the intended addressee(s). This was the norm prior to the pro-tactile movement. If the original author produces an utterance with a grin or flushed cheeks, the only option for the interpreter is to add emoticon-like signs to the end of utterances (e.g. smile to represent a smile, or h-a for laughter). But these signs are not sensitive to the qualities of a particular smile, or the intensity of a particular gaze. For DeafBlind people who received descriptions like this for years, people started to feel like types of people, interactions started to feel like examples of interaction, and places started to feel like representations of places. This had adverse effects that were talked about as a lack of feeling, connection, emotion, and depth.
Lee, one of the leaders of the pro-tactile movement, explained that over time, this kind of distance leads to characteristically strange behavior. For example, sighted people living in Seattle are familiar with downtown hotels. They expect to find automatic, sliding glass doors at the entrance. They anticipate the slightly squishy floor mat as they pass through the threshold. If they are holding a paper coffee cup, only a half-glance will be necessary to confirm the existence of a cylindrical silver trash can into which they can dispose of their cup. “It’s always the same!” Lee said. However, DeafBlind people have, until recently, relied on sighted interpreters to navigate public spaces, preventing them from cultivating tactile sensibilities. As a result, Lee says, scenes like the following are likely to unfold[xv]:
A DeafBlind person walks into a [hotel], and runs into the garbage can turning the corner. They look shocked and tell the person they’re with that the placement of the trash can is not safe!
Outbursts like this seem out of place, since to everyone else, the placement of the trash can is utterly expectable and would become expectable for the DeafBlind person if they were using a cane and taking note of such regularities in their daily lives. However, prior to the pro-tactile movement it was common for DeafBlind people to avoid this kind of transition in their sensory orientation, since it would mean engaging less legitimate modes of communication. I encountered this problem often in routine interactions.
For example, one day, I entered a coffee shop with a DeafBlind man[xvi]. I told him there were several people in line ahead of us. He responded by repeatedly adjusting his footing, saying “Sorry. Sorry.” He clenched his fists and cringed, as if bracing for a collision. This sort of thing happened all the time: I would give a DeafBlind person a piece of information, and they would yell, “I’m sorry!” “I didn’t know!” “No one told me!” Or “I’m blind!” When I was living in Seattle and working as an interpreter—before I went to graduate school, and before the pro-tactile movement took hold—these events felt quirky to me, but not unusual. As a researcher, and after being influenced by the pro-tactile movement, I began to see them as symptoms of a serious and alarming problem, a sign that a process of social degeneration had begun—something like the degeneration of the habitus.
When the habitus is intact, we respond to immediate triggers to act in socially recognizable ways. However, our ability to do so depends on access to the immediate environment. The jumpiness observed among some DeafBlind people comes from the presence of triggers to act, minus particularities in the environment needed to guide specific action. You may know that addressing a sighted interlocutor requires particular posture or orientation, but after years of limited access to the bodies of others, you forget how to enact them, and your actions fail to snap to a common grid of intelligibility.
Over time, these failures accrue to the individual as the habitus degenerates. A person without a habitus has no common sense. They run into “perfectly ordinary” objects and complain loudly about their placement. “Out of nowhere” they brace for a collision. They respond to “routine” questions by yelling, “I’m blind!” These events thrust DeafBlind people into devalued social positions. They come to be viewed as developmentally delayed or incompetent. They are described as slow learners or as socially isolated and suffering. Younger DeafBlind people are horrified when they see these small dramas unfold, and they wonder if this is their fate. Sighted people rush in to provide more visual information, which triggers further confusion. Over several decades, the DeafBlind person drifts away from any legible position in the social order. They become “eccentric” or “odd”. There are stories about how so and so used to be really attractive—“I know it’s hard to believe now,” they say, “but women were lining up after him.” And then the mystery surrounding his decline—What happened? Why do you think he’s like that now?
Leaders of the pro-tactile movement saw these problems as rooted not in the failures of the individual, but in naturalized interactional structures. Their hypothesis was that DeafBlind people behave in non-normative ways because they don’t have enough direct, tactile access to their environment. Representations only make sense if they conjure experience, and too much reliance on interpreters had opened up a chasm between the two. In the terms employed here, they saw, via a “reflexive monitoring of conduct” (Giddens 1979:25) that habitus must articulate with field. Rather than attempting to prop up the visual habitus, they changed it and the fields it articulates to.
4. Generating a Tactile Habitus
Habitus and field were transformed in the pro-tactile workshops as DeafBlind people established new communication conventions, built around tactile modes of access and orientation. Early on, these changes were confusing. A bid for a turn was misunderstood as a sexual advance. An attempt at co-presence was misunderstood as a bid for a turn. Fairly quickly, though, possibilities were narrowed as patterns in interaction began to settle and social boundaries around touch were redrawn. Within new limits, a range of possible and expectable behaviors cohered and began to be evaluated against new frames of social value. There were new ways of being inappropriate and politeness quickly became a common sense matter— a new habitus began to emerge. In the following sections, I show how these changes, which are fundamentally social, are leading to changes in the linguistic system, and specifically, to a redistribution of sublexical complexity[xvii].
4.1 The Redistribution of Sub-Lexical Complexity in a Tactile Field
Comparing sublexical organization in spoken and signed languages, Battison (1978) begins with the unrestricted human vocal apparatus. The human body, he says, can make a wide range of sounds of which only a small portion can be recruited for speech (ibid.:20). Sublexical constraints act on this limited range of sound to produce a finite set of units. These units are combined in rule-governed ways to yield the allowable morphemes of a specific language, including their alternations when they occur in utterances (ibid.). By analogic extension, the human body can make a wide range of gestures. Sublexical constraints in signed languages act on some sub-set of physically possible gestures to produce a finite set of units, which when combined in rule-governed ways, produce the allowable morphemes in a signed language (as well as their alternations) (ibid.). Finite units are composed of contrastive handshapes, movements, and locations, which are not, themselves, meaningful, but combine to form signs that are systematically distinguishable from one another (ibid.:21-3).
In the case of both spoken and signed languages, there is a series of reductions enacted as increasingly demanding constraints are imposed on the capacities of the body. At the outer phonetic limits, capacity is primary. That is to say—there will be no gestural or sonic material admitted into the language that cannot be produced or perceived. However, in order to identify the limits of capacity, an unmarked, or “basic,” interactional context must be established[xviii]. Participants must not be in unusual, or unconventional, configurations (e.g. at unusual distances or in unusually low lighting). Therefore, at the most fundamental level, the phonetic structure of a language is dually constrained by the body’s capacity to produce and receive signs, on the one hand, and the habitual, sensory orientations that accrue to conventional participant frameworks, on the other. During the pro-tactile workshops, conventional configurations of participant roles changed, yielding two competing underlying participant frames (“speaker-addressee” and “speaker-addressees”). These frames were most commonly realized via two- and three-person frameworks like those below.
4.1.1 Two-Person Configurations
In Figure 3, Adrijana (left) is listening to Collin (right) using her left hand. Adrijana uses her right hand to provide tactile backchanneling cues and maintain co-presence. During the workshops, the passive hand was increasingly recruited for these purposes, putting pressure on the addressee to use one-handed reception. In three-person configurations, additional pressures were exerted, not only on the reception of signs, but also on their production.
Figure 3: Two-person configuration
4.1.2 Three-Person Configurations
In Figure 4, Adrijana is signing no to two interlocutors[xix]. In a three-person configuration like this, all signs are duplicated, so there is one copy for each addressee (Figure 5). Insofar as three-person configurations are a realization of a basic participant frame, this change has implications for the sub-lexical structure of TASL.
Figure 4-5: Duplicated one-handed sign: “no”
In visual signed languages, there are two manual articulators. The interaction of the articulators is constrained at the sublexical level (e.g. van der Hulst 1996, Sandler 1993, Eccarius and Brentari 2007, Morgan and Mayberry 2012, Stokoe 1960, Battison 1978, Channon 2004, Napoli and Wu 2003). New, and importantly, conventional, participant frameworks among DeafBlind people are exerting pressure on the way the manual articulators interact, and therefore on this level of grammatical organization. In particular, the role of the non-dominant hand is changing in three-person configurations. While in VASL the hands work in tandem to produce two-handed signs, in TASL each hand must produce an independently meaningful sign: one for each addressee. One-handed signs like no are straightforward; duplication of two-handed signs, however, is more complicated and is leading to more consequential changes in the production of lexical signs.
4.2 Changes in Constraints on Two-Handed Lexical Signs
There are three types of two-handed signs in VASL, which differ in the degree of symmetry that obtains between the two hands. First, there are signs in which the hands are maximally symmetrical such as which[xx] (Figure 6).
Figure 6: VASL sign “which”
Here, the hands perform identical motor acts with either synchronous, or alternating movement (Battison 1978:28-9). Next, there are two-handed signs that are specified for the same handshape, but one hand is active and the other is passive, as in name (Figure 7), where the dominant hand moves and the non-dominant hand remains stationary. These signs are less symmetrical, since the hands do not perform identical motor acts.
Figure 7: The VASL sign “name”
Finally, there are two-handed signs that are maximally asymmetrical, such as discuss (Figure 8).
Figure 8: The VASL sign “discuss”
These signs present the greatest challenge for DeafBlind signers in three-person configurations. In order to see how this problem was addressed, I collected tokens produced by people with different levels of exposure to pro-tactile practices. Group 1 includes 8 DeafBlind people who were in their first 2.5 weeks of the workshops, and therefore had very limited exposure. The second group is composed of 6 signers who had been participating in the workshops for at least 2.5 weeks. The third group includes the instructors—Adrijana and Lee, who had been developing pro-tactile practices for about 4 years at the time of the workshops[xxi].
For maximally asymmetrical signs, three strategies were applied. Early on, signers simply failed to duplicate the sign. For discuss, this resulted in the unmodified VASL sign (Figure 8). This strategy (or rather, lack thereof) was used most often by signers with very little exposure to pro-tactile practices (Group 1). In groups 2 and 3, exposure to pro-tactile practices increased and this strategy was used less often. In Figure 9, the Y-axis shows the percentage of tokens that were produced as one would expect in VASL. Moving from left to right on the X-axis, exposure to pro-tactile practices increases.
Figure 9: Percentage of signs produced like VASL signs
Early on, asymmetrical two-handed signs were produced as they would be in VASL more than 40% of the time. In the later weeks of the workshops, the frequency of this (non) strategy dropped to just over 20%[xxii]. After several years of exposure, 0% of these signs were produced as would be expected in VASL[xxiii]. Instead, two alternate strategies were applied in three-person configurations. After a few weeks of exposure, signers tended to alternate the active hand sequentially, so the dominant hand took the active role first, and then the non-dominant hand. This provided sequential access for both addressees via distinct channels. I call this “sequential alternation”. For discuss, the resulting sign is depicted in Figure 10.
Figure 10: active role assumed first by right hand, then by left.
Figure 11: Percentage of signs alternated sequentially
Early on in the workshops, sequential alternation was used about 25% of the time. In the later weeks of the workshops, this strategy more than doubled in frequency. After several years of exposure, there was a slight decline (Figure 11). The third strategy for duplicating maximally asymmetrical signs involved dropping the non-dominant hand entirely. In the resulting sign, both hands play simultaneous, active roles (Figure 12).
Figure 12: discuss (non-dominant hand drops)
For non-instructors in the workshops, the non-dominant hand was dropped about 20-30% of the time. Among the instructors, who had been developing pro-tactile practices for 4 years, this strategy increased in frequency to about 50%. The other 50% of signs were sequentially alternated (Figure 13).
Figure 13: Percentage of signs with non-dominant hand dropped
For moderately symmetrical two-handed signs (e.g. name in Figure 7), similar patterns were observed (Edwards, forthcoming). In addition, maximally symmetrical two-handed VASL signs (e.g. which in Figure 6), tended to become even more symmetrical. Recall that in this type of sign, the hands perform identical motor acts. However, the movement can be alternating or synchronous. In TASL, there is a preference for synchronous movement instead[xxiv]. The resulting sign for which is depicted in Figure 14.
Figure 14: which with synchronous movement
This shows that in TASL, more demanding constraints on symmetry are emerging in the production of two-handed signs in three-person configurations. One may ask, however, whether this constitutes a change in the sublexical structure of the language, or is, rather, a pragmatic adjustment for three-person configurations. In other words, is the two-person frame more basic than the three-person frame, or vice-versa? In order to address this question, I collected tokens of one-handed signs produced in two-person configurations. In this context, there are two logical possibilities. Signs can be produced as in VASL, or they can be duplicated as in three-person configurations. I found that duplication of one-handed signs in two-person configurations increased from 0% among those with little exposure to pro-tactile practices[xxv] to 18% among those with several years of exposure[xxvi]. This suggests that the motoric patterns shaped by three-person configurations are influencing the production of signs in two-person configurations, even though there is no pragmatic pressure to do so. A continuation of this trend is expectable, since languages do not generally reconfigure the articulatory apparatus according to the number of addressees present. In its current state of development, this is leading to a minimization of sublexical complexity in TASL.
4.3 Minimization of Sublexical Complexity
Constraints on symmetry are becoming more demanding in TASL as a result of communication pressures. While there are several lexical sign-types in VASL, TASL has one: maximally symmetrical two-handed signs. For all lexical signs in TASL, the hands must perform identical or symmetrical motor acts in every respect. As a result, features that mark phonological distinctions in VASL are becoming imperceptible in TASL. For example, without access to the face or the second hand, discuss (in Figure 10 above) is indistinguishable from the VASL sign argue (Figure 15).
Figure 15: VASL sign argue
In the early stages of the workshops, ambiguities like this were common, and not easily resolved. The solution was not to make small adjustments to the visual system, but rather, to alter the motoric patterns that constrain the system. Here we see habitus and sublexical structure converge. A tactile habitus is shaped by the cultivation of tactile sensibilities, which run counter to the normative visual world. Stigmas were confronted, triggers to act were defused and re-set, selves were lost and remade. New pro-tactile people began to apply strategies to communicative problems that involved more radical solutions than ever before. When faced with three-person configurations, they did not stop at surface level modifications. They went so far as to change the bilateral asymmetry of the body, becoming more ambidextrous than their previous, visual selves. This allowed them to produce two-handed signs that were maximally redundant, thereby enabling them to address two people at the same time.
In Battison’s terms, this constitutes a minimization of formational “complexity”, which Morgan and Mayberry succinctly capture: “A two-handed sign that shares all phonological aspects is the most redundant and therefore least complex […]. Increasing mismatches (departures from symmetry between the two hands) in each of these aspects create more complexity…” (2012:148). Insofar as complexity in lexical signs decreases further and ambiguity increases as a result, distinctive features can be expected to be redistributed as the system develops. Indeed, this is already occurring.
4.4. Increased Complexity in Classifier Constructions
In signed languages, lexical signs can be contrasted with “classifier constructions”, which are named for their similarity to classifiers in spoken languages. A verbal classifier in a spoken language, affixed to the verb, classifies a nominal argument according to semantic criteria, as in the following example from Cayuga (Grinvald 2000:67):
ohon'atatke: ak-hon'at-a:k
it-potato-rotten past/I-CL(potato)-eat
'I ate a rotten potato'
In visual signed languages, there are constructions that work like this. For example, in VASL, the B-handshape (Figure 16) can be incorporated into the verb complex to classify one of its nominal arguments as rectangular and flat:
paper flat-rectangular-thing-lay-on table.
“The piece of paper is lying on the table”
Figure 16: VASL classifier for flat, rectangular objects
However, there are also differences between classifier constructions in the two modalities. Departing from canonical understandings of classifiers in spoken languages, Edwards (2012) analyzes classifiers in VASL as forms that integrate linguistic and deictic elements. Linguistic elements are expressed by conventional handshapes that derive from the linguistic system (such as the B-handshape in Figure 16), and deictic elements are expressed by pairing those handshapes with gestures that respond to patterns in the deictic field (ibid.:43-49). While the B-handshape is a conventional form associated with the meaning “flat-rectangular-thing”, its placement (or location) and the movement necessary to convey that placement are not specified phonologically. Rather, they are guided by modes of access that speaker and addressee have to the referent (e.g. perceptual access, memory, shared knowledge, etc.). Therefore, while the handshape, location, and movement parameters of lexical signs can be accounted for with strictly linguistic analytics, this is only partially true for classifier constructions.
Despite these semiotic differences, it has been shown that visual sign language classifiers adhere to the same formational constraints that obtain for lexical signs, given the following, very general formulation of those constraints: “Maximize symmetry and restrict complexity in the handshape features of the two hands.” (Eccarius and Brentari 2007:1198). In TASL, the formational complexity of lexical signs has decreased as constraints on symmetry have grown more demanding. However, this is complemented by an increase of complexity in classifier constructions. For example, in Figure 18, Lee is describing the switch on a measuring tape like the one in the circle in Figure 17.
Figure 17: The switch on the measuring tape
Lee addresses the DeafBlind man on the right, while the woman on the left observes. Using her addressee’s hand as a place of articulation, Lee traces the rectangular shape of the switch (Figure 18).
Figure 18: Lee describes the shape of the switch
The square in 19 (a) represents the path of the signer’s fingers on the addressee’s hand. Figures 19 (b) and (c) represent the handshapes used by the signer to trace the edges of the rectangle.
(a) (b) (c)
Figure 19: path traced on addressee’s hand by signer
Lee then goes on to describe the way the measuring tape is handled. Figure 20 represents the hand of the signer moving up and down on the hand of the addressee. While possible places of articulation in VASL are limited to the body of the signer and a restricted space around it, TASL allows locations on the addressee’s body to be incorporated.
Figure 20: path of signer’s hand (right) on addressee’s hand (left)
What’s more, places of articulation on the body of the addressee are not limited to the hands. In the pro-tactile workshops, it became common to recruit the knees, the front of the thighs, the chest, arms, hands, face, head, and back of the addressee as well. However, articulation was not performed on the groin area, the area below the knees, the inner portion and backs of the thighs, the feet, or the front of the neck of the addressee. This shift constitutes a violation of constraints on location in VASL and suggests the emergence of new constraints in TASL.
In addition, when the hands of the addressee are incorporated into the sign, motoric constraints on bi-manual coordination are distributed over the dyad, resulting in increased articulatory complexity. For example, in Figure 21, Lee is describing the movement of a snake’s body. She grips Manuel’s arm just below the armpit and holds onto his wrist. Then she moves each point of contact alternately to produce a snake-like motion in Manuel’s arm. In Figure 21 (a), she moves Manuel’s arm away from her body, and in Figure 21 (b) she moves it back again.
(a) (b)
Figure 21: Lee (left) manipulates Manuel’s (right) arm into a snake-like motion
This requires motor coordination between signer and addressee. In addition, there are three, rather than two, articulators involved, each with a distinct motor task. In other words, articulatory complexity is greater in classifier constructions than in lexical signs, and in ways that violate constraints on complexity in VASL. In sum, formational complexity is being redistributed in TASL, and with further conventionalization, new constraints are expected to emerge, which will restrict articulatory complexity in new ways. In combination with changes in constraints on lexical signs, this constitutes a divergence in the structure of TASL and VASL.
5. Conclusion
Prior to the pro-tactile movement, greater forms of authority accrued to sighted social roles and legitimacy accrued to visual modalities. In an attempt to maintain legitimacy, DeafBlind people used VASL long after its communicative efficacy had been compromised. As a result of the pro-tactile movement, forms of authority accrued to pro-tactile social roles and legitimacy accrued to tactile modalities. This transition involved redrawing the boundaries around touch, thereby enabling the emergence of reciprocal, tactile participation frameworks. Within these frameworks, the DeafBlind body schema shifted, and with it, the indexical ground of reference, description, and categorization. One of the many effects of this was to exert pressure on the way signs were produced and received, eventually leading to a redistribution of formational complexity across grammatical sub-systems.
Focusing exclusively on the channel through which signs are produced and received, DeafBlind communication prior to the pro-tactile movement might appear to constitute a “tactile language”. However, when contrasted with the phenomena described here, it becomes clear that tactile reception of VASL is a compensatory strategy. It is a way of receiving signs tactually that were meant to be seen, just as lip-reading is a way of receiving signs visually that were meant to be heard. It was not until the language was thoroughly integrated with the social and physical environment that it became a “tactile language”. This view diverges from those that locate language emergence in a moment when the linguistic system is “liberated” from its contexts of use[xxvii] (e.g. Sandler et al. 2005:2664-5). What we are observing in the case of TASL is not the emergence of a new language as it is cut away from context, but the opposite—a process whereby the linguistic system is integrated into the fields with which it articulates. This distinction between abstraction and integration is at the core of current debates about language emergence.
For example, recent approaches to language emergence that emphasize abstraction tend to focus on the innate capacities of the human mind, as distinct from those of other primates (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985, A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). Innate structures are, by definition, present prior to activity. Therefore, in order to discern their nature and organization, context must be factored out to the greatest degree possible. For example, Sandler et al. report that Al-Sayyid Bedouin Sign Language (ABSL) developed a consistent word order in the space of two generations (2005). They argue that word order functions syntactically to signal relations between a verb and its arguments, and they conclude with the following reflection:
Of greater significance to us than any particular word order is the discovery that, very early in the life history of a language, a conventionalized pattern emerges for relating actions and events to the entities that perform and are affected by them, a pattern rooted in the basic syntactic notions of subject, object, and verb or predicate. Such conventionalization has the effect of liberating the language from its context or from relying on the semantic relations between a verb and its arguments (Sandler et al. 2005:2664-5).
A question was immediately raised in response to these claims about whether word order patterns in ABSL are driven by an emergent syntactic system or by patterns in discourse[xxviii]. This is a fundamental question because if patterns in word order are driven by discourse, their emergence cannot be attributed to the innate capacities of the mind alone.
The underlying issue is not unique to language emergence, and it is not new. It can be found, for example, in the problematic interaction of Saussure's principles of arbitrariness and linearity (1972 [1915]:66-70) and has resurfaced repeatedly as the field of linguistics has developed (e.g. Chomsky 1965, Fillmore 1968, Searle 1974, Sadock 1985, Jackendoff 1990, 2002, Yuasa and Sadock 2002, McCawley 1976, Jakobson 1971, Haiman 1985). Jakobson, for example, highlighted these problems when he argued that the order in which words are organized is not entirely arbitrary with respect to the phenomena they refer to since “the temporal order of speech events tends to mirror the order of narrated events in time or in rank” (1971:27). In order to address this problem, the semiotician Charles Morris argued that the “syntactical dimension” of language is constituted in the relations of sign vehicles to sign vehicles, and yet syntax also provides a set of rules through which interpreters respond to objects (1971 [1938]:26). Morris locates syntax, then, in the tension between “conventionalism” and “empiricism”, which together account for “the dual control of linguistic structure” (ibid.:12-13). However, in later responses to the problem, this kind of duality became unacceptable. For example, the construct that accounts for “competence” (Chomsky 1985) is, above all else, autonomous (Newmeyer 1983:4). And yet, autonomy is always being breached in one way or another (ibid.:27).
In turning to integration, over and against abstraction or “liberation,” the partial autonomy of the linguistic system is not a problem to be denied, but an opening to be explored via historical, ethnographic, and interactional modes of analysis. By looking at the linguistic system not as a perfectly bounded system, nor as an infinitely synthetic effect of Peircian rhetoric, we find ourselves with Wittgenstein, asking:
An indefinite sense—that would really not be a sense at all.—This is like: An indefinite boundary is not really a boundary at all. Here one thinks perhaps: if I say, ‘I have locked the man up fast in the room—there is only one door left open’—then I simply haven’t locked him in at all; his being locked in is a sham. One would be inclined to say here: ‘You haven’t done anything at all’. An enclosure with a hole in it is as good as none—but is that true? (2001 [1958, 1953] §99)
Our answer is this: the holes in the enclosure are like receptors, set to receive values that do not come from the language itself, but rather, from the social and deictic fields where the language has grown up (Bühler 2001 [1934]:99). When values are retrieved, there are effects that echo in the grammar in arbitrary ways. Emergent signed languages allow us to glimpse the mechanisms underlying this process in actual, historical time. In this article, I have proposed that integration, as a relation of embedding, accounts for crucial dimensions of this process. In doing so, I have also sketched a new, anthropological approach to the study of emergent signed languages, which I am calling the practice approach to language emergence.
Acknowledgements
Thank you to the members of the Seattle DeafBlind community who participated in and contributed to this research. The argument has benefited from support and feedback at earlier stages, especially from E. Mara Green, Gaurav Mathur, Peter Graif, Nick Enfield, Tom Porcello, Shaylih Muehlmann, Charles Goodwin, Frank Bechter, Eve Sweetser, Len Talmy, Dan Slobin, Kensy Cooperrider, Kamala Russell, Chiho Sunakawa, Xochtil Marsili Vargas, Diane Brentari, Sachiko Ide, Bill Hanks, Jack Sidnell, Bianca Dahl, Alejandro Paz, Wendy Sandler, James Fox, Miyako Inoue, Hope Morgan, Deniz İlkbaşaran, Carol Padden, Jelica Nuccio, aj granda, Theresa B. Smith, Vince Nuccio, Isaac Waisberg, and Nitzan Waisberg. Thank you, also, to two anonymous reviewers who provided exceedingly helpful comments and to the Wenner-Gren Foundation (Grant #8110) and the Department of Anthropology at the University of California, Berkeley, for funding this research.
References:
Ahearn, Laura M. (2001). Language and Agency. Annual Review of Anthropology 30: 109-137.
Battison, Robbin (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstock Press.
Bourdieu, Pierre (1990 [1980]). The Logic of Practice. Stanford: Stanford University Press.
Bühler, Karl (2011 [1934]). The Deictic Field of Language and Deictic Words. Theory of Language: the representational function of language. Amsterdam/Philadelphia: John Benjamins. 93-163.
Channon, Rachel (2004). The Symmetry and Dominance Conditions Reconsidered. Chicago Linguistic Society. Chicago. 44-57.
Chomsky, Noam (1965). Aspects of a Theory of Syntax. Cambridge: MIT Press.
Chomsky, Noam (1985 [1965]). Methodological Preliminaries. In Katz, J. (Ed.), The Philosophy of Linguistics. Oxford: Oxford University Press. 80-125.
Collins, Steven & Petronio, Karen (1998). What Happens in Tactile ASL? In Lucas, C. (Ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Washington, D.C.: Gallaudet University Press. 18-37.
Collins, Steven Douglas (2004). Adverbial Morphemes in Tactile American Sign Language. Graduate College of Union Institute and University.
Comrie, Bernard (1989[1981]). Language Universals and Linguistic Typology. Chicago: The University of Chicago Press.
Crasborn, Onno (2011). The other hand in sign language phonology. In Oostendorp, M. v., Ewen, C. J., Hume, E. & Rice, K. (Eds.), The Blackwell companion to phonology. Oxford: Wiley-Blackwell. 223-240.
DeGraff, Michel (2001[1999]). Creolization, Language Change, and Language Acquisition. In DeGraff, M. (Ed.), Language Creation and Language Change: Creolization, Diachrony, Development. Cambridge, Massachusetts: MIT Press. 1-46.
Duranti, Alessandro (1994). From Grammar to Politics: Linguistic Anthropology in a Western Samoan Village. Berkeley and Los Angeles: University of California Press.
Eccarius, Petra & Brentari, Diane (2007). Symmetry and Dominance: A cross-linguistic study of signs and classifier constructions. Lingua 117: 1169-1201.
Edwards, Terra (2012). Sensing the Rhythms of Everyday Life: temporal integration and tactile translation in the Seattle Deaf-Blind Community. Language in Society 41(1).
Edwards, Terra (forthcoming). Language Emergence in the Seattle DeafBlind Community. Unpublished PhD Dissertation., The University of California, Berkeley.
Enfield, Nick J. (2009). Composite Utterances. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Fauconnier, Gilles & Turner, Mark (1998). Conceptual Integration Networks. Cognitive Science 22(2): 133-187.
Fillmore, Charles J. (1968). The Case for Case. In Back, E. & Harms, R. T. (Eds.), Universals in Linguistic Theory. New York: Holt, Rinehart and Winston. 1-90.
Fusellier-Souza, I. (2006). Emergence and development of sign languages: from a semiogenetic point of view. Sign Language Studies 7(1): 30-56.
Giddens, Anthony (1979). Central Problems in Social Theory: Action, Structure and Contradiction in Social Analysis. Berkeley and Los Angeles: University of California Press.
Goffman, Erving (1974). Frame Analysis: An Essay on the Organization of Experience. Boston: Northeastern University Press.
Goldin-Meadow, Susan (2003). The Resilience of Language. New York: Psychology Press.
Goldin-Meadow, Susan (2010). Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua. Human Development 53: 303-311.
Goldin-Meadow, Susan & Feldman, Heidi (1977). The Development of Language-Like Communication Without a Language Model. Science 197(4301): 22-24.
Goldin-Meadow, Susan & Morford, Marolyn (1985). Gesture in Early Child Language: Studies in Deaf and Hearing Children. Merrill-Palmer Quarterly 31(2): 145-176.
Goldin-Meadow, Susan & Mylander, Carolyn (1983). Gestural Communication in Deaf Children: Noneffect of Parental Input on Language Development. Science 221(4608): 372-374.
Grinevald, Colette (2000). A morphosyntactic typology of classifiers. In Senft, G. (Ed.), Systems of nominal classification. Cambridge: Cambridge University Press.
Gumperz, John J. (1992). Contextualization and Understanding. In Duranti, A. & Goodwin, C. (Eds.), Rethinking Context. Cambridge: Cambridge University Press. 229-252.
Haiman, John (1985). Introduction. Natural Syntax: Iconicity and Erosion. Cambridge: Cambridge University Press. 1-18.
Hanks, William F. (1990). Referential Practice: Language and Lived Space among the Maya. Chicago: The University of Chicago Press.
Hanks, William F. (1996). Language and Communicative Practice. Boulder: Westview Press.
Hanks, William F. (2005a). Explorations in the Deictic Field. Current Anthropology 46(2): 191-220.
Hanks, William F. (2005b). Pierre Bourdieu and the Practices of Language. Annual Review of Anthropology 34: 67-83.
Hanks, William F. (2009). Fieldwork on Deixis. Journal of Pragmatics 41: 10-24.
Hanks, William F. (2010). Converting Words: Maya in the Age of the Cross. Berkeley: University of California Press.
Heller, Monica (2014). Gumperz and Social Justice. Journal of Linguistic Anthropology 23(3): 192-198.
Hill, Jane H. & Irvine, Judith T. (1992). Responsibility and Evidence in Oral Discourse. Journal of Pragmatics 28: 1-28.
Hulst, Harry van der (1996). On the other hand. Lingua 98: 121-143.
Jackendoff, Ray (1990). Semantic Structures. Cambridge: MIT Press.
Jackendoff, Ray (2002). Foundations of Language: Brain, Meaning, Grammar, Evolution. New York: Oxford University Press.
Jakobson, Roman (1971 [1939]). Signe Zero. The Collected Writings of Roman Jakobson. 211-219.
Keating, Elizabeth & Mirus, Gene (2004). Signing in the car: Some issues in language and context. Deaf Worlds 20: 264-273.
Kegl, Judy, Senghas, Ann & Coppola, Marie (2001). Creation through Contact: sign language emergence and sign language change in Nicaragua. In DeGraff, M. (Ed.), Language Creation and Language Change: creolization, diachrony, and development. London: MIT Press.
Kockelman, Paul (2007). Agency: The Relation between Meaning, Power, and Knowledge. Current Anthropology 48(3): 375-401.
Kooij, Els Van der (2002). Reducing phonological categories in Sign Language of The Netherlands: phonetic implementation and iconic motivation. Doctoral Dissertation. Leiden University.
Levinson, Stephen C. (1983). Pragmatics. Cambridge: Cambridge University Press.
McCawley, James D. (1976). Syntax and Semantics 7: Notes from the linguistic underground. New York: Academic Press.
Milroy, James (2001). Language ideologies and the consequences of standardization. Journal of Sociolinguistics 5(4): 530-555.
Morgan, Hope E. & Mayberry, Rachel I. (2012). Complexity in two-handed signs in Kenyan Sign Language. Sign Language & Linguistics 15(1): 147-174.
Morris, Charles (1971 [1938]). Foundations of the Theory of Signs. Chicago: University of Chicago Press.
Nakamura, Karen (2006). Deaf in Japan. Ithaca, NY: Cornell University Press.
Napoli, Donna Jo & Wu, Jeff (2003). Morpheme structure constraints on two-handed signs in American Sign Language: notions of symmetry. Sign Language & Linguistics 6(2): 123-205.
Newmeyer, Fredrick J. (1983). Grammatical Theory: its limits and its possibilities. Chicago: University of Chicago Press.
Newport, Elissa (2001[1999]). Reduced Input in the Acquisition of Signed Languages: Contributions to the Study of Creolization. In DeGraff, M. (Ed.), Language Creation and Language Change: Creolization, Diachrony, Development. Cambridge, Massachusetts: MIT Press. 161-178.
Nonaka, Angela M. (2007). Emergence of an Indigenous Sign Language and a Speech/Sign Community in Ban Khor, Thailand. Doctoral Dissertation. University of California, Los Angeles.
Peirce, Charles Sanders (1955/1940 [1893-1910]). Logic as Semiotic: The Theory of Signs. In Buchler, J. (Ed.), Philosophical Writings of Peirce. New York: Dover.
Petronio, Karen & Dively, Valeria (2006). YES, #NO, Visibility, and Variation in ASL and Tactile ASL. Sign Language Studies 7(1).
Quinto-Pozos, David (2002). Deictic points in the visual-gestural and tactile-gestural modalities. In Meier, R. P., Cormier, K. & Quinto-Pozos, D. (Eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press. 442-467.
Reed, Charlotte M., Delhorne, Lorraine A., Durlach, Nathaniel I. & Fischer, Susan D. (1995). A study of the tactual reception of Sign Language. Journal of Speech and Hearing Research 38: 477-489.
Sadock, Jerrold M. (1985). Autolexical Syntax: a proposal for the treatment of noun incorporation and similar phenomena. Natural Language and Linguistic Theory 3: 379-439.
Sandler, Wendy (1993). Hand in hand: The roles of the nondominant hand in Sign Language Phonology. The Linguistic Review 10(4): 337-390.
Sandler, Wendy, Meir, Irit, Padden, Carol & Aronoff, Mark (2005). The Emergence of Grammar: Systematic Structure in a New Language. Proceedings of the National Academy of Sciences of the United States of America 102(7): 2661-2665.
Saussure, Ferdinand de (1972 [1915]). Course in General Linguistics. New York: McGraw Hill.
Searle, John (1982 [1974]). Chomsky's Revolution in Linguistics. In Harman, G. (Ed.), On Noam Chomsky: critical essays. Amherst: The University of Massachusetts Press.
Senghas, Ann (1999). The Development of Early Spatial Morphology in Nicaraguan Sign Language. In Howell, S. C., Fish, S. A. & Keith-Lucas, T. (Eds.), The Proceedings of the Boston University Conference on Language Development. Boston: Cascadilla Press.
Senghas, Ann (2000 [1999]). The Development of Early Spatial Morphology in Nicaraguan Sign Language. In Howell, S. C., Fish, S. A. & Keith-Lucas, T. (Eds.), The Proceedings of the Boston University Conference on Language Development. Boston: Cascadilla Press.
Senghas, Ann & Coppola, Marie (2001). Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar. Psychological Science 12(4).
Senghas, Richard (2003). New Ways to be Deaf in Nicaragua: Changes in Language, Personhood, and Community. In Monaghan, L., Nkamura, K., Schmaling, C. & Turner, G. H. (Eds.), Many Ways to be Deaf: International, Linguistic, and Sociocultural Variation. Washington D.C.: Gallaudet University Press. 260-282.
Sidnell, Jack & Enfield, Nick J. (2012). Language Diversity and Social Action. Current Anthropology 53: 302-333.
Silverstein, Michael (1996). Monoglot "Standard" in America: Standardization and Metaphors of Linguistic Hegemony. In Brenneis, D. (Ed.), The Matrix of Language: Contemporary Linguistic Anthropology. Boulder, CO: Westview.
Sperber, Dan & Wilson, Deirdre (1986). Relevance: Communication and Cognition. Cambridge: Harvard University Press.
Stokoe, William C. (2005 [1960]). Sign Language Structure; An Outline of the Visual Communication Systems of the American Deaf. Journal of Deaf Studies and Deaf Education 10(1).
Thomason, Sarah Grey (2011). Contact-induced typological change. In Haspelmath, M. (Ed.), Language Typology and Language Universals: an international handbook. Berlin: Walter de Gruyter. 1640-1648.
Tomasello, Michael (2008). Origins of Human Communication. Cambridge: MIT Press.
Trudgill, Peter (2008). Colonial dialect contact in the history of European languages: On the irrelevance of identity to new-dialect formation. Language in Society 37: 241-280.
Wittgenstein, Ludwig (2001 [1958, 1953]). Philosophical Investigations. Oxford: Blackwell.
Yuasa, Etsuyo & Sadock, Jerry M. (2002). Pseudo-subordination: a mismatch between syntax and semantics. Journal of Linguistics 38: 87-111.
Zeshan, Ulrike & Vos, Connie de (Eds.) (2012). Sign Languages in Village Communities. Boston/Berlin: De Gruyter.
Terra Edwards is a PhD Candidate in the Department of Anthropology at the University of California, Berkeley. Her dissertation, Language Emergence in the Seattle DeafBlind Community (forthcoming), examines the social, historical, and interactional foundations of the emergence and development of Tactile American Sign Language. She has also published on the topic of DeafBlind interpreting (Edwards 2012).
Footnotes
[i] In both spoken and signed languages, morphemes can be broken down into repeatable, meaningless elements, and combinations of those elements are constrained in arbitrary, language-specific ways. I use the term “sublexical” to refer to meaningless elements, which combine in rule-governed ways to form lexical signs. In signed languages, lexical signs can be broken down into contrastive handshapes, locations, and movements, and these elements are combined in arbitrary, rule-governed ways, which differ cross-linguistically.
[ii] See Fusellier-Souza (2006), Nonaka (2007), R.J. Senghas (2003), and Zeshan and de Vos (2012) for complementary perspectives on the social foundations of language emergence and subsequent endangerment.
[iii] In a Peircian framework, this divergence could be analyzed as a “rhetorical” process, whereby sign-chains trigger elaborations in interpretants as the semiotic ground shifts from visual to tactile (Peirce 1955/1940 [1893-1910]:99). However, this perspective is less helpful in distinguishing semiotic systems that serve as input from the system that is created. I am concerned here not only with elaboration, but also the boundary between VASL and TASL grounded in typologically significant structural differences (see Thomason 2011:8) and differences in relations of embedding (discussed in the body of the text).
[iv] Including Seattle, but also Boston, Washington, D.C. and elsewhere.
[v] Also see Quinto-Pozos (2002) for a study of tactile communication systems used by three deaf-blind individuals.
[vi] All names are pseudonyms.
[vii] This is precisely the opposite of what Sidnell and Enfield call “collateral effects” (2012:313). It is an effect of interaction on grammar rather than an effect of grammar on interaction.
[viii] This is one of many ways of understanding constraints on agency and intentionality with respect to language use (e.g. Ahearn 2001, Duranti 1994, Hill and Irvine 1992, Giddens 1979, Kockelman 2007).
[ix] Following Bühler (2001 [1934]), I assume a distinction between the deictic field and the deictic system. Deictic signs name and point. Prior to embedding, their meanings are highly schematic (Hanks 2005a). When they are applied in the speech situation, they receive specific and determinate “field values” (Bühler 2001 [1934]:99). Their symbolic meaning derives from oppositions in the language (Here is not there; I am not you), which accounts for definiteness of reference. Their indexical meaning derives from the deictic field, which accounts for directivity of reference. Bühler compares the deictic field to pathways and deictic signs to signposts on those pathways. For example, when a human “opens his mouth and begins to speak deictically, he says “ . . . there! is where the station must be, and assumes temporarily the posture of a signpost” (ibid.:145). Construal of the deictic sign is not difficult because speakers and signposts “can do nothing other than take advantage—naturally to a greater or lesser extent—of the possibilities the deictic field offers them” (ibid.). In other words, pointing, like a signpost, merely clarifies potential ambiguities between, for instance, branches in a pathway. Therefore, the efficacy of a deictic sign is primarily attributable to the pathways, not the language (also see Hanks 2005a:193-196).
[x] This is a selective reproduction of Hanks’ example.
[xi] See also Heller (2014).
[xii] For example, Keating and Mirus (2004) present an interesting discussion of momentary constraints imposed by a car on signs produced by its occupants.
[xiii] In a study of the tactile reception of sign language, Reed et al. (1995) found that deaf-blind individuals (not residing in the Seattle community) received VASL signs with 60-85% accuracy.
[xiv] This is comparable to processes of creolization, where reciprocal access to a shared code is compromised and, as a result, new grammatical sub-systems emerge that are typologically distinct from the source languages (DeGraff 2001 [1999]; Thomason 2011:8). Prior to the pro-tactile movement, VASL had splintered into reduced and idiosyncratic communication systems, which is evidenced in part by the fact that once DeafBlind people have lost enough vision, the most effective communicators are those who know them personally and can draw on extensive shared knowledge. These idiosyncratic systems served as input to TASL. The effects of reduced input on language acquisition have been examined among deaf, sighted children. For example, some deaf children are exposed to a grammatically simplified system produced by non-competent signers, and these systems have been compared to pidgins (Newport 2001 [1999]). In the acquisition process, the pidgin is elaborated, eventually yielding a creole-like system (ibid.). Where these processes unfold in a community of signers, idiosyncratic homesign systems contribute to the emergence of full-fledged languages (Goldin-Meadow 2003, Senghas 1999, Sandler et al. 2005). Unlike the children involved in these studies, DeafBlind people are adult language-users who acquired VASL as children and are trying to recover lost functionality. It is this process of (interactional) reconstruction that is acting on the grammar, not a process of language acquisition and transmission. Nevertheless, the process of building a new language out of idiosyncratic and reduced semiotic inputs grounds the comparison.
[xv] Taken from a videorecorded interview conducted by the author, which was later transcribed and translated into English.
[xvi] From fieldnotes recorded during dissertation fieldwork in 2010.
[xvii] Tomasello posits a social “infrastructure” through which, and against which, communicative intent can be inferred, communication conventions can be established, and languages can emerge (2008:1-12). The case of TASL supports this perspective, since its structure is being shaped by biological and interactional pressures in a specific cultural-historical context (ibid.:10-11). However, the focus on communicative intent is tempered by tensions endemic to the habitus-field relation.
[xviii] See Hanks (1990:148-152) for more on basic level participant frames and Edwards (forthcoming) for more on their relation to phonetics.
[xix] The fourth person in the frame had not yet joined the conversation. Participants are wearing blindfolds because residual vision can impinge on attempts to cultivate tactile sensibilities.
[xx] These images were taken from an online ASL dictionary (www.lifeprint.com) and in some cases modified for clarity.
[xxi] See Edwards (forthcoming) for a history of the pro-tactile movement and practices shaped by it.
[xxii] Out of 61 tokens produced by Group 1 signers, 46% were produced as one would expect in VASL. Out of 51 tokens produced by Group 2 signers, 25% were produced as one would expect in VASL.
[xxiii] Out of 39 tokens, 0% were produced as one would expect in VASL.
[xxiv] See Edwards (forthcoming) for a detailed analysis of changes in this sign type.
[xxv] Out of 87 tokens.
[xxvi] Out of 85 tokens.
[xxvii] See Edwards (forthcoming).
[xxviii] Stephen Anderson, David Perlmutter, and Maria Polinsky posed these questions in person.