Language Emergence in the Seattle DeafBlind Community
by Terra Edwards
A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Anthropology in the Graduate Division of the University of California, Berkeley
Introduction
This dissertation examines the social and interactional foundations of a grammatical divergence between Tactile American Sign Language (TASL) and Visual American Sign Language (VASL). My central claim is that TASL is breaking away from the scaffolding of VASL and is emerging as a distinct linguistic system. In order to make that case, I examine the effects of a recent social movement, known as the pro-tactile movement, on communication practices in the Seattle DeafBlind(1) community, and I show how those practices are giving rise to new grammatical subsystems in TASL.
Prior research on language use among DeafBlind people in the United States shows that differences in production and reception of signs prior to the pro-tactile movement were “accommodations” and “adjustments” to VASL (Collins and Petronio 1998, Collins 2004, Petronio and Dively 2006). DeafBlind people compensated for vision loss by adjusting VASL signs in idiosyncratic ways. These compensatory strategies are comparable to lip-reading; they are ways of accessing a visual language, tactually, just as lip-reading is a way of accessing an auditory language, visually(2).
In contrast, the pro-tactile movement created new kinds of tactile people who no longer sought to reconstruct the visual world they once inhabited. Instead, they set out to build a world of their own(3). As this tactile world came together, it was coordinated with the linguistic system in new and consequential ways. I call this process “contextual integration” (4) and I accord it a central role in the emergence of TASL and language emergence more broadly(5).
My overarching argument relies on the assumption that “a language” can be delimited and compared to other languages. From a strictly linguistic perspective, this is a difficult claim. Typological categories, which serve as the basis of cross-linguistic comparison, do not apply to whole languages, but rather, to particular parts of a language, such as morphology, word order, or clause structure (Comrie 1989:52). It is theoretically possible to classify whole languages by correlating logically independent typological parameters. Comrie compares this approach to biological classification, “where typologizing an animal as a mammal subsumes a significant correlation among a number of logically independent criteria (e.g. viviparous, being covered with fur, having external ears, suckling its young)” (Comrie 1989:40). In linguistic typology, this kind of classification has been attempted, though not with much success (Comrie 1989:40). Using strictly linguistic parameters, it is difficult to find any ground for comparison, and therefore, it is difficult to distinguish one language from another.
In addition, variation at the level of the individual and the subordinated group, as well as diachronic change, further complicates any attempt to identify a language as such without relying on socially determined hierarchies, which value one variety above all others (Bynon 1977, Labov 1972). Equally problematic is the fact that bilingual language-users often mix codes in ways that obscure language boundaries (e.g. Urciuoli 1995), and in cases of language shift, competence may vary across a group of language-users so that some are "speakers" while others are "semi-speakers" (e.g. Dorian 1981, cited in Urciuoli 1995:530). This raises interesting questions about whether or when a language becomes a non-linguistic mode of semiosis, and how the boundary between the two might be identified. The same question is raised in reverse by recent work on young signed languages and homesign systems: when, for example, does a phonological system emerge, as such, and is it possible to have a "language" without one? (Brentari et al. 2009, Sandler et al. 2011).
In contrast, language boundaries come into sharp relief as objects of socio-political valuation (e.g. Gal and Irvine 1995, Milroy 2001, Silverstein 1996, Trudgill 2008, Urciuoli 1995). However, the pro-tactile movement is not driven by metalinguistic reflection or valuation, but rather, by a shared desire for immediacy and co-presence (Clark 2014, Chapter 3 and 4). In order to achieve tactile immediacy, DeafBlind people have reflected upon and changed their communication practices; the emergence of new grammatical subsystems is an unintended consequence of those efforts. Therefore, while social and political dynamics do affect the development of the language, it is not through language-planning, shifts in language ideology, or other forms of metalinguistic discourse. Rather, changes in the grammar are collateral effects(6) of changes in social and interactional processes.
In arguing that TASL is emerging as a distinct language, I am making two claims. First, several grammatical subsystems are currently diverging from VASL. At this stage of development, changes are most evident in the deictic and phonological systems. However, there are clear implications for morphological and syntactic systems as well. My second claim is that these changes are a result of a dis-articulation of VASL from the interactional and social fields it has grown up in, and a re-articulation of idiosyncratic, simplified versions of VASL to new, historically emergent fields.
I am therefore claiming that a language is a configuration of grammatical subsystems embedded in historically and interactionally constituted fields of activity. In other words, a language is not strictly linguistic. However, it cannot be reduced to ideologies about language or to meaning-effects that emerge out of interaction, either. Rather, a language as a whole must be grasped in the relations that cohere between social, interactional, and linguistic phenomena. As these relations tighten into increasingly restricted configurations via contextual integration, semiosis becomes more "language-like." Sapir assumed a process like contextual integration when he claimed that all languages are "formally complete":
By formal completeness I mean a profoundly significant peculiarity which is easily overlooked [ ... ] [A] language is so constructed that no matter what any speaker of it may desire to communicate, no matter how original or bizarre his idea or his fancy, the language is prepared to do his work (1949 [1934]:153).
Like Sapir, I am claiming that "a language" (7) should be seamlessly embedded in its contexts of use so that it can do all of the work its speakers require of it. However, under conditions of significant sensory change, this claim may not be valid. In Seattle, VASL could no longer do what its DeafBlind users required of it. This highlights the fact that languages can go through stages where they are not, in Sapir's sense, "formally complete," or seamlessly integrated with their contexts of use. Integration must therefore be understood as the outcome of socio-historical and interactional processes and not as an inherent property of all languages.
Most members of the Seattle DeafBlind community were born sighted and lose their vision gradually. Many of them acquired VASL as children, but over time, the language became increasingly difficult to use. Prior to the pro-tactile movement, DeafBlind individuals compensated for those difficulties in increasingly idiosyncratic ways as vision deteriorated. This led to a splintering of the language and of the pragmatic norms necessary for its use. For example, in the early stages of the pro-tactile movement, most members of the community were still resistant to new communication practices. Lee, one of the leaders of the movement, explained how people insisted on keeping their own idiosyncratic strategies, rather than adhering to emergent pro-tactile norms:
A month ago, I was with [Janet], and I ended up interpreting what people were saying because I wasn't lost, but she was totally lost and frustrated, and [she was] complaining that people weren't following all of the many ridiculous rules that you have to follow to make visual communication with her possible. She put it in terms of "respect." She said people weren't respecting her. They shouldn't walk quickly by--it's confusing. They should stand at the right distance. They should sign slowly ... It is not reasonable to expect people to do that, and they don't. So the result is that she's left out, and is getting more and more frustrated as time goes by ... I have already become pro-tactile. She won't embrace the pro-tactile movement, and she's getting older. She must be in her 50s by now. It is really incredible.
Prior to the pro-tactile movement, these kinds of "ridiculous" idiosyncratic rules were the only option, and, as Lee noted, they were usually not followed by others. Over time, this led to difficulties in language-use, an increase in social isolation, and ultimately, the un-learning of the language itself.
Evidence of un-learning can be found in the way DeafBlind signers produce utterances. For example, older DeafBlind signers stop expressing grammatical and prosodic cues on the face, leading to a “flat” stream of production, which can be difficult to parse (8). In some cases, compensatory cues are added, such as substituting the manual sign no for negation, where it would otherwise be expressed with the face and/or head (Petronio and Dively 2006). However, it is more often the case that DeafBlind listeners are expected to fill in missing information via pragmatic maneuvering of various kinds, including inference, guessing, and requests for more information. This only works for so long, and at some point they not only stop filling in the cues as listeners; they stop producing them as well.
Maintaining the psychological reality of the language also means remembering visually accessible forms, which correspond to differences in meaning. But as visual memory fades, the ability to maintain those connections is affected, and as a result, the language itself deteriorates. My evidence for this, presented throughout the dissertation, is largely ethnographic. DeafBlind people who have been blind for many years do not produce VASL signs the way that sighted people do, and they can be exceedingly difficult to communicate with. One signer, for example, who is in his 70s, produces lengthy pauses between individual signs and very few facial expressions. It is difficult for me, when listening to this signer, to understand what the topic is, when something happened, who did what to whom, and other basic information that one would expect, given a shared code, to be unambiguous.
In addition, words that are commonplace among the sighted, such as "email" and "computer," are not associated with any meaning at all for some DeafBlind signers, and attempts to explain their meanings often fail if there is a lack of experience with the objects or processes represented by those signs. Lexical and grammatical resources deteriorate in idiosyncratic ways for each DeafBlind person. This is an effect of vision loss, but it is also an effect of different degrees of isolation from the world and from things in the world. For example, in Figure 1.1, a visual interpreter is describing a sculpture in downtown Seattle to a DeafBlind man whom I call Roman. The sculpture (Figure 1.3) is a representation of a man holding a hammer. His arm is moving very slowly, up and down, hammering in slow-motion.
First the interpreter describes the motion of the arm on the sculpture by combining a conventional hand configuration (the fist in Figure 1.1a, represented schematically in Figure 1.2) with context-sensitive movements that represent the way the arm of the sculpture moves. Together, these elements characterize the referent according to its relevant dimensions. The interpreter then adds a deictic to direct the DeafBlind person's attention to, and individuate, the referent (Figure 1.1b). However, this description does not inspire immediate recognition in Roman. After a few seconds of searching for the referent (Figure 1.1c), and apparently failing to locate it, he says, "I remember I saw that sculpture about ten years ago."
Modes of access that allow the interpreter to link the hand configuration to its referent are tenuous for Roman. He is relying almost entirely on faded, flat memories that are not likely to conjure the sculpture's towering size, its immutable presence--black against a sharp, grey sky--or the striking temporal juxtaposition of the arm, slowly sliding back and forth against the fast-paced activity in the city around it. The interpreter's description can only be received by Roman as uprooted and abstract. He can, to some degree or another, understand the meaning of the interpreter's words, but he is alienated from the visual field the description is meant to articulate to. Roman is receiving utterances that are detached from the material particularities of the objects to which they refer. This form of abstraction, occurring across a group of language users over time, leads to a reduction in semantic complexity.
According to Fillmore (1976), the meanings of words are linked to other words via interactional and cognitive frames. An interactional frame structures things like greetings and leave-takings, and a cognitive frame links elements in a prototypical interaction. For example, the frame for a commercial transaction links elements like a buyer, a seller, the goods, the money, and so on. Activation of the entire system is a prerequisite to understanding the meaning of any one word within it. Aspects of frame and setting activate one another in the minds of people who have learned the conventional associations, and learning these associations is one of the main activities of language acquisition in early childhood (Fillmore 1976). Over time, as new domains of experience are linked to old frames in a given speech community, the frames themselves grow more complex. Therefore, Fillmore argues that frame semantics can be used to gain insight into the evolution of language by analyzing nascent linguistic systems, such as pidgins, creoles, and child language, in terms of relative frame complexity. The more complex the system of frames, the more developed the language (ibid.:30).
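To make relative frame complexity concrete, consider a minimal sketch. This is my own toy model, not Fillmore's formalism; the class, the naive complexity measure, and the example frames are all illustrative assumptions.

```python
# A toy model of Fillmore-style frames (my own illustration, not
# Fillmore's formalism).

class Frame:
    def __init__(self, name, elements):
        self.name = name
        self.elements = set(elements)   # prototypical roles, e.g. buyer, seller
        self.linked_frames = set()      # frames reachable from this one

    def link(self, other):
        # New domains of experience are linked to old frames over time.
        self.linked_frames.add(other)
        other.linked_frames.add(self)

    def activate(self, word):
        # Encountering any one word activates the entire system of associations.
        if word in self.elements:
            return self.elements | {f.name for f in self.linked_frames}
        return set()

def complexity(frame):
    # A naive proxy for "relative frame complexity": the density of associations.
    return len(frame.elements) + len(frame.linked_frames)

commerce = Frame("commercial_transaction", ["buyer", "seller", "goods", "money"])
banking = Frame("banking", ["account", "deposit", "loan"])
commerce.link(banking)

print(complexity(commerce))        # 5
print(commerce.activate("buyer"))  # the whole system lights up
```

On this toy model, the reversal described in the next paragraph would amount to deleting elements and links until activation returns the empty set.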
For Roman and for other DeafBlind people in Seattle, frame complexity in VASL has decreased. That is to say that when a form in VASL is received by Roman, fewer and fewer associations are activated for him, and over time, entire patches of the semantic field go dark. This slow loss of frame complexity in VASL can be compared to a reversal in the process of language acquisition, as it is conceived of by Fillmore. I call this process of language acquisition in reverse, “semantic erosion.”
Semantic erosion presents an additional layer of difficulty for DeafBlind people attempting to use VASL. Not only is the sign increasingly difficult to perceive and to distinguish from other signs, the meanings associated with signs deteriorate as well. Across a group of language users, the cumulative effect can be thought of as a slow leak, through which semantic content is evacuated in idiosyncratic ways. The root of the problem is the fundamentally non-reciprocal nature of communication for DeafBlind individuals. Everyone they communicated with prior to the pro-tactile movement had visual access to the immediate environment and, for the most part, communicated as if others did as well.
Reciprocity has been identified as a key requirement for the emergence of signed languages, more generally. For example, when deaf children grow up without access to a visually accessible language, they often create "homesign" systems (Goldin-Meadow and Feldman 1977). These systems do not become full-fledged languages because they are not shared by a community of users (Goldin-Meadow 2010:306)(9). Deaf children use homesigns to communicate with hearing caregivers and members of their family. However, just as sighted interpreters go on using VASL, hearing caregivers go on speaking English. Whatever co-speech gesture they use is integrated into a coherent communicative stream (Goldin-Meadow and Mylander 1983). Since the speech stream is inaccessible to the deaf homesigners, they receive partial and disordered communicative input, which they compensate for in different ways, generating idiosyncratic, but internally consistent communication systems.
When homesigners are brought together, for example, in a school, these systems can develop into a full-fledged language (Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001, Goldin-Meadow 2010). Two prerequisites have been identified as crucial for this transformation to take place. First, the system must be produced and received reciprocally within a community of users (Goldin-Meadow 2010:306). Second, the system must be transmitted from cohort to cohort (Senghas 2000 [1999]) or generation to generation (Sandler et al. 2005).
The Seattle DeafBlind community has existed since the early 1980s. However, a tactile language did not begin to emerge until 2010, when communication became reciprocal. This confirms that reciprocity is required, and it pushes this requirement beyond the exchange of the semiotic system itself, to include a more far-reaching "reciprocity of perspectives" (Schutz 1970:183). The reciprocity of perspectives is not a descriptive fact, but a principle that people orient to--they act as if there were a certain degree of similarity between their perspective and that of their interlocutor. At the perceptual level, this includes assumptions about the mutual accessibility of objects, people, signs, and events in the immediate environment, so that when I say "this," while pointing to an object, I assume that my interlocutor can see what I am pointing to, in more or less the same way that I see it.
In the DeafBlind community, this as if clause was pushed to its breaking point. While differences in sensory capacity, sensory orientation, social roles, status, biography, and memory all affect the ability of participants to establish reciprocity (Hanks 2013), this case highlights the fact that perspectives must be, to some degree, actually reciprocal. The pro-tactile movement legitimized tactile modes of access to the immediate environment, thereby building a foundation for a broader, tactually grounded "perspective." This made it possible for DeafBlind people to evaluate qualities such as pressure, speed, rhythm, and texture against new frames of social value. DeafBlind people no longer took instruction on how to hold their body or orient their gaze in order to give sighted people the impression that they were worthwhile, interesting, or legible. Instead, they began to instruct others on how to cultivate tactile sensibilities so that value and worth could be apprehended and evaluated in tactile terms. Within these frames of value, the social field took on a coherent and asymmetric organization--some DeafBlind signers emerged as legitimate leaders, imbued with more authority than others. Their authority was applied in judgments about the "correctness" of particular linguistic forms and interactional conventions, which contributed to processes of conventionalization. In Chapter 2, I argue that similar processes can be identified in other cases of language emergence as well. For example, in Nicaraguan Sign Language and Al-Sayyid Bedouin Sign Language, some styles, genres, or modalities of language became legitimate ways of being educated, smart, interesting, or "culturally Deaf," while others did not. Insofar as the language marks social distinctions like this, and can be used to access desirable positions in the social field, it will continue to organize idiosyncratic perspectives as social actors struggle and compete for resources.

(9) Green (2014) and Goodwin (2000) show many of the ways that radically non-reciprocal linguistic competence can be overcome (or not) via social and interactional means. I am arguing that similar procedures can act not only as a means of circumventing asymmetries, but also as a means of correcting them via augmentation of the linguistic system itself.
In 2007, as part of the pro-tactile movement, communicative expertise was redistributed within the community, contributing to the reorganization of the social field. DeafBlind people began to turn to one another to solve communication problems rather than relying on sighted people, and in doing so they realized that new communication conventions would need to be established. Toward this end, a series of 20 pro-tactile workshops was organized by two DeafBlind leaders for 11 DeafBlind participants. The goal of the workshops was to establish new conventions for direct, reciprocal, tactile communication, thereby reducing dependence on sighted people. As part of my dissertation research, I collected approximately 120 hours of video recordings of interaction and language use among DeafBlind people during the workshops. Over the course of ten weeks, these new communication practices contributed to a grammatical divergence between TASL and VASL, and ultimately, to the emergence of a new, tactile language. The main goal of this dissertation is to understand this process and establish a framework that is useful for understanding the relationship between language and context in other cases of language emergence as well.
1.1 Language Emergence and the Problem of Context
Recent approaches to language emergence have focused on the innate capacities of the human mind, as distinct from those of other primates (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985, A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). Innate structures are, by definition, present prior to activity. Therefore, in order to discern the nature and organization of these structures, context must be factored out to the greatest degree possible. Analytically, this amounts to a problem of extraction, since the innate structures of the mind are only visible via observation of language in use and other forms of activity.
For example, Sandler et al. (2005) report that Al-Sayyid Bedouin Sign Language developed a consistent word order in the space of two generations. They argue that word order functions syntactically to signal relations between a verb and its arguments, and they conclude with the following reflection:
Of greater significance to us than any particular word order is the discovery that, very early in the life history of a language, a conventionalized pattern emerges for relating actions and events to the entities that perform and are affected by them, a pattern rooted in the basic syntactic notions of subject, object, and verb or predicate. Such conventionalization has the effect of liberating the language from its context or from relying on the semantic relations between a verb and its arguments (Sandler et al. 2005:2664-5).
Upon reporting these findings, the authors were asked whether word order patterns in ABSL are driven by an emergent syntactic system or by patterns in discourse (10). This question is important because if patterns in word order are driven by discourse, their emergence cannot be attributed to the innate capacities of the mind alone.
The underlying problem is not new, nor is it specific to language emergence. It arises, for example, in the problematic interaction of Saussure's principles of arbitrariness and linearity (1972 [1915]:66-70). For Saussure, there is no abstract syntax that can be separated from co-present sound-patterns in a sequence, such as a sentence, or a "syntagma" (1972 [1915]:121). Value accrues to a unit in a syntagma by virtue of what precedes and/or what follows that unit. The units, in order to be related in this way, must be co-present. In other words, "syntagmatic relations hold in praesentia" (ibid.:122). The principle of linearity, in tandem with the principle of arbitrariness, governs langue, and yet linearity cannot be entirely extracted from the realm of parole: "Where syntagms are concerned ... one must recognize the fact that there is no clear boundary separating the language, as confirmed by communal usage, from speech, marked by the freedom of the individual. In many cases it is difficult to assign a combination of units to one or the other. Many combinations are the product of both, in proportions which cannot be accurately measured" (ibid.:123).
The semiotician Charles Morris recognizes a related analytic problem when he claims that syntax is constituted in the relations of sign vehicles to sign vehicles, and yet it also provides a set of rules through which interpreters respond to objects (1971 [1938]:26). His solution is to posit a tension between "conventionalism" and "empiricism," which accounts for "the dual control of linguistic structure" (ibid.:12-13). Along these same lines, Jakobson notes that the order in which words are organized is not entirely arbitrary with respect to the phenomena they refer to, since "the temporal order of speech events tends to mirror the order of narrated events in time or in rank" (1971:27). These problems are encountered any time the analyst attempts to move from language-use to abstract syntactic patterns, and therefore, they have resurfaced often as the field of linguistics has developed (e.g. Chomsky 1965, Fillmore 1968, Searle 1982 [1974], Sadock 1985, Jackendoff 1990, Yuasa and Sadock 2002, McCawley 1976, Jakobson 1971, Haiman 1985). However, these old problems are encountered in new and productive ways in debates about emergent signed languages.
In the case of homesign, deaf children develop language-like gestural systems, despite the fact that they are not exposed to a perceptible language (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). Goldin-Meadow and colleagues emphasize the important role the child must play in these processes, since there is no viable model for them to learn from. Therefore, analyzing these emergent gestural systems offers a window onto the innate, creative capacities of the child’s mind. However, in order to be sure that the phenomenon under investigation can be referred to innate capacities and is not an effect of some external process, distinct modes of semiosis must be distinguished from one another.
In the early work on homesign, the framework that was used to accomplish this combined Fillmore's case theory as it appeared in The Case for Case (1968) with a framework like the one put forth by Charles Morris in the Foundations of the Theory of Signs (1971 [1938]). Only the former was identified explicitly; however, the two basic categories of signs out of which phrases are built (deictic and characterizing signs) align with the terms found in Morris (1971 [1938]), and their use is consistent with his framework. By revisiting these frameworks, we can understand how the problems outlined above were addressed. Doing so makes a broader range of semiotic phenomena explicit, in ways that clarify the boundaries between innate capacities, the languages that are acquired when those capacities are applied, and the contexts in which languages are used.
1.1.1 The Case for a Theory of Signs
Morris defines semiosis as “the process in which something functions as a sign” (1971 [1938]:3). This process requires three things: (1) The Sign Vehicle/sign: “that which acts as a sign”; (2) The Designatum/denotatum: “That which the sign refers to”; and (3) The Interpretant/interpreter: “The effect of the sign on an interpreter, by virtue of which, the sign counts as a sign to that interpreter” (ibid.). In order to account for the relationship of the sign to context, Morris posits a three-way distinction between indexical, characterizing, and universal signs. Indexical signs denote an object and are exemplified by pointing. Characterizing signs denote objects, but also analyze them in some way, highlighting certain aspects (1971 [1938]:17).
In order for an object to be responded to, it must be located in terms of its relevant characteristics. This requires the combination of a characterizing sign and an indexical sign. The characterizing sign provides the determinateness of expectation (if I say "dog," you expect a dog), and the indexical sign provides the directivity of reference. Lastly, there must be signs that indicate the relation of these signs to one another and their relation to the class they are members of. These are "universal signs" (1971 [1938]:17). These sign types map onto the distinction between pragmatics, semantics, and syntactics in Morris. Pragmatics is constituted in the relation between the interpretant and the sign vehicle. Semantics inheres in the relation between the sign vehicle and the designatum. Syntactics is constituted in the relations between sign vehicles and the categories to which they belong. No one dimension can be dissociated from the others; a language is irreducibly triadic.
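Morris's distinctions can be summarized schematically. The following sketch is my own illustrative encoding, not Morris's notation; it renders the triad and the three sign types as data, with pragmatics, semantics, and syntactics as relations over them.

```python
# A schematic encoding of Morris's framework (illustrative only;
# the names are mine, not Morris's notation).
from dataclasses import dataclass
from enum import Enum

class SignType(Enum):
    INDEXICAL = "denotes an object, e.g. pointing"
    CHARACTERIZING = "denotes and analyzes an object, e.g. 'dog'"
    UNIVERSAL = "relates signs to one another and to their classes"

@dataclass
class Semiosis:
    sign_vehicle: str   # that which acts as a sign
    designatum: str     # that which the sign refers to
    interpretant: str   # the effect of the sign on an interpreter

    def pragmatics(self):
        # the relation between the interpretant and the sign vehicle
        return (self.interpretant, self.sign_vehicle)

    def semantics(self):
        # the relation between the sign vehicle and the designatum
        return (self.sign_vehicle, self.designatum)

def syntactics(sign_vehicles, category):
    # relations among sign vehicles and the categories they belong to
    return [(sv, category) for sv in sign_vehicles]

event = Semiosis(sign_vehicle="pointing gesture",
                 designatum="the dog",
                 interpretant="addressee turns toward the dog")
print(event.semantics())  # ('pointing gesture', 'the dog')
```

Deleting any one field leaves the sign event incomplete, which is the sense in which no dimension can be dissociated from the others.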
While Morris is clearly relevant to analyses of homesign, his framework is not foregrounded. Instead, Goldin-Meadow and colleagues point to Fillmore’s case theory (1968) in accounting for the innate structures of the child’s mind. Their challenge is to factor out external input, to be sure that contributions to the emergence of language-like homesign systems are the achievements of the child alone. However, the only factor outside of the child’s innate capacities that is explicitly ruled out is linguistic input. Other contextual factors play a pivotal role, which is reflected in the terms of analysis as well as the examples (11). This can be seen most clearly by viewing one of their examples first through Fillmore’s framework and then juxtaposing this with an analysis from Morris’s perspective. What I aim to show is that both frameworks are necessary in accounting for the regularities observed in homesign, and that this has consequences for our understanding of language emergence.
In The Case for Case, Fillmore argues that the syntax of a language cannot be stripped of all associated semantic elements, and further that semantic relations actually constitute an underlying structure, or "frame," that explains many syntactic constraints. The following example and others like it form the core of Fillmore's argument. He begins with a covert distinction between affectum and effectum, which is observable in the following two sentences (1968:4): (1) John ruined the table; and (2) John built the table. In sentence (1), the object exists prior to John's activities, and in sentence (2), it exists as a result of John's activities. It would appear, Fillmore says, that the distinction is purely semantic and that the syntactic system of English does not require its speakers to confront it. In other words, the ability to interpret the verb-object relation in two distinct ways in these two sentences has nothing to do with a knowledge of English syntax. Nevertheless, the distinction has syntactic relevance: the effectum object does not allow interrogation of the verb with "do to," while the affectum object does. Therefore, if you ask "What did John do to the table?" you can answer "What John did to the table was ruin it," but you cannot answer "What John did to the table was build it" (1968:4). The reason is that, prior to being built, the table doesn't exist.
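Read procedurally, Fillmore's diagnostic is a constraint keyed to a covert semantic feature. The following toy encoding is my own illustration, not Fillmore's notation:

```python
# Toy encoding of the affectum/effectum diagnostic (the labels and the
# lexicon are my own illustration, not Fillmore's notation).

VERB_OBJECT_TYPE = {
    "ruin": "affectum",   # the object exists prior to the action
    "build": "effectum",  # the object exists as a result of the action
}

def allows_do_to(verb):
    # "What did John do to the table?" presupposes that the table
    # already existed, so effectum verbs block the paraphrase.
    return VERB_OBJECT_TYPE[verb] == "affectum"

print(allows_do_to("ruin"))   # True:  "What John did to the table was ruin it."
print(allows_do_to("build"))  # False: *"What John did to the table was build it."
```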
This is a semantic fact that has implications for syntax. Fillmore calls relations of this kind case relations, or simply case (1968:21). Case relations are covert, and in their totality, form "a universal system of deep-structure cases" (1968:21). Case forms, on the other hand, are the expression of case relations "through affixation, suppletion, use of clitic particles, or constraints on word order" in a particular language (ibid.:21). At one level, cases are linguistic in nature, but Fillmore backs up further and sees them as consistent with a broader range of cognitive capacities, which are "identified" by the cases, just as the cases are identified by verbs and nouns. In Fillmore's words:
The case notions comprise a set of universal, presumably innate, concepts which identify certain types of judgments human beings are capable of making about the events that are going on around them, judgments about such matters as who did it, who it happened to, and what got changed (1968:24).
These broader cognitive capacities allow for the mental representation of events, actions, and the things that participate in them. In order to identify the structures that allow humans to discern who did it, who it happened to, and what got changed, syntax must be extractable, and therefore, autonomous, and yet, as Fillmore shows, its autonomy is a persistent problem.
In Fillmore's scheme, the correlates of signs that refer to, or characterize, actions are verbs, and those that refer to, or characterize, objects or entities are noun phrases (Fillmore 1968:24-5). The homesigners that Goldin-Meadow and colleagues are working with do not produce verbs and noun phrases, but combinations of pointing gestures and characterizing gestures. This poses no problem because in Fillmore's framework, the surface structure of the utterance is not important. The focus is instead on the relations that obtain between representations of referents (noun-like forms) and representations of actions and states (verb-like forms). Goldin-Meadow and Mylander "stress that [they] use linguistic terms such as sentence loosely and only to suggest that the deaf child's gesture strings share certain elemental properties with early sentences in child language" (1983:372). They never claim that these systems are linguistic systems, and are careful to distinguish language-like phenomena from language. However, verb-like gestures are, through the use of Fillmore's terms, implicitly compared to verbs, and noun-like gestures are compared to nouns (or noun phrases). Goldin-Meadow and Feldman decompose communicative events into elements and relations like this, arguing that when deprived of exposure to a conventional language, the minds of children act on the gestural resources available to them just as the mind of any child capable of acquiring language would, yielding a language like any other.
In one example, a child points at a shoe and then points at a table. In Fillmore’s scheme, we would start with the requested action: Please put the shoe on the table. The first pointing gesture stands in for a noun phrase that refers to the shoe. In relation to the action (verb-like element), this pointing gesture can be interpreted as the expression of the covert semantic element: patient. The second pointing gesture stands in for a noun phrase that refers to the table and can be interpreted as the expression of the covert semantic element: recipient.
In Morris's scheme, the first pointing gesture (or sign vehicle) refers to an object (or designatum), as does the second pointing gesture. For Morris, semantics consists in the relation between the sign vehicle and the designatum, so a semantic relation is expressed by these elements in Morris, just as it is in Fillmore. But we have only accounted for the noun-like elements of the example. There is no overt manifestation of the verb-like element. This element is a product of the interpretation--that the two pointing gestures are a request to put the shoe on the table. If the mother responded to the pointing gesture (sign vehicle) by picking up the shoe and putting it on the table, this response would constitute the interpretant, or "the effect of the sign on the interpreter." Since the utterance itself does not demand this interpretation, the analyst must have inferred it from a contextual scenario like the one I have just proposed.
For Morris, the response of the care-giver does not belong to semantics; it belongs to pragmatics, which inheres in the relations between interpretants and sign vehicles. Fillmore's model does not account for communicative effects of sign vehicles, nor does it account for objects apart from their mental representations. Therefore, both frameworks are necessary in assigning semantic roles to the gestures that make up the gesture phrase. Without pragmatics, there is no action; without an action, there can be no case relations; and without case relations, there can be no innate capacities of the mind. Therefore, while "syntactics," in Morris's terms, has become central to arguments about the emergence of new languages, autonomy (not surprisingly) remains problematic.
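The complementarity of the two frameworks can be made explicit by running the shoe-and-table example through both vocabularies at once. The sketch below is my own illustrative rendering; every label in it is an assumption, not the authors' notation.

```python
# The shoe-and-table example, run through both vocabularies at once
# (all labels are my own illustrative choices).

gesture_string = [
    {"sign_vehicle": "pointing gesture 1", "designatum": "shoe"},
    {"sign_vehicle": "pointing gesture 2", "designatum": "table"},
]

# Morris: the caregiver's response is the interpretant, and it belongs to
# pragmatics. It is not recoverable from the gesture string alone; the
# analyst infers it from a contextual scenario.
interpretant = "caregiver puts the shoe on the table"

# Fillmore: only once an action is supplied can case relations be
# assigned to the noun-like elements.
def assign_cases(gestures, action):
    if action == "put":
        return {
            gestures[0]["designatum"]: "patient",    # the thing moved
            gestures[1]["designatum"]: "recipient",  # where it ends up
        }
    return {}

print(assign_cases(gesture_string, "put"))
# {'shoe': 'patient', 'table': 'recipient'}
# With no pragmatically inferred action, assign_cases returns {}:
# no action, no case relations.
```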
If Fillmore and Morris were explicitly combined, we could understand the increasingly consistent ordering of semantic elements in homesign systems as a kind of integration between deictic, characterizing, and universal signs. With repeated use in familiar contexts, deictic and characterizing signs become increasingly caught up in and coordinated by relations of signs to one another and to the underlying categories they are members of; and the reverse is also true. The relations of signs to one another and the underlying categories they are members of are increasingly caught up in and coordinated by patterns in the way objects are individuated and characterized. This move brings us into a broader analytic frame in order to distinguish between what is "universal" (in Morris's terms) and what is not--prior to more detailed analysis of any one dimension of the phenomenon.
In addition, these semiotic processes are embedded in socio-historical frames, which have also been crucial in understanding how nascent signed languages emerge. For example, in 1946, the first special education school was established in Managua (Polich 2005:24). Before that time, deaf children in Nicaragua had very little contact with the outside world and no contact at all with other deaf children. There were no schools for deaf children (or children with other disabilities) and no way for them to acquire basic communication or living skills (ibid.:13-24). By 1974, there were four schools involved in educating deaf children (Polich 2005:24). These changes coincided with an important, and much broader, transition in public perspectives on disability. Deaf people went from being seen as "eternal children" incapable of becoming productive adults to being seen as "potentially remediable subjects" (Polich 2005:24). Opportunities for deaf people in Nicaragua began to grow. Then, in 1979, the Sandinista Revolution took root, and the number of special education schools grew as well. From there, advocacy groups, clubs, and grass-roots organizations emerged (ibid.:53-91).
Within these groups certain individuals emerged as leaders within the deaf community. Even prior to the emergence of a full-fledged language, meta-linguistic discourses began to circulate, and the internal stratification of the community imbued some deaf people with the authority to decide what counted as the “correct” form of a sign (Polich 2005:53-91). The possibility of signing in “correct” and “incorrect” ways and the emergence of experts within the group meant that the language, even as it was forming, was viewed by deaf people as a legitimate means of position-taking in an internally asymmetric social field.
In chapter 2, I argue that this process is a prerequisite for language emergence. If the semiotic system in question is not a legitimate means of position-taking, it will not become a full-fledged language (12). Therefore, in addition to the requirement that a semiotic system must be transmitted from cohort to cohort (Senghas 2000 [1999]) or generation to generation in a community of users (Sandler et al. 2005), and that it must be a reciprocal means of communication (given a broader understanding of reciprocity), I am also claiming that a language must be a viable way of occupying social positions, and that those positions must be embedded in patterns of inequality within the community of language-users. In order to build a framework that examines language emergence in broader semiotic and socio-historical frames, I appeal to practice theory, as it has been developed for the analysis of language (Bourdieu 1990 [1980], Giddens 1979, Hanks 2005a, 2005b, 2009, Edwards 2012). In the following section, three key concepts are discussed in relation to the emergence of TASL and other signed languages: habitus, field, and embedding.
1.2 Language Emergence in a Practice Framework
DeafBlind people in Seattle were once sighted. They oriented to their immediate environment in ways that sighted people do, and they continued to do so, even after they lost their vision. Starting in 2007, under the influence of the pro-tactile movement, DeafBlind people began to cultivate tactile sensibilities. This shift, which eventually led to the emergence of new grammatical systems in TASL, can be understood as a reconfiguration of the “habitus.” This process is social in nature and does not yield to linguistic analytics, but, as I will show, it has consequences for the structure of the emergent linguistic system.
1.2.1 Habitus
Habitus derives from socially and historically specific patterns of perception, thought, and action weighed against notions of correctness, appropriateness, and politeness. These patterns take shape through processes of socialization in childhood and beyond (Bourdieu 1990 [1980]:53). According to Bourdieu, we are socialized to recognize certain immediate and urgent triggers to say something or not say it, to act or not act, and to identify certain objects in the environment as relevant, or not relevant. The trigger-response loop is automatic, which hides the fact that all of these acquired patterns and schemes which predispose us to respond to stimuli in particular ways are themselves predisposed to reproduce the systems and regularities which created them (ibid.:55). Out of this circularity, a “common sense” is instilled in the individual, understood as “embodied history, internalized as second nature and so forgotten as history” (ibid.:56). Children are socialized to accept common sense as such and this works to naturalize historical effects (13).
Bourdieu's formulation of habitus can be traced to Panofsky, who viewed "cultural production [as] profoundly shaped by the ways of thinking of its time" (Hanks 2005a:70). Panofsky proposed homologies between philosophical thought and the thought procedures of cultural producers in a given period, which give rise to widespread, underlying logics of cultural production. Bourdieu drew on Panofsky's thinking, but under the influence of Merleau-Ponty, he went on to propose "that the body, not the mind, was the site of habitus" (ibid.:71). Panofsky's notion was further modified through its synthesis with the Aristotelian notion of hexis (the meeting of an intention, or desire, to act with judgments of that intention against frames of social value and meaning), as well as with phenomenological notions of habituality and embodiment. The phenomenological dimensions of habitus were taken from Merleau-Ponty, who saw the body as the site of a particular kind of knowledge or "grasp" that social actors have of being a body--a "corporeal schema," which is transmitted by the habitus (see Hanks 1996:69). In sum, the habitus is shaped by patterns of perception, thought, and action, along with social frames of value that guide the actor in applying those patterns in ways that feel appropriate, correct, and polite. These patterns are internalized at the level of the corporeal schema, where they are difficult to reflect on or reason about.
DeafBlind people grew up sighted, and during that time, they developed a corporeal schema, which was coherent in a field of visual dynamics and relations. Prior to the pro-tactile movement, communicative conventions in the community were established in order to maintain that schema. DeafBlind people used interpreters who could help them orient their body to their addressee in a way that would feel appropriate to sighted people; they stood at distances that would feel polite, and refrained from touching others, for fear of being rude. All of this served to maintain the visual habitus as long as possible. However, attempts at enacting the visual habitus eventually led to characteristically strange behavior, which, in turn, led to less coherent social relationships and ultimately to greater social isolation.
Leaders of the pro-tactile movement traced these problems to a single cause: DeafBlind people did not have enough tactile access to their environment. They argued that representations only make sense if they conjure experience, and because DeafBlind people had been relying so heavily on interpreters, a chasm between the two had opened up. In other words, via a "reflexive monitoring of conduct" (Giddens 1979:25), DeafBlind leaders saw that habitus must articulate with field. Rather than attempting to prop up the visual habitus, they intervened in the social order at the level of motoric habituation, and established a tactile habitus. They did this consciously and effectively, in ways that Bourdieu might not have predicted, since his social actor operates in mostly non-reflexive modes. In order to account for these kinds of conscious interventions, Giddens divides the consciousness of the actor into three planes: practical consciousness, discursive consciousness, and the unconscious (1979:2). He recognizes a kind of tacit, embodied knowledge like the kind transmitted by the habitus, but he argues that all social actors also "have some degree of discursive penetration of the social systems to whose constitution they contribute" (ibid.:5).
In the pro-tactile workshops, discursive and practical consciousness were ramped up, and the "unconscious," in Giddens' terms, was altered. As both Bourdieu and Giddens might expect, these changes were confusing in early stages of the transformation. A bid for a turn was misunderstood as a sexual advance. An attempt at co-presence was misunderstood as a bid for a turn. Fairly quickly, though, possibilities were narrowed as patterns in interaction began to settle and social boundaries around touch were redrawn. Within new limits, a range of possible and expectable behaviors cohered and began to be evaluated against new frames of social value. Embodied communicative behaviors went from choppy and arrhythmic to smooth and automatic within the span of a few weeks. There were new ways of being inappropriate, and politeness quickly became a common sense matter--a new habitus began to emerge.
This process, which is fundamentally social, unfolds on the level of motoric habituation and therefore also affects the production and reception of signs in ways that are linguistically significant. However, feature hierarchies are not useful for understanding changes in the habitus and politeness is irrelevant for understanding feature hierarchies. Therefore, the two orders of phenomena must remain analytically distinct, despite the fact that, in practice, they are intimately related. In addition, the habitus must be distinguished from the social fields to which it articulates, despite the fact that in practice, they are inextricable. The “field” concept is useful in establishing these analytic distinctions.
1.2.2 Field
A field, broadly construed, is a structured space into which elements can be inserted, or on which they can be arranged. For example, an electronic form is composed of spaces paired with specifications for information, such as last name, first name, date of birth, etc. Each space is set to receive elements arranged in a particular order or formatted in a particular way. For example, names that are too long are truncated, and if a date is entered in an unrecognizable format, the form will be returned.
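The behavior of such a form can be sketched directly. The field names and rules below are hypothetical, chosen only to illustrate the idea of a structured space that constrains what can be inserted into it:

```python
# A field as a structured space of slots paired with constraints
# (hypothetical field names and rules, for illustration only).
from datetime import datetime

FIELDS = {
    "last_name":     {"max_len": 20},
    "first_name":    {"max_len": 20},
    "date_of_birth": {"format": "%Y-%m-%d"},
}

def insert_value(field_name, value):
    spec = FIELDS[field_name]
    if "max_len" in spec:
        # Names that are too long are truncated.
        return value[:spec["max_len"]]
    if "format" in spec:
        # If a date is entered in an unrecognizable format,
        # the form is returned.
        try:
            datetime.strptime(value, spec["format"])
        except ValueError:
            raise ValueError("form returned: unrecognized date format")
    return value

print(insert_value("last_name", "Featherstonehaugh-Cholmondeley"))  # truncated
print(insert_value("date_of_birth", "1984-06-02"))                  # accepted
# insert_value("date_of_birth", "June 2, 1984") raises ValueError
```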
For Karl Bühler, language and everything around it is replete with fields of all kinds: the symbolic field, the deictic field, the perceptual field, the inner field, the outer field, the field system of the type language, and so on (2001 [1934])(14). Bühler's fields are exemplified by grids, schemes, chess boards, geographical coordinates, the lines on music paper, vacant slots, and pathways in the countryside, on which signposts are situated. In practice theory, the field concept has been taken up as a way of understanding the dynamics of institutionally embedded social roles (Bourdieu 1990 [1980], Hanks 2005a)(15). In this dissertation, I distinguish between three fields, all of which are necessary for understanding the social and interactional foundations of language emergence in the Seattle DeafBlind community: the social field, the deictic field, and the symbolic field.
The Social Field
The social field is a structured space, into which elements are inserted and values are assumed. Its structure is defined by two things: "(a) a configuration of social roles, agent positions, and the structures they fit into; and (b) the historical process in which those positions are actually taken up, occupied by actors (individual or collective)" (Hanks 2005a:72). For example, the DeafBlind community was built around a local institution called the Seattle Lighthouse for the Blind, which is a manufacturing company with a social service mission. The Seattle Lighthouse and other organizations were once "sheltered workshops for the blind," established to provide work alternatives to blind adults who could not find employment.
In the early 1970s, the scope of these organizations was broadened to include people with disabilities other than blindness (Koestler 1976:229). Shortly thereafter, DeafBlind people from across the country started to relocate to Seattle to take advantage of new employment opportunities. In addition to the provision of jobs, the Lighthouse also addressed the medical, personal, and housing needs of its DeafBlind employees. In order to receive these services, DeafBlind people had to learn to inhabit the social roles given by the history of the field, such as the "expedient blind person," the "true believer," and the "professional blind person" (Scott 1969:86-7). The expedient blind person tries to perform the role expected of him when sighted people are present, but takes this activity to be a performance that can be abandoned. The true believer is a blind person who actually experiences the emotions that the experts demand (ibid.:87). They express sincere gratitude to the organization and they genuinely believe that they would not be able to live without it (ibid.). The professional blind person lives in a network of blind organizations and agencies, and has very little contact with anyone outside of it (ibid.). The professional is often employed by a blindness organization that views their employment as an act of goodwill or charity. These roles are endemic to the field of "blindness," and in order to take up a position in that field (thereby obtaining resources), DeafBlind people have to learn to inhabit them (chapter 3).
However, a social field is not just a place where people obtain or provide resources such as employment, education, or social services. Within any social field, values, such as prestige and authority, also circulate, and accrual of these values motivates the strategic action of agents (Bourdieu 1980:112-134, Hanks 2005a:73). Each field has a distinct history and a distinct set of circulating values. For example, in the social field of blindness, employment is often offered in exchange for "dignity" (Koestler 1976), and monetary gain is a secondary consideration (16). The historical processes that exclude and include values in a particular field constrain possibilities for action within that field. Each time a DeafBlind person performs work duties or receives social services in meetings, assessments, interviews, and trainings, they encounter these constraints, and over time, are shaped by them. As Hanks points out, this is where "habitus and field articulate: Social positions give rise to embodied dispositions. To sustain engagement in a field is to be shaped, at least potentially, by the positions one occupies" (Hanks 2005a:73).
Language-use, in a practice framework, is a means of position-taking in the social field. Legitimacy accrues to particular styles and genres of language use and not others, so that access to power is restricted by the way you speak (Hanks 2005b). Prior to the pro-tactile movement, visual modes of communication were a legitimate means of taking up valued social roles, and tactile modes of communication were not. Then, in 2007, a DeafBlind person who communicated exclusively via tactile reception was hired as the director of a non-profit organization in Seattle. This catalyzed a reconfiguration of institutionally embedded social roles and the values circulating among them. As part of this, tactile modes of communication were legitimized, and communication practices were radically reorganized. While these changes were motivated by struggle and competition in the social field, they also affected the embodied dynamics of interaction among DeafBlind people. Recall that the habitus operates at the level of motoric habituation and affects the body schema of the social actor. This includes perceptual and cognitive schemes used to orient to the immediate environment. These same orientation schemes play a central role in the organization of the deictic field. Therefore, changes in the habitus can affect the way acts of referring are accomplished.
The Deictic Field
The deictic field is organized by the kinds of access that participants have to objects of reference. From the perspective of the individual, access is structured by schemes and patterns of various kinds: perceptual schemes, routine routes through familiar spaces, intuitions one develops for how a city, a village, a store, or a parking lot might be organized, etc. These schemes extend out around the language-user like an orienting grid. When a deictic sign is used, both the signer and the addressee must retrieve values from the deictic field. This requires a reciprocity of perspectives. In other words, participants must be able to take for granted a certain degree of similarity between their perspective and that of their interlocutor. Schutz explains that in a reciprocal configuration
I take it for granted--and assume my fellow man does the same--that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa) (Schutz 1970:183).
When a minimum threshold of reciprocity cannot be reached, participants do interactional work to converge on the object. In order to account for the structures that are present prior to activity, and those that are worked out in the course of an interaction, Hanks synthesizes Goffman's "situation" and Bühler's deictic field (Hanks 2005a:192). This yields a construct that can account for: (1) "the positions of communicative agents relative to the participant frameworks they occupy"; (2) "The position occupied by the object of reference"; and (3) "The multiple dimensions whereby agents have access to objects" (ibid.:193). These dimensions often include perceptual access, but they can also include shared knowledge, memory, imagination, or any other relation that allows signer and addressee to single out the referent against a horizon of potentiality. Therefore, while each individual comes to an interaction with orienting schemes of their own, the activity of referring requires those schemes to be coordinated in repeatable and expectable ways (17). Coordination of this kind is accomplished within participant frameworks, some of which are more conventional than others.

Prior to the pro-tactile movement, participant frameworks organized around visual access were maintained among DeafBlind people, despite the fact that those frameworks actually prevented them from establishing access to the object (see chapters 5 and 6). This was because tactile modes of communication were not a legitimate means of taking up valued positions in the social field. Once a person started compensating too obviously for vision loss, their social status was compromised. The reconfiguration of the social field opened up the possibility of establishing new participant frameworks, this time organized around tactile modes of access. This had consequences for the organization of the deictic field, and changes in the deictic field had consequences for the language.
When a deictic sign is applied in the speech situation, it retrieves values from two distinct fields: the deictic field and the symbolic field. All deictic signs are composite in this respect, composed of both "symbols" and "signals" (Bühler 2001 [1934]:99). Their symbolic meaning derives from oppositions in the language (here is not there; I am not you), which accounts for definiteness of reference. Their indexical meaning derives from the deictic field, which accounts for directivity of reference. Speaking deictically requires the coordination of values from each field in the unfolding of the utterance.
When language-users enact particular retrieval patterns repeatedly, those patterns can become more restricted. This is what I am calling “deictic integration.” For example, the emergence of a full-fledged signed language in Nicaragua has been associated with the emergence of “spatial modulations” which establish relations between the verb and its arguments, or else between the verb and its “referents” (Senghas 2000 [1999]:679). This ambiguity between arguments and referents is at the center of this case of language emergence.
A canonical example cited in the literature on Nicaraguan Sign Language (NSL) involves the signs SEE and PAY. As the language developed, signers came to move both signs consistently toward a single locus in the space in front of the signer in order to indicate that the same person was both seen and paid. In the first cohort of NSL signers, there was no consistent relationship between the direction in which the signs were produced and who was seen and paid. In the second cohort, movement was consistently represented from the character's perspective, as opposed to the signer's perspective, so that the directionality of the verb could be relied on to express whether or not the same person was both seen and paid. This is analyzed as a case of "co-reference" and also as a case of "agreement." The two signs co-refer to the locus by moving toward it in space, and in doing so, manifest agreement between both verbs and their shared "nominal argument." This is presented as evidence that Nicaraguan Sign Language has achieved full-fledged linguistic status, with the following conclusive remarks: "Signs produced in a common location now unambiguously indicated a common referent [ . . . ] At this point, the construction could be used to link a verb to its arguments [ . . . ]" (R.J. Senghas et al. 2005:301).
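The reported mechanism can be made concrete with a short sketch. The following Python fragment is a minimal illustration under my own assumed encoding; the names Locus, VerbToken, and corefer are assumptions for exposition, not constructs from the NSL literature:

```python
# A minimal sketch of shared-locus co-reference as reported for
# second-cohort NSL signers. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Locus:
    """A point in the signing space toward which a verb can be directed."""
    label: str  # e.g. "left" or "right"

@dataclass
class VerbToken:
    gloss: str          # e.g. "SEE" or "PAY"
    directed_at: Locus  # the locus targeted by the sign's movement

def corefer(v1: VerbToken, v2: VerbToken) -> bool:
    """Second-cohort pattern: verbs directed at the same locus share a referent."""
    return v1.directed_at == v2.directed_at

left = Locus("left")
see = VerbToken("SEE", left)
pay = VerbToken("PAY", left)
assert corefer(see, pay)  # same locus: the same person was both seen and paid
```

On this encoding, the first cohort's signing would correspond to a directed_at value that carries no stable information, so that corefer could not function as a reliable signal; the second cohort's consistency is what makes the relation recoverable.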
These findings raise the question of whether or not a verb can have referents, and whether or not this relation is linguistic. If the locus where the signs converge is in fact an argument of the verbs, how can it be specified phonologically and listed in the grammar? If the locus is an expression of a non-linguistic conceptualization of space, then what accounts for the relation between the two? Bootstrapping? Inference? Blending? Abstraction? Conventionalization? From a practice perspective, an ambiguity between referents and arguments poses no problem for the framework. Rather, it is a clear indication that a process of deictic integration is under way.
Deictic integration coordinates the linguistic system with the deictic field, leading to increasingly restricted retrieval patterns. In other words, these verbs were set to retrieve a wider range of deictic values for the first cohort of NSL signers than for the second. In the first cohort, the directional movement of the verb was free to respond (or not) to a wide range of deictic phenomena. In the second cohort, the verb developed receptors, set to receive more specific information from the deictic field, and expressed this information consistently from the character's perspective. "Perspective" is not a linguistic relation, but rather, a relation that accrues to the indexical ground of reference. Nevertheless, the way in which perspective is used to establish syntax-like relations is not a deictic phenomenon. This is where the deictic and symbolic fields converge and are coordinated into tighter and more restricted configurations by NSL signers as they "surpass their input" (Senghas and Coppola 2001:327). From this perspective, NSL does not emerge as a full-fledged language as it is cut away from context, but rather, as it is integrated with the deictic and social fields with which it articulates.
The Symbolic Field
The concept of the symbolic field, as it is employed by Bühler, is a very general one, which encompasses too much to be applied to the analysis of specific linguistic structures. I adopt it here as a way of filtering phenomena at the outset into two distinct categories: those that are amenable to linguistic analysis and those that are not. Phenomena that unfold in the deictic and social fields are not governed by linguistic principles of organization, while those that unfold in the symbolic field often are. For Bühler, the symbolic field is usually exemplified by syntax, but it also stands in for "grammar" more generally, understood as a system for establishing relations between representations of objects (2001 [1934]:28). He writes:
Language does not paint to the extent that would be possible with the resources of the human voice, but rather, symbolizes; naming words are symbols of objects. But just as the painter's colours require a painting surface, so too do language symbols require a surrounding field in which they can be arranged. We call this the symbolic field of language (2001 [1934]:171).
All naming words, or in Morris's terms, "characterizing signs," receive their field values from the symbolic field (Bühler 2001 [1934]:94, Morris 1971 [1938]:17-21). Characterizing signs denote and also analyze the objects they represent, highlighting certain aspects and not others (Morris 1971 [1938]:17-21). As characterizing signs are used, denotational and analytic patterns sediment and conventional form-meaning correspondences are established in the type language. This gives rise to a language-internal "semantic field," which, broadly speaking, is made up of "any structured set of terms that jointly subdivide a coherent space of meaning" (Hanks 2005a:192). The analyst knows that the semantic field is relevant when the use of different forms systematically invokes different aspects of setting (ibid.:200). When these units are inserted into the symbolic field, they assume particular values.
The phonological system does not receive its values from the symbolic field; however, it is necessary for distinguishing symbols from one another. Therefore, the symbolic field and the phonological system are interlocking mechanisms through which all representations must be filtered. Unlike Saussure, who drew a hard line between form and substance in delimiting langue,(18) Bühler argues that "there is neither material without form nor form without material" (2001 [1934]:291). A phoneme is an auditory mark on a word, which can be counted. However, it is not extractable. It is embedded in the sound-shape of the word, "which changes like a human face with the fluctuation of expression . . . " (ibid.:292). Therefore, from the perspective of the addressee, the phonological system is a system of "detectors" set to identify some marks in the sound stream and not others (ibid.:311). In this view, the figure-ground relation is central, and that relation is conditioned in part by modes of access to the sign-vehicle. Given this perspective, shifts in the deictic field, which affect modes of access, should echo into the phonological system of the language. This is precisely what has transpired in the case of TASL (chapters 8 and 9). This shift, in which the linguistic system is transformed as it is aligned with its contexts of use, is accounted for in the present framework with the concept of "embedding."
1.2.3 Embedding
Embedding is a process whereby semiotic elements are converted as they assume values in the symbolic, deictic, and social fields.(19) Where conventional practices emerge, relations between fields are tightened into increasingly restricted configurations, so that language is not "taken by surprise" when it encounters the world (Bühler 2001 [1934]:197). Rather, the linguistic system acts like a network of receptors, set to receive certain field-values and not others.
Bühler argues that any element inserted into a field must be "fieldable" (Bühler 2001 [1934]:211). His example is the following: "[t]he note symbol is not [capable of assuming a field value in the map field], it is not 'fieldable' there because it does not symbolize a geographical entity that could receive a local value" (ibid.:211). Therefore, if a musical note were inserted into a geographic map, it could only assume a non-musical value. For example, it might stand for a place where musical concerts are held, thereby undergoing a semiotic transformation.
The same is true of elements transferred from one language to another. A lexical sign removed from one language and inserted into another will be incapable of assuming a field value without undergoing the structural changes associated with borrowing in that language (Thomason 2011, Battison 1978). In addition, the resulting value will necessarily be distinct from the corresponding value in the donor language (Saussure 1972 [1915]). This process, through which elements assume field values as they are inserted into a particular field, is, in its broadest sense, "embedding" (see Hanks 2005a:194 for further discussion).
In recent work on language and practice theory, four principles of embedding have been proposed: practical equivalences, counterparts, rules of thumb (Hanks 2005b), and integration (Edwards 2012). Practical equivalences are correspondences between "modes of access that interactants have to objects" (Hanks 2005b:202). For example, in Yucatec Maya, there are two enclitics, a' and o', which, when combined with one of four bases, produce a proximal/distal distinction (ibid.:198-9). However, in practice, the o' form can be used to refer to denotata that are "off-scene" (ibid.:201). In order to use the "distal" deictic this way, a "practical equivalence" must be established between "off-scene" and "distal."
Counterparts establish relations of identity between objects (Hanks 2005b:202). For example, the proximal deictic can be used by a shaman to refer to a child who is off-scene if there is a visual trace of that child in his divining crystal. This is possible because the visual trace of the child is construed as the counterpart of the actual child (ibid.:201). The shaman is authorized to establish this relation by virtue of his social position, just as the radiologist's position authorizes him to interpret x-rays (ibid.). Therefore, counterparts establish relations between: (1) form-meaning correspondences (e.g. a/o = proximal/distal); (2) the deictic field, where access to the referent is established; and (3) the social field, where authorized speakers establish relations between form-meaning correspondences and the deictic field by using legitimate styles and genres of language use.
Rules of thumb guide speakers in responding to commonly occurring, or "stereotypical," situations (Hanks 2005b:206). For example, in Yucatec Maya, a stereotypical greeting includes a question-response sequence like the following (ibid.:206):
Speaker A: “Where ya goin?”
Speaker B: “Just over here.”
This exchange “tells A nothing about where B is going or how far away it is, only that he is heading there” (ibid.). Therefore, the proximal form, translated as “here,” is not associated with proximity, but rather, a routine situation. Each of these principles of embedding involves the instantiation and subsequent re-shaping of a form-meaning correspondence.
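Though Hanks's principles are descriptive rather than computational, the logic of a practical equivalence can be illustrated with a short sketch. The following Python fragment is my own illustration; the function resolve and the parameter referent_access are assumptions for exposition, not part of Hanks's account:

```python
# A minimal sketch of a practical equivalence: a stable form-meaning
# correspondence resolves to different values depending on the deictic field.

BASE_MEANING = {"a'": "proximal", "o'": "distal"}

def resolve(form: str, referent_access: str) -> str:
    """Return the value a deictic form takes in context.

    referent_access is a deictic-field value, e.g. "in-view" or "off-scene".
    The practical equivalence maps "off-scene" onto the "distal" form.
    """
    meaning = BASE_MEANING[form]
    if meaning == "distal" and referent_access == "off-scene":
        return "off-scene"  # the distal form retrieves an off-scene value
    return meaning

print(resolve("o'", "in-view"))    # -> distal
print(resolve("o'", "off-scene"))  # -> off-scene
```

The point of the sketch is that the form-meaning correspondence (o' = distal) remains fixed while the value it retrieves shifts with the deictic field.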
Unlike related concepts such as "contextualization"(20) (Gumperz 1992) and "keying"(21) (Goffman 1974:40-82), embedding draws attention not only to changes in meaning that emerge in interaction, but also to processes affecting the language which operate on historical and institutional scales. Practice theorists distinguish between interactional and social scales in order to establish principled relations between them. Giddens links historical and interactional scales via the "layering" of social structures (1978:65). This is similar to the notion of social embedding developed here. However, Giddens is concerned with social and interactional structures, while embedding draws attention to relations between social, interactional, and, crucially, linguistic structures.
Practical equivalences, counterparts, and rules of thumb all involve a shift or substitution in meaning with respect to a stable linguistic form. For example, when a “distal” deictic is used to refer to an off-scene denotatum in Yucatec Maya, the meaning is converted, but the form remains constant. In contrast, “integration” accounts for cases where both form and meaning are converted (Edwards 2012:61-3). In cognitive science, integration implies a partial projection of elements from two domains into a third, which manifests a structure that is not present in either of its inputs (Fauconnier and Turner 1998:133). The term is used here to describe the emergence of new linguistic forms, not present in the input. However, it focuses on the relations between social, deictic, and symbolic fields, which are not reducible to cognition.
I use the term “contextual integration” to account for effects of embedding in the deictic and social fields, which have consequences for both form and meaning. In both cases, effects can be momentary, or they can be more lasting. For example, if two sighted users of VASL are communicating across a football field, they will extend the space within which signs are conventionally produced to increase visual salience. As a result, “location” and “movement” parameters of the sign will change. This is an effect of embedding in a deictic field where participants momentarily have reduced visual access to signs. Insofar as communicating across football fields constitutes a marked interactional context, this change in production is not relevant to our understanding of the structure of VASL. If, on the other hand, limited visual access is a permanent circumstance among a group of language users, and if this circumstance leads to historical shifts in sensory orientation and social organization, then integration will have more lasting effects.
Particular modes of access are also made feasible (or not) by broader processes of authorization and legitimation; therefore, embedding in the social field can have consequences for the organization of the language. For example, if the use of a tactile channel thrusts the language user into a subordinated social position, a tactile language is less likely to emerge. Thus, while authorization and legitimation constrain position-taking, these processes can also restrict the feasibility of logically possible linguistic forms on social grounds. As new forms of authority accrued to DeafBlind social roles and the tactile modality was legitimized, a wider range of tactile linguistic forms became feasible for the language.
1.3 Modality: what does it mean to call a language tactile?
A practice approach to language places the question of "modality," as it has been understood in the sign language linguistics literature, within the broader frame of contextual integration. To say that a language is tactile is to say that it is seamlessly integrated with the social, deictic, and symbolic fields engaged by tactile people. Therefore, the emergence of TASL is coeval with the emergence of a tactile habitus and the social field with which it articulates, as well as a deictic field organized around tactile modes of access and orientation. Each field is a structured, semi-autonomous space, into which elements can be inserted, or on which they can be arranged.
Crucially, Bühler's fields are not related to the elements inserted into them as form is related to matter. It is not that you insert material elements into formal structures. Instead, there is a Gestalt--or a relation of figure to ground. Objects are represented indirectly via the juxtaposition of many interlocking "implements," which act as filters and intermediaries, each one introducing some arbitrariness of its own. As you move outward from the core mediating implements of a language, you arrive at the world, where you find what Bühler calls "differences in world view" (2001 [1934]:171), or what Schutz calls "differences in perspective." Ultimately, the diversity observed in linguistic systems is attributed to these differences. At the outer perimeter of the language is the deictic system, reaching on one side toward the grammar, and on the other, toward the deictic field. Through patterns of retrieval and integration, the language is aligned with the world as it is perceived by the users of that language, and those processes echo in arbitrary ways as they move from the perimeter to the core of the grammar.
The semiotic transformations currently under way in the Seattle DeafBlind community suggest a theory of modality like this, which begins in large-scale socio-historical processes, but penetrates through to core grammatical systems. From this perspective, the degree to which grammar can be abstracted away from its contexts of use appears overstated, and at the same time, distinctions between interlocking systems remain important; phonology is not approached as if it were syntax, and syntax is not approached as if it were deixis. A language and the world in which it is used form a gestalt, which foregrounds and backgrounds elements in interlocking systems and fields, like moods passing over a face. A tactile language, then, is a system of mediating implements, which is sensitive to, and shaped by, the social and physical world inhabited by tactile people.
1.4 Methods
In this dissertation, I draw on data collected during three field trips: two months in the summer of 2006, four months in the spring of 2008, and 12 months of dissertation research starting in the summer of 2010. In 2006, I conducted a set of 17 semi-formal interviews with 12 people; the interviews were videorecorded, analyzed, and transcribed. The average length of the interviews was 1.3 hours. Most of these interviews focused on the life histories of the people being interviewed, including their relationships to sighted interpreters and the kinds of strategies that were effective or not as vision was lost. These data were originally collected as part of the larger "National Support Service Provider Pilot Project," funded by the Department of Education, which resulted in a curriculum for training sighted interpreters and DeafBlind people who work with them (Nuccio and Smith 2010).
In 2008, I videorecorded 8 dyads composed of 1 DeafBlind person and 1 sighted person (either deaf or hearing) for 1.5-3.0 hours engaging in a variety of activities such as dog walking, grocery shopping, or attending an event. For those interactions where the subjects were walking, I walked in front of them and recorded them with a camera mounted on a harness and pointed backward over my shoulder. Fieldnotes were collected after recording sessions, and these notes form the basis for some of my ethnographic descriptions of practices prior to the pro-tactile movement. I also took fieldnotes after socializing and interacting with my friends and co-workers in a wide variety of contexts, conducted interviews with eight DeafBlind people in order to understand their perspectives on how interpreters who interpret visual information are useful to them, and conducted several interviews about life histories with people whom I had not had the opportunity to interview in 2006. In addition to interviews and videorecording, I made myself available in 2006 and 2008 as a sighted interpreter for activities such as people watching and socializing, with the understanding that I would write about those interactions in my fieldnotes. All of these data provide a useful point of comparison with the newer communicative practices that are the main focus of this dissertation.
While conducting my dissertation fieldwork, I collected approximately 160 hours of videorecordings of interaction and language use among DeafBlind people, which, for the most part, exclude sighted participants. Of these, 120 hours were recorded during the pro-tactile workshops. This corpus has been indexed, selectively transcribed, and thematically organized. This is possible in part because many of the recordings are distinct angles on the same interaction. Therefore, for a one-minute interaction, I might have to analyze three minutes of video footage to capture relevant elements from visible angles. The videorecordings from the workshops form most of the empirical basis for the interactional and linguistic analysis in this dissertation. In addition, I draw on detailed field notes, recorded in the following contexts: (1) approximately 14 hours of orientation and mobility trainings with two different DeafBlind people; (2) bi-weekly classes called "DeafBlind class," where news is exchanged and information is shared via interpreters; and (3) informal interaction at a range of DeafBlind events and community meetings about urgent matters, as well as notes taken after socializing with friends and acquaintances.
I have been involved in the Seattle DeafBlind community for over 14 years in a range of capacities. These experiences have made it possible for me to conduct this research and have shaped its course in many ways. I started socializing and volunteering as an interpreter in the community in 1997, as an undergraduate student. Over the next 5 years, I became increasingly involved in the community--as an interpreter, an employee, a roommate, a friend, a fellow board member, and in many other capacities--until I left for graduate school. Since then, I have returned regularly to visit and to conduct research. While the pro-tactile workshops are at the center of my dissertation research, all of these experiences, and the people who I have been closest to throughout, have shaped my understanding of the phenomenon.
1.5 Overview of the Dissertation
This dissertation begins, in chapter 2, by placing the practice approach to language emergence in a comparative frame. I analyze three cases of language emergence. In each case, I ask: (1) what counts as language-like, and (2) how relations between language-like phenomena and context are treated conceptually. The first case I examine is the emergence of language-like gestural systems, or "homesign systems," among deaf children who are not exposed to a visible language (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). The second is the emergence of a national sign language in Nicaragua (A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). The third is the emergence of a new signed language in a Bedouin community in the Negev desert (Sandler et al. 2005, 2011, Forthcoming). I chose these three cases because they have been foundational in establishing language emergence as a field of inquiry. They have well-established bodies of literature associated with them, and they present a coherent theoretical ground from which to proceed.
I argue, in each case, that a process of deictic integration is recoverable, and I propose that this process is central to processes of language emergence more broadly. I also argue that in order for a full-fledged language to emerge (as opposed to a language-like gestural system), the semiotic system must become a legitimate means of position-taking in an internally asymmetric social field. In other words, leaders within the community must accrue the authority necessary to introduce evaluative frames for communication practices and language-use. This is the final phase in the integration of symbolic, deictic, and social fields. I call this overarching process contextual integration.
In order to show how contextual integration affected the grammar of TASL, I begin with the reconfiguration of the social field. In chapter 3, I examine the history of two institutions that were foundational in the development of the Seattle DeafBlind community. I show how these institutions gave rise to a limited set of social roles, which were organized around a core opposition between "sighted" and "blind." Greater forms of authority accrued to sighted roles, and legitimacy accrued to visual communication modalities. Therefore, in an attempt to occupy more valued social positions, DeafBlind people continued to use visual communication practices long after they were no longer effective. In chapter 4, I show how social roles were reconfigured by DeafBlind leaders as part of the pro-tactile movement, and how this led to the legitimation of tactile modes of knowledge production and interaction. From there, structures of interaction were reconfigured along tactile lines (chapters 5 and 6), which gave rise to new linguistic mechanisms for referring to the immediate environment and tracking referents across a stream of discourse (chapter 7), new rules for the formation of lexical signs (chapter 8), and a new system for generating semiotically complex signs, which incorporate both linguistic and deictic elements (chapter 9). I conclude in the final chapter with a brief reflection on the role of contextual integration in processes of language emergence--not only in the case of TASL, but in other cases as well.
Chapter 2
Establishing a Comparative Frame: contextual integration in three cases of language emergence
In this chapter, I examine three cases where the transmission of language from one generation to the next has been disrupted, novel communication practices have grown up in the absence of a viable alternative, and new language-like systems have emerged. The first case I examine is the emergence of language-like gestural systems among deaf children who are not exposed to a perceptible language (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). The second is the emergence of a national sign language in Nicaragua (A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). The third case is the emergence of a new signed language in a Bedouin community in the Negev desert (Sandler et al. 2005, 2011, Forthcoming). There are other cases of emergent signed languages or language-like systems (e.g. Nonaka 2007, Nyst 2007, Groce 1985, Kuschel 1973, Washabaugh 1986). However, these three cases have been foundational in establishing language emergence as a field of inquiry. They have well-established bodies of literature associated with them, and they present a coherent theoretical ground from which to proceed. All of this makes these three cases a productive starting place for complementary approaches(1).
In each case, disruption is the result of sensory difference: a single deaf individual being raised and educated in a hearing context (Goldin-Meadow and Feldman 1977; Goldin-Meadow and Mylander 1983; Goldin-Meadow and Morford 1985), a group of deaf people in a common educational setting, set apart from the broader, hearing society (Kegl et al. 2001, Senghas and Coppola 2001), or a small, tight-knit community with a high incidence of deafness, where sign language is in widespread use among both deaf and hearing people (Sandler et al. 2005, Kisch 2012).
The systems that have emerged out of these contexts are signed languages and gestural communication systems with language-like properties. This literature has been overwhelmingly focused on the innate capacities of the human mind(2) (e.g. Goldin-Meadow and Feldman 1977; Kegl et al. 2001, Senghas and Coppola 2001; Sandler et al. 2005, Newport 2001 [1999]). In order to determine what role innate capacities play in the creation of new languages, context must be factored out to the greatest degree possible. This requires either implicit or explicit treatment of the relationships between capacity, language, and context. In what follows, I track (1) what counts as language-like in the phenomena under study, and (2) how relations between this language-like object and phenomena outside it are treated conceptually.
I argue that in all three cases, relations between deictic and linguistic phenomena can be recovered, and that in each case the emergence of a language-like system corresponds with a tightening of those relations. This process, which I call deictic integration, yields signs which, in addition to being incorporated into a more tightly organized language-internal system, are also capable of characterizing and localizing referents. In addition, where full-fledged languages emerge, a social field, composed of oppositional and asymmetrical social positions, also emerges. In what follows, I bring together the ethnographic and linguistic research in order to understand the relationship between these two types of phenomena. I argue that in order for a viable language to be realized, it must become a legitimate means of position-taking in a particular social field.
2.1 Homesign
When deaf children are not exposed to any visible language, they and their family members often develop a limited repertoire of gestural signals to communicate. These repertoires are known as "home sign" systems. The work on homesign that started to appear in the 1970s addressed a question that has drawn interest since at least the seventh century B.C.: can a person who is not exposed to a conventional language develop a language-like communication system on their own? Prior to the early work on homesign, this question had been posed in various ways, but it had never been systematically studied by examining empirical evidence (Aronoff et al. 2004, Feldman et al. 1978). One of the first stories aimed at exploring this question was told by the Ancient Greek historian Herodotus. He said that the Egyptian King Psammetichos, or "Psamtik," wanted to know who the first peoples of the world were, so he gave a pair of newborn twins to a shepherd, sent them to a deserted island, and told the shepherd not to talk to them. Years passed, and then one day one of the twins spontaneously produced the Phrygian word for bread ('bekos'). Based on this evidence, Psamtik concluded that the Phrygians were the first people(3) (Crystal 1987:288). Psamtik was not alone in his curiosity. The experiment was repeated by the Holy Roman Emperor Frederick II (1194-1250), and James IV of Scotland (1473-1513) was similarly compelled. In the latter case, the "shepherd" was reportedly a "deaf and dumb woman," guaranteeing, he thought, that the neonates would not be exposed to any language at all (Danesi 1993:5-6).
In the modern context, this sort of scenario appeared relevant in new ways as the field of linguistics turned toward generative grammar and the innate capacities of the human mind. The degree to which the stimulus is impoverished may be difficult to determine in ordinary life, but it is less difficult to determine among neonates on an uninhabited island, or in situations where deaf children are denied access to visible language. However, Psamtik's modern successors were not going to be satisfied by the production of a single word, as he was. They were looking for a wider range of formal properties and communicative functions associated with language. In addition, the range of social and interactional phenomena that they had to sort through to find these properties was far more complex than that found on a sparsely inhabited island.
2.1.1 Homesigners in Philadelphia and Chicago
Although the sign language linguistics literature has focused on native users of American Sign Language, such users are a minority of d/Deaf people in the United States. Most deaf children are raised by hearing parents, and a certain subset of these parents opt for an oral education for their children. For children who cannot hear the range of sounds used to produce spoken language, oral education is not effective (Lane et al. 1996, Mayberry 1992). This is apparent in the studies conducted by Goldin-Meadow and colleagues (e.g. 1977, 1983).
The children in the early studies lacked exposure to a language, but they participated in the daily lives of their families and they did not have any cognitive impairments (Goldin-Meadow and Feldman 1977:401). The six children included in the 1977 study ranged in age from 17 to 49 months(4). They were enrolled in oral education programs, their parents used only spoken language with them at home, and they had not acquired any usable spoken language (1977:401). They had not been exposed to a conventional sign language either. However, they did communicate gesturally with their caregivers and with the experimenters. Researchers videotaped 1-2 hour sessions in which one child interacted in their home with their primary caregiver (in all cases, the mother) and one or more members of the research team. Subsequently, gestures were individuated in the "stream of motor behavior" on the basis of physical criteria, and broken down into units comparable to words as well as strings of gestures comparable to phrases (ibid.). There was a high level of agreement between coders on the sign and phrase boundaries that were assigned.
Following Bloom’s method of “rich interpretation” (1970), referents were assigned to isolated signs. When signs were incorporated into phrases, they were assigned semantic elements, cases, and predicates, following Fillmore’s “case descriptions” (1968). Again, coders agreed in most instances on the referents and semantic elements that were assigned. Their findings, based on these categories, are summed up as follows (Goldin-Meadow and Feldman 1977:401):
[E]ach of our deaf subjects developed a structured communication system that incorporates properties found in all child languages. They developed a lexicon of signs to refer to objects, people, and actions, and they combined signs into phrases that express semantic relations in an ordered way.
There were two types of signs identified in the lexicon: "deictic signs" and "characterizing signs." The deictic signs were mostly pointing gestures, which "allowed the child to make reference to any object or person in the present." The characterizing signs were gestures that resembled their referent in some way, "[f]or example, a closed fist bobbed in and out near the mouth referred to a banana or to the act of eating a banana" (1977:402).
Goldin-Meadow and Feldman looked for patterns in the way that these two types of signs were combined. They found that the children tended to produce phrases that included a patient, a recipient, and an act. They explain:
Some of the children tended to produce their signs for the patient, recipient, and act semantic elements in consistent positions of their two-sign phrases. Specifically [ . . . ] the children tended to produce phrases with patient-act, patient-recipient, and act-recipient orders [ . . . ]. Not all children showed ordering tendencies for all pairs of the three elements; but if the children showed any ordering tendencies at all, those tendencies were ordered in the same direction. We can describe the children's two-sign phrases with the following element-ordering rule:
Rule A: (choose any two, maintaining order)
Phrase → (patient) (act) (recipient)
Thus, it appears that some of the children expressed semantic relations in a systematic way, that is, by following a syntactic rule based on the semantic role of each of the sign units.
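Rule A amounts to a constraint on the order of semantic elements, which is easy to state procedurally. The following is a minimal sketch in Python, assuming a simple encoding of two-sign phrases as pairs of element labels; none of this reflects Goldin-Meadow and Feldman's actual coding procedures:

```python
# A minimal sketch of Rule A: a two-sign phrase is well ordered when its
# semantic elements appear as a subsequence of (patient, act, recipient).

RULE_A = ("patient", "act", "recipient")

def follows_rule_a(phrase: tuple) -> bool:
    """True if the two elements preserve the (patient)(act)(recipient) order."""
    first, second = (RULE_A.index(element) for element in phrase)
    return first < second

# The three orders reported for the children all satisfy the rule:
assert follows_rule_a(("patient", "act"))        # e.g. point-at-jar + twist
assert follows_rule_a(("patient", "recipient"))  # e.g. point-at-shoe + point-at-table
assert follows_rule_a(("act", "recipient"))
# A reversed order violates it:
assert not follows_rule_a(("recipient", "patient"))
```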
There are four examples given in the 1977 article where this pattern plays out (all taken from p. 402): (1) a child points at a shoe and then points at a table; (2) a child points at a jar and then produces a twisting motion in the air; (3) a child opens his hand, palm facing upward, and then points to his chest; and (4) David points at a picture of a shovel, points down, produces a digging motion in the air with two fists, and points down again. Each of these examples is taken up in detail below.
They conclude, based on the ordering of semantic elements in examples like these, that "a child can develop a structured communication system in a manual mode without the benefit of an explicit, conventional language model," and they emphasize that "[t]his achievement is cast into bold relief by comparison with the meager linguistic achievements of chimpanzees" (ibid.:403).
2.1.2 What Counts as Language-like in Homesign
Goldin-Meadow and colleagues are at pains to show that these regularities can be attributed to the innate structures of the mind that allow children to acquire language. In order to do this effectively, there are two requirements. First, it must be demonstrated that the regularities are not invented by the caregivers and then taught to the children. They convincingly demonstrate that the gestures produced by the caregivers are not ordered at all (1977, 1983)(5). This confirms the poverty of the stimulus. The second requirement, for isolating the relationship between semiotic regularities and the innate capacities of the child, is to have some idea of which aspects of semiosis are relevant to those capacities. This requires a model of the innate structures of the mind, and this model is taken from Fillmore (1968).
2.1.3 A Model for the Innate Structures of the Language-Ready Mind
At the time of Goldin-Meadow’s early work, syntax and semantics were being reunited in generative grammar (Harman 1982:xv-xvi). A key figure in the reunification was Charles Fillmore. It is not surprising, then, that the analytic categories used by Goldin-Meadow and her colleagues were shaped by Fillmore’s “case grammar,” as it appeared in The Case for Case (1968). In this work, Fillmore engages two main tenets of generative grammar: (1) the centrality of syntax, and (2) the importance of covert categories(6).
Fillmore argues that the syntax of a language cannot be stripped of all associated semantic elements, and further, that semantic relations actually constitute an underlying structure, or "frame," that explains many syntactic constraints. The relations between the two he calls case relations, or simply case (1968:21). Case relations are covert, and in their totality, form "a universal system of deep-structure cases" (ibid.). Case forms, on the other hand, are the expression of case relations "through affixation, suppletion, use of clitic particles, or constraints on word order" in a particular language (ibid.). At one level of remove, these deep-structure cases are linguistic in nature, but Fillmore backs up further and sees them as consistent with a broader range of cognitive capacities, which are "identified" by the cases, just as the cases are identified by verbs and nouns. In Fillmore's words, "The case notions comprise a set of universal, presumably innate, concepts which identify certain types of judgments human beings are capable of making about the events that are going on around them, judgements about such matters as who did it, who it happened to, and what got changed" (ibid.:24). These broader cognitive capacities allow for the mental representation of events, actions, and the things that participate in them.
The analytic framework employed by Goldin-Meadow and colleagues implies an innate capacity for the acquisition of language that is structured like this. By extension, they argue that the linguistic achievements of their research subjects can be attributed to the child’s capacity to make judgements about who did it, who it happened to, and what got changed. They emphasize that it is the child and the child alone who is responsible for the creation of their language-like system. It is clear, in their data, that the linguistic stimulus is devoid of any meaningful order, which is not surprising, given that the caregivers are primarily using spoken language. However, the only factor outside of the child’s innate capacities that is explicitly ruled out is linguistic input. Other contextual factors play a pivotal role in the analysis, which is reflected in the terms of analysis as well as the examples.
2.1.4 Deictic, Characterizing, and Universal Signs
The two basic categories of signs out of which phrases are built by the homesigners are defined in terms of their relation to context. These terms align with those found in Morris (1971 [1938]), and his explanation of their significance is useful here. Morris defines semiosis as "the process in which something functions as a sign" (1971 [1938]:3). This process requires three things: a sign vehicle, or that which acts as a sign; a designatum, or that which the sign refers to; and an interpretant, or the effect of the sign on the interpreter. Among the types of sign vehicles Morris distinguishes are indexical (deictic) signs, which direct attention to their objects; characterizing signs, which analyze what they denote; and universal signs, which can denote anything whatsoever (ibid.:17-21).
These three types of signs--deictic, characterizing, and universal--map onto the distinction between pragmatics, semantics, and syntactics in Morris. Pragmatics is constituted in the relation between the interpretant and the sign vehicle. Semantics inheres in the relation between the sign vehicle and the designatum. Syntactics is constituted in the relations between sign vehicles and the categories to which they belong. No one dimension can be dissociated from the others. A language is irreducibly triadic.
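Morris's triad can be made concrete with a small data structure. The sketch below is my own encoding of the three relations, not code from any source; the names SignEvent, semantics, pragmatics, and syntactics are illustrative assumptions:

```python
# A minimal sketch of Morris's triadic sign relation: pragmatics, semantics,
# and syntactics as three relations over the same sign event.

from dataclasses import dataclass

@dataclass
class SignEvent:
    sign_vehicle: str  # that which acts as a sign, e.g. a pointing gesture
    designatum: str    # that which the sign refers to, e.g. "the shoe"
    interpretant: str  # the effect on the interpreter, e.g. a caregiver's response
    category: str      # the class of sign vehicle: "deictic", "characterizing", etc.

def semantics(s: SignEvent) -> tuple:
    """Semantics inheres in the sign vehicle / designatum relation."""
    return (s.sign_vehicle, s.designatum)

def pragmatics(s: SignEvent) -> tuple:
    """Pragmatics inheres in the sign vehicle / interpretant relation."""
    return (s.sign_vehicle, s.interpretant)

def syntactics(s: SignEvent) -> tuple:
    """Syntactics inheres in the relation between sign vehicles and their categories."""
    return (s.sign_vehicle, s.category)

shoe_point = SignEvent(
    sign_vehicle="point at shoe",
    designatum="the shoe",
    interpretant="mother puts the shoe on the table",
    category="deictic",
)
```

The triadic character of semiosis shows up here in the fact that all three functions operate on the same event: none of the relations can be stated without the sign vehicle, and no single relation exhausts the sign.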
2.1.5 Poverty of the Stimulus, Abundance of Stimuli
Although the linguistic stimulus is impoverished for the homesigners, non-linguistic stimuli are abundant.(7) Goldin-Meadow et al. do not focus on the role of these contextual elements and dynamics, and yet they play a crucial role in each example. Viewing the examples first through Fillmore's framework, and then juxtaposing this with an analysis from Morris's perspective, accounts fully for the examples and makes the interaction of capacity and context explicit.
Viewed through Fillmore's scheme, the correlates of signs that refer to, or characterize, actions are verbs, and those that refer to or characterize objects or entities are noun phrases (Fillmore 1968:24-5). Goldin-Meadow and colleagues are not working with verbs and noun phrases, but with combinations of pointing gestures and characterizing gestures. This poses no problem because in Fillmore's framework, the surface structure of the utterance is not important. The focus is instead on the relations that obtain between representations of referents (noun-like forms) and representations of actions and states (verb-like forms). Goldin-Meadow and Mylander "stress that [they] use linguistic terms such as sentence loosely and only to suggest that the deaf child's gesture strings share certain elemental properties with early sentences in child language" (1983:372). They never claim that these systems are linguistic systems, and are careful to distinguish language-like phenomena from language. It is perfectly clear that, through the use of Fillmore's terms, verb-like gestures are implicitly compared to verbs and noun-like gestures are compared to nouns (or noun phrases). Goldin-Meadow and Feldman decompose communicative events into elements and relations like this, arguing that when deprived of exposure to a conventional language, the minds of children act on the gestural resources available to them in ways that the mind of any child capable of acquiring language would, yielding a language like any other.
In their first example, a child points at a shoe and then points at a table. In Fillmore’s scheme, we would start with the requested action: Please put the shoe on the table. The first pointing gesture stands in for a noun phrase that refers to the shoe. In relation to the action (verb-like element), this pointing gesture can be interpreted as the expression of the covert semantic element: patient. The second pointing gesture stands in for a noun phrase that refers to the table and can be interpreted as the expression of the covert semantic element: recipient. In Morris’s scheme, the first pointing gesture (or sign vehicle) refers to an object (or designatum), as does the second pointing gesture. Recall that for Morris, semantics consists in the relation between the sign vehicle and the designatum, so a semantic relation is expressed by these elements in Morris, just as it is in Fillmore.
However, at this point, we have only accounted for the noun-like elements of the example. Notice that there is no overt manifestation of the verb-like element. This element is a product of the interpretation--that the two pointing gestures are a request to put the shoe on the table. If the mother responded to the pointing gesture (sign vehicle) by picking up the shoe and putting it on the table, this response would constitute the interpretant, or "the effect of the sign on the interpreter." Since the utterance itself does not demand this interpretation, it must have been inferred from a contextual scenario like the one I have just proposed, by the analyst, by the caregiver, or both. For Morris, neither the response of the caregiver nor the response of the analyst belongs to semantics. These responses belong to pragmatics, which inheres in the relations between interpretants and sign vehicles. Fillmore's model does not account for the communicative effects of sign vehicles, nor does it account for objects apart from their mental representations. Therefore, both frameworks are necessary in assigning "semantic roles" to the gestures that make up the gesture phrase. Without pragmatics, there is no representation of an action, and without a representation of action, there can be no case relations. Without case relations, there can be no evidence of the innate capacities of the mind.
In the second example, a child points to a jar and then produces a twisting motion in the air "to comment on mother's having twisted open the jar." In Fillmore's framework, we can say that the twisting motion stands in for a verb, which is a representation of the semantic element: act. In relation to this verb-like element, the pointing gesture, which stands in for the noun phrase, which represents the jar, takes on the semantic role: patient. However, the assignment of these semantic elements relies not only on the order of elements in the utterance, but also on the interpretation included in the second part of the example. Without combining a deictic sign, a characterizing sign, and the effect of these signs on the interpreter, it is difficult to know whether the sign is a request to open the jar, a comment on its existence, a comment on its characteristics, or something else entirely. It could be a request to be served a type of food which is normally stored in such a jar. Without the interpretation given in the example, semantic roles would have been more difficult to assign.
In the third example, a child opens his hand with his palm facing upward and then points to his chest. This is interpreted as a “request that an object be given to him.” In Fillmore’s framework, the upward facing palm stands in for a verb, which represents the semantic element: act. The deictic sign stands in for a noun phrase, which takes on the semantic role of patient with respect to the verb-like element. The question here is why a further interpretation is needed in order to make the example effective. Why must we know that this is interpreted as a request that an object be given to him? This interpretation seems to be, once again, an effect of the utterance (sign vehicles) and not attributable to relations between the gesture signs and their referents. Therefore, pragmatic and semantic elements are both necessary in establishing the parallels between homesign and language.
In the fourth example, David points at a picture of a shovel. He then points down. He then produces a digging motion in the air with two fists, and finally points down again. In this example, Goldin-Meadow and colleagues break away from the usual format of a formal description plus an interpretation, and the interpretation is instead incorporated throughout. I have reproduced Goldin-Meadow's example below, but the formal description is in regular text and the interpretation of the forms is in italics:
David pointed at a picture of a shovel, pointed downstairs where a shovel was stored, produced a digging motion in the air with two fists, and finally pointed downstairs a second time. David had commented in one phrase on two aspects of the shovel, the act usually performed on the shovel and the habitual location of the shovel.
Semantic roles and relations are not assigned specifically here, but this is used as an example of "longer phrases that [express] at least two semantic relations" (1977:402). The two relations that they mention must be (1) between the pointing gesture and the habitual location of the shovel, and (2) between the digging gesture and the act usually performed with the shovel. Relation (1) breaks entirely with Fillmore's framework, since that framework has no interest in accounting for the ability of people to identify the actual locations of objects in the world. Knowledge about where people usually keep their shovel has even less of a place in his framework. Relation (2) requires the same kind of pragmatic inference that the first three examples (above) required in order to generate an "act" which could stand in for a verb, which could represent a semantic element.
Semantic and pragmatic factors contribute to the emergence of language-like gestural systems among the homesigners that Goldin-Meadow and colleagues studied. By making their terms of analysis explicit, I have shown the necessity of pragmatics, or the "effect of the sign vehicle on the interpreter," in the assignment of semantic elements. Since the innate structures of the mind are modeled as relations between these elements, I have also returned to Fillmore's insistence that despite a consistent and semi-arbitrary ordering of semantic elements, those elements cannot be extracted from semantic and pragmatic aspects of the communicative event. In order to attribute the achievement of consistent ordering of elements to the innate capacities of the human mind alone, these additional contextual factors would have to be factored out as well. In fact, only the possibility of linguistic input was ruled out.
2.1.6 Deictic Integration in Homesign
In a broader frame, the innate capacities of the child’s mind appear to interact not only with gestural input, but with a range of semiotic processes. Some of these processes are identified in Lois Bloom’s method of “rich interpretation,” drawn on by Goldin-Meadow and colleagues in establishing their categories. Bloom explains the rationale for this method as follows:
It has often been observed that what young children say is usually related directly to what they do and see. Brown and Bellugi (1964, p. 135) took notice of the fact that children speak 'very much in the here and now.' Leopold (1949, Vol. III, p. 31) made extensive use of the 'aid of the situation' in inferring the intended meanings of utterances. Although some utterances may be equivocal or otherwise not interpretable, it is generally not difficult to judge the relationship between what a child says and what he is talking about. [ . . . ] Moreover, overt behavior and features of context and situation signal the meanings of what children say in a way that is not true for what adults say. [ . . . ] If an adult or an older child mounts a bicycle, there is no need for him to inform anyone who has seen him do it that he has done it. But a young child who mounts a tricycle will often 'announce' the fact: 'I ride trike!' What young children say usually relates directly to what they do and see, and what they do and see can also be seen and evaluated by a listener-observer in the environment.
For the purpose of this study, evaluation of the children's language began with the basic assumption that it was possible to reach the semantics of children's sentences by considering the nonlinguistic information from context and behavior in relation to linguistic performance. This is not to say that the inherent 'meaning' or the child's actual semantic intent was obtainable for any given utterance. [ . . . ] The only claim that could be made was that evaluation of an utterance in relation to the context in which it occurred provided more information for analyzing intrinsic structure than would a simple distributional analysis of the recorded corpus (Bloom 1970:9-10).
It is clear from this that the method of rich interpretation used by Goldin-Meadow takes the inextricability of semantics and pragmatics for granted. However, the implicit entanglement of these orders of phenomena does not come through clearly in the conclusions that are drawn from the research, such as the following:
In sum, it appears that neither communication pressure nor contingent approval shaped the deaf children’s sign orders or probabilities of sign production.
Our observations indicate that a child in a markedly atypical language learning environment can apparently develop communication with language-like properties without a tutor modeling or shaping the structural aspects of the communication. These results suggest that the child has a strong bias to communicate in language-like ways (1983:373).
It is clear from the data that both deictic and characterizing elements are necessary for the emergence of a language-like system. Furthermore, through routine use, those elements must be coordinated with patterns in everyday life, through which shared modes of access are established. In other words, the linguistic system and the indexical ground of reference must be coordinated into tighter and more restricted configurations such that a highly schematic pointing gesture can accrue a relatively specific meaning for the deaf children and their caretakers.
This process, or what I am calling deictic integration, does not disprove the finding that children have a bias to communicate in language-like ways, especially when compared with the lack of such biases in chimpanzees. However, understanding the nature of the bias, as well as the structures that undergird it, requires ruling out a wider range of social and semiotic processes, as well as an explicit theory of context. Social, interactional, and linguistic dimensions are all recoverable. However, the focus is on the relationship between the capacities of the mind and the language-like system. All other factors are viewed through constructs established for analyzing this relation. Without distinct analytics for distinct orders of phenomena, things can be located in the innate capacities of the mind that belong in the room, in memory, or in history.
2.2 Nicaraguan Sign Language
Language emergence in Nicaragua has also been framed as a case where the innate capacities of children to acquire language have played a central role. However, the interaction of capacity and context is made more explicit by the researchers and also by partial incorporation of independent socio-historical analyses (R.J. Senghas 2003, Polich 2005). Sociocultural analyses have focused on models of personhood available to deaf Nicaraguans, how these models have changed over time, and how they have been endured, occupied, or engaged by deaf people. They have also highlighted the international networks and circulations of discourse through which Deaf Nicaraguans began to see themselves as a language minority, and the way this shaped the development of their community (ibid.).
In the linguistic research, two aspects of this history have consistently been treated as relevant: (1) the year in which groups of children entered the school, and (2) the age of individual children at the time. These two factors have been isolated because they affect the capacities of the children to acquire language. In what follows, I trace additional links between the socio-cultural work that has been done and the linguistic research(8). I argue that in addition to previously emphasized social factors, one of the prerequisites to language emergence in Nicaragua was the legitimation of the signed language among deaf people as a means of taking up differentially valued social positions. In addition, I argue that conventional ways of accessing and referring to objects, people, and signs in the immediate environment, or "deictic patterns," had to crystallize. These patterns were then incorporated into the language as linguistic and deictic phenomena were drawn into tighter relation with one another.
2.2.1 Establishing a Social Field
Prior to 1946, children who were born deaf or lost their hearing early in childhood had very little contact with the outside world and no contact at all with other deaf children. There were no schools for deaf children (or children with other disabilities) and no way for them to acquire basic communication or living skills (Polich 2005:13-24). While some wealthy families sent their children to boarding schools in other countries, most kept their children at home in various states of isolation from the rest of society. Some families went so far as to physically restrain their deaf children to prevent them from "roaming" (ibid.:15). One girl was restricted to the fenced-in backyard of her relatives' home after her mother died, where she reportedly slept, filthy, on a pile of cardboard in the corner (ibid.:17). Some were so secluded that members of their extended family did not know they existed until after they had passed away (ibid.:16). The families of the children did not expose them to signed language, and the children could not hear spoken Spanish; therefore, they did not acquire any language. Deaf children and their families developed home-sign systems; however, these were often restricted to a small range of communicative situations (ibid.:13-23). A volunteer from a local deaf association described the home sign system used in one family as "a language of orders where they tell him, for example, go get that, go clean that, go take a bath, go to the store and get some coffee. Sure it's communication, but [the deaf child] doesn't get much out of it" (ibid.:14).
In 1946, the first special education school was established in Managua (Polich 2005:24). According to Polich, this coincided with an important transition in which deaf people went from being seen as “eternal children” incapable of becoming productive adults to being seen as “potentially remediable subjects” (ibid.). While they were previously given up on, isolated in the family’s back yard, or kept secluded inside the house, now they were treated as disabled children who, with enough specialized training, might learn to act like hearing people. The first special education school had 20 pupils, half of whom were deaf. They used oral education methods (ibid.:28-9).
By 1974, four schools in Managua were involved in the education of deaf children. Those who lived elsewhere were either not educated, or they had to relocate to the capital. In 1975, oral methods began to be challenged by the new “total communication” fad, which was passed from the United States to Nicaragua via networks of educators and doctors in Costa Rica, including a representative from Gallaudet (Polich 2005:45-6). A series of workshops was held in Costa Rica that included information about the linguistic structure of signed languages and different signing methods for the education of deaf children. Some teachers from Nicaragua attended these workshops (ibid.:47). Total communication never became the official method used in Nicaraguan schools, but according to Polich, attitudes about signed languages changed significantly between 1976 and 1980, and so did communication practices.
One interviewee left Nicaragua in 1974 and went to Spain, where he learned the signed language in use among Deaf people. When he returned in 1980, “he was pleasantly surprised to find more signs in use in Nicaragua, but, communication was still different than what he was used to in Spain because individual signs were chained together and getting one’s meaning across was still more awkward than it was in Spain” (Polich 2005:49). Polich reports him saying that “communication at this time was still a combination of everything: signs, gestures, oral words, written words, acting out--whatever worked. He said that in 1980, he still did not see a sign language, such as he knew existed among the students at the school for the deaf in Spain” (ibid.:50). By 1984, communication was decidedly more fluid, and complex meanings could be more easily conveyed (ibid.). However, it was still described by those who had had contact with fully developed signed languages as a mix of different home-sign systems (ibid.:52).
In 1979, the Sandinista Revolution triggered many changes that affected the education of the deaf (Polich 2005:53). Special education was broadened to include a wider range of students in many geographic locations around the country. In addition, the curriculum was standardized. By 1981, there were twenty-four special education schools (ibid.:53). In Managua, the National Center for Special Education (CNEE) was a major center for deaf education as well as the education of students with cognitive and physical disabilities. In the 1980s, a curriculum was adopted at the CNEE that forbade the use of signs and gesture. Students were encouraged to sit on their hands or hold objects while they talked and were only encouraged to use their hands for fingerspelling (ibid.:59-60). However, just as in other oral schools, deaf children did sign with one another outside of the classroom. A teacher who had worked at the school in the early 1980s was interviewed by Polich and reported the following:
We made sure that in the classroom, we taught the classes orally; but the kids outside were using signs among themselves. During recess at the snack bar, everywhere. Some of us used our hands, too, to communicate with the kids, but only in private or where no one could see. In the classroom it was us emphasizing the oral and the fingerspelling, but outside, it was another matter.
However, Polich says, we have no way of knowing whether the signing that was happening was language-like, or whether it was a mix of home-signing, gesturing, and pantomime. She writes:
No one recorded it, and no one capable of categorizing it was there watching. Still, the reports from the few teachers who began to imitate the children and learn their communication systems, and from the children themselves, when they remember back as adults, is that at this point, it was, at most a very rudimentary language system (ibid.:64).
In the mid-1980s, the coordinator of deaf education, who had established and enforced oral methods (with some fingerspelling) left her position, and slowly, teaching methods became more flexible (Polich 2005:72). Meanwhile, the Sandinista government was encouraging the formation of grassroots organizations and some hearing advocates and educators of deaf children saw this as an opening for deaf people to improve education and employment opportunities (ibid.:80-1). A group was established called the Association to Help Integrate the Deaf, which was abbreviated APRIAS (ibid.:83). APRIAS came to function not only as an advocacy group but also as a social forum outside of the classroom (ibid.). Prior to the 1980s, there was not a lot of socialization or interaction among deaf people outside of the schools. Still, in the early days, most of the people in positions of authority were either hearing or they were deaf people who could speak (ibid.:84). It was the beginning of a deaf social world, but sociality did not revolve around sign language the way it would later.
As time went on, sign language became more and more important to the members, and many of the older deaf people who missed their opportunity to learn sign language in school said that they learned sign language primarily at the APRIAS meetings. These meetings also served as an important venue for the standardization of the sign language that was developing. According to one of the people Polich interviewed, the meetings were difficult in the beginning “because there was no common sign language, and it was hard to understand each other.” “But,” he said, “little by little, we learned” (Polich 2005:90).
In the mid-1980s, APRIAS also started having weekend “rescue” workshops, where Nicaraguan Sign Language signs were sketched by hand and compiled into rudimentary dictionaries that would later be distributed (Polich 2005:89). In retrospect, many of the participants in the workshops and meetings characterized the “language” as combinations of gesture and fingerspelling, which were slowly taking on language-like properties (ibid.). This group of signers, who were being educated in schools with other deaf children and also eventually taking part in the APRIAS meetings and other social events, formed a cohort. Within the cohort, there were certain key figures who took on leadership roles and “taught” the new language to others, even as it was forming (ibid.:91). Polich considers this at some length, since it seems paradoxical to her that a person could be “teaching” a system that is not yet formed. About one of these key figures, she writes:
Javier is, thus, a key figure in the first group to use a standardized sign language as their major mode of communication. How he managed to learn the language first while simultaneously teaching it to the others is difficult to explain. Perhaps taught means that he was more enthusiastic about signing, used it more consistently, was patient about teaching what he knew to those less fluent, and took on the role of ‘language police,’ demanding that others conform to what was considered the ‘correct’ version of signs . . . I observed regular instances in which confusion over the ‘correct’ version of a sign was referred to Javier for arbitration. His decisions were accepted with no dispute. Javier, in a sense, is identified as the ‘apostle’ of NSL by older deaf adults. I had many informants tell me that Javier was the first to learn the language (how they don’t know) and that he transmitted it to the rest of the deaf community, including themselves. (ibid.:91)
This suggests that there was a differentiated social field forming among deaf people at the time, which was an important precursor to language emergence. The possibility of using language in “correct” and “incorrect” ways and the emergence of experts within the group meant that the language, even as it was forming, was viewed by deaf people as a legitimate means of occupying more valued social roles within their own community. This shift was institutionalized when, in the late 1980s, the officers of APRIAS were replaced by deaf people who were more “pro-sign language,” and the name of the organization was changed to the National Nicaraguan Association of the Deaf, abbreviated “ANSNIC” (ibid.:97). Rather than a focus on the “integration” of individual deaf people into hearing society, they saw membership in the deaf community as the most effective way to exercise agency (ibid.). Polich explains:
By becoming members of the deaf association, deaf people are, de facto, integrated into a society, and they exercise their social agency, albeit as a subgroup in which their NSL is the major unifying factor. Because this mini-society retains ties through interpreters with the larger oral/Spanish-dependent society, members are, in a sense, integrated into the larger society by being situated in the smaller group. There is no need, and in fact, no wish to disperse the members individually to integrate into the larger society to function in a hearing manner. (ibid.:97).
Polich is focused here on the relationship between deaf people and the larger hearing society. She argues that this model views deaf persons as “social agents” rather than remediable subjects who can learn to be hearing given enough specialized training(9). However, she notes that this second wave of deaf people, who ran and took part in the Deaf association, had had a different set of experiences with sign language and were also exposed to very different ideas about its value and utility. While this new perspective originated outside of the deaf community in broader historical transformations, its effects within the community crystalized around this time to yield a significant contrast between “pro-sign language” people and the group opposed to them.
In 1992, sign language was officially permitted in deaf classrooms for the purpose of instruction (ibid.:72). Around this same time, sign language “became less an adjunct to oral speech” and slowly developed into the dominant mode of communication among deaf people (ibid.:96). In addition, politically charged efforts to document and standardize the language intensified, and in 1997, a dictionary of Nicaraguan Sign Language was published (ibid.:97). This kind of legitimation and subsequent standardization can only be accomplished given an internally differentiated social field, where deaf people view sign language as a means of taking up more powerful social positions. Once a full-fledged language emerged, these dynamics crystalized further, so that deaf people who could not use the language fluently were called NO-SABES or “know-nothings” and they were restricted to a limited set of social roles in institutional settings (R.J. Senghas 2003:270). One of the consequences of institutionalization has been the adoption of organizational paradigms with built-in asymmetries:
It is by and through the national Nicaraguan government that ANSNIC has its legal status as a recognized organization. ANSNIC must therefore follow the government’s guidelines that assume certain paradigms of organization. These include concepts of voting, accountability, and tax-exempt status. ANSNIC has adopted certain structures, roles, and offices, and these certainly have social implications within the Deaf community. As one example, the layout of the ANSNIC facilities and the differential access to these facilities ... makes certain individuals more influential...
These two observations together suggest that differential access to the social field aligns with local criteria for language competence, such that “better” signers are more likely to accrue authority. The establishment of an internally asymmetric field, in which some deaf people had more authority than others, was a prerequisite to the legitimation of the language. In the linguistic literature, there is a focus on the year in which groups of children entered the school and the age of individual children at the time. In addition to these factors, the establishment of an internally asymmetric social field and the legitimation of the semiotic system for position-taking in that field appear to be crucial conditions for language emergence.
2.2.2 Three Semiotic Systems: ISN, LSN and Mimicas
Three distinct modes of semiosis emerged out of this history. From a socio-historical perspective, many factors are relevant. From the perspective of those interested in the innate capacities of the mind, only those factors that enable or constrain the ability of children to acquire a first language are relevant. Kegl et al. identify three distinct “cohorts,” each of which developed semiotic systems that were distinct from the others in fundamental ways (2001:187). Membership in a cohort is defined by two main factors: (1) the age at which the individual entered school and started interacting with other deaf people, and (2) the year in which they entered the school (ibid.). The students who entered the school at a younger age tended to acquire (or develop) more complex grammatical structures than those who entered the school later in life. This was due in part to the fact that a richer linguistic environment was available to students who entered the school in later years, since collective communication practices had had time to develop, and in part to the fact that younger children acquire language more quickly and more completely than older children (ibid.:197).
Three Spanish terms were appropriated by researchers and applied to the semiotic systems available to each cohort. All three terms, lengua, lenguaje, and idioma, can be translated into English as “language,” but in Spanish they have distinct meanings. A lenguaje can be any type of communication system, while an idioma is, more specifically, an official, national language (ibid.:181). The word lengua is a general term that can include lenguaje and idioma (ibid.). Kegl et al. distinguished between Lenguaje de Senas Nicaraguense (LSN) and Idioma de Senas Nicaraguense (ISN). The former, they argue, is a “peer-group pidgin or jargon between signers,” while the latter is a “full-blown sign language” (ibid.:181). Both of these systems are distinct from the idiosyncratic home-sign systems that individual deaf children develop within their families, which are called “mimicas” by Spanish speakers(10). At the time the research was conducted, there were no metalinguistic signs used by deaf Nicaraguans that mapped onto this set of terms; however, Polich’s interview data suggest retrospective metalinguistic awareness among some deaf people, and their reflections do not contradict these categories.
Several grammatical characteristics were examined across these three cohorts of signers in order to reconstruct the process of language emergence. Of all of these characteristics, “spatial modulations,” or a tendency for verbs to encode information by moving between points in space, became more central to arguments about language emergence than any other. This became the characteristic that was used as evidence for the linguistic status of ISN. The linguistic status of spatial modulations has been at the center of one of the most productive debates in the field of sign language linguistics more broadly. In order to understand what counts as language-like in the Nicaraguan case of language emergence, key moments in this debate are outlined in the following section.
(10) They also note a fourth “system,” which is a “pidgin” used between hearing and deaf signers--where “signers view themselves as speaking Spanish, and Spanish speakers view themselves as signing or using Mimicas” (ibid.:182). This phenomenon is recognizable given familiarity with the American Deaf community and is very interesting, but I take it to be on another level of communicative complexity in the sense that it combines the more basic systems. Therefore, I bracket discussions of it in my summary of this research.
2.2.3 What Counts as Language-Like in Nicaraguan Sign Language
The term “spatial modulations” is relatively neutral; it covers a range of phenomena that have been analyzed variously as linguistic, non-linguistic, or some combination of the two, depending on the theoretical approach taken and the subset of phenomena under investigation. The debate around spatial modulations in signed languages has been active since the inception of the field, and the issues it raises are central to the question of what counts as language-like in the emergence of ISN.
In early work on Visual American Sign Language, three classes of verbs were identified(11), which differ from one another according to the types of affixes(12) they take: “plain verbs,” “agreement verbs,” and “spatial verbs” (Padden 1990:119). Plain verbs are either uninflected or inflected for aspect (ibid.). An example of this kind of verb is the sign love (see Figure 2.1). In the sentences “I love you” and “you love me,” love is produced in the same way.

Figure 2.1: love in VASL

In contrast, “agreement verbs” inflect for person and number. An example is the sign give. For the sentence “I give you the book,” the sign begins near the signer’s body and ends near a point in space that is associated with the recipient of the book. If the receiver of the book were the signer, then the sign would move toward the signer’s body instead of away, as in Figure 2.2. Therefore, “the position of the beginning point of the sign varies depending on whether the person of the subject of the clause is 1person . . . or 2person . . . ” (Padden 1983:14). If there is more than one recipient, the sign will move from the body of the signer to a series of loci in space, thereby encoding number. So if “the number of the subject and object varies, the beginning and end points will likewise change in form” (ibid.).

Figure 2.2: you-give-me in VASL

Finally, “spatial verbs” do not inflect for person and number; however, they have locative “affixes”(13). One example of a spatial verb is the VASL sign put (see Figure 2.3). The handshape is specified, as is a movement, but the direction of the movement varies depending on the spatial relations involved in the represented act of putting. Therefore, spatial verbs are said to encode locative relations.

Figure 2.3: put in VASL

Another kind of verb that has sometimes been included in this class is known as a “verb of location and motion,” which is considered a kind of “classifier.” A classifier that represents the path through which a vehicle moves is an example of a spatial verb. The 3-handshape in Figure 2.4 (listed as “CL:3”) is associated with arguments of the verb that belong to the semantic category ‘vehicle’. The movement of the represented vehicle, however, depends on the path the vehicle takes in the reported event. In an ASL dictionary(14), this classifier (“CL:3”) is described as follows: “Depending on the movement, you can use CL:3 to show the parking of a car, a row of cars, an accident, etc.” Notice the “etc.” at the end of the description. Unlike other dictionary entries, where a movement is specified (usually via arrows overlaid onto the image), no movement is specified here. This is because there is an open, rather than closed, set of possibilities for the movement parameter. This movement parameter is what Padden (1990) and Supalla (1982) call a “locative affix.”

Figure 2.4: Classifier as Spatial Verb

(11) Since then, similar classes of verbs have been identified in almost every signed language that has been documented (Mathur and Rathmann 2012:137).
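This three-way classification, and the problem posed by the open movement parameter, can be made concrete with a small data model. The following sketch is my own illustration in Python, not drawn from Padden, Supalla, or any sign language corpus; all types, glosses, and coordinates are hypothetical. It encodes the point that plain verbs fix their parameters, agreement verbs leave two loci open, and spatial verbs leave an entire movement path open.

```python
# Hypothetical sketch of Padden's three-way verb classification.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class VerbClass(Enum):
    PLAIN = "plain"          # e.g. LOVE: no person/number inflection
    AGREEMENT = "agreement"  # e.g. GIVE: moves between loci linked to arguments
    SPATIAL = "spatial"      # e.g. PUT, CL:3: movement encodes locative relations

Locus = Tuple[float, float, float]  # a point in signing space (x, y, z)

@dataclass
class Verb:
    gloss: str
    verb_class: VerbClass
    # Agreement verbs: start/end loci associated with subject and object.
    start_locus: Optional[Locus] = None
    end_locus: Optional[Locus] = None
    # Spatial verbs/classifiers: a free movement path. Unlike an affix
    # inventory, the set of possible paths is open-ended (the dictionary's
    # "etc."), which is why calling it a "locative affix" is problematic.
    movement_path: Optional[List[Locus]] = None

# "you-give-me": the sign travels from the addressee's locus to the signer's.
you_give_me = Verb("GIVE", VerbClass.AGREEMENT,
                   start_locus=(0.0, 0.5, 1.0),  # near the addressee
                   end_locus=(0.0, 0.5, 0.0))    # near the signer's body
```

Because start_locus, end_locus, and movement_path range over a continuous space rather than a finite inventory, nothing in this model behaves like a discrete affix, which is precisely the difficulty taken up below.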
However, affixes are discrete units that come in finite sets. Therefore, the formal element with a locative function in spatial verbs like this one is not comparable to locative affixes found in spoken languages. And yet, spatial modulations in the production of these verbs establish relations between the verb and its arguments. In this sense, they are a grammatical manifestation of “agreement”(15). This generates several analytic and theoretical problems, some of which will be familiar from the discussion of homesign(16). For example, are relations between the verb and its associated elements semantic relations? Syntactic relations? Or are they spatial relations, which are conceptualized by the signer as any other spatial relation would be, and therefore not linguistic at all? Interestingly, these semiotically complex forms have been treated as an indicator that a grammatical system is emerging.
2.2.4 Spatial Modulations in Nicaraguan Sign Language
While a range of linguistic features have been described in ISN, the literature is overwhelmingly focused on spatial modulations as a sure sign of emergent linguistic structure. Across researchers, and with time, different analyses have been proposed, with different implications for our understanding of how language emerges, where it comes from, and what sorts of principles govern its development. Kegl et al. treat spatial modulation as a kind of grammatical agreement between a verb and its arguments. They write, “the grammar of ASL allows a single verb to express subject and non-theme object agreement as well as person and number marking by spatial agreement of the verb with grammatically established referential index points in the signing space” (2001:190). They consider the structure underlying these relations to be an “abstract grammatical device” which the human mind is predisposed to develop. This device is not present in LSN, but is present in ISN, which suggests that LSN is not a full-fledged language, while ISN is. However, this device does not develop spontaneously, as there are similar structures in LSN that appear to be precursors. They explain:
LSN signers do not seem to use any abstract grammatical device to establish spatial indices, especially for people. [However] [t]hey do sometimes agree with real-world locations or paths that are in the shared knowledge-base of the signer and addressee (ibid.).
However, verbs cannot “agree” directly with real world locations. Although the terminology is not explicit, Kegl et al. indirectly recognize this distinction by assigning linguistic status to the former phenomenon, and precursor status to the latter phenomenon. One example of this shift involves the following transition: In LSN, a verb like speaking-to (a person) is linked to participants via a pointing gesture, and “the people referred to are generally present and available as the targets of these pointing gestures” (ibid.:190-1). The pointing gesture “sweeps” from one location to another to indicate who is speaking to whom. In ISN, the same verb is produced by moving from one location to another, and the pointing gesture drops out. This is a characteristic shift that took place in the transition from LSN to ISN (ibid.:191). This change is understood as evidence that an abstract grammatical device appears in ISN which was not present in LSN.
These conclusions follow from the idea that syntax is the most language-like of all linguistic phenomena. Senghas, for example, begins by stating that “one of the most central components of a language’s grammar is its means of expressing argument structure; that is, how subjects and objects are linked to their respective verbs” (2000 [1999]:679). Senghas says that such relations are often established in signed languages via spatial modulations in the verb. The directional movement within these modulations she takes to be a “spatial morpheme”:
The concept of spatial morphological elements may be unfamiliar. As in spoken languages, developed sign languages append grammatical elements to words. Many signs are produced neutrally in a central location in front of the signer. By altering the direction of a sign’s movement to or from a non-neutral location, the signer adds a spatial morpheme. For example, in American Sign Language, nouns are marked as definite and specific by being indexed to a particular location in front of the signer; verbs then agree with their noun arguments by taking on these same locations. An agreeing verb will begin at the location assigned to its subject, and move to the location assigned to its object (ibid.:698).
However, in initial attempts to describe structures like these, Senghas and colleagues found that the signers did not localize nouns in the ways they had expected. The verbs were produced with movements to the left and right of the signer, but no “loci” were established before or after the production of the verb. Therefore, Senghas reports, “We [ . . . ] asked whether these movements toward non-neutral locations were predicted by the semantic role associated with the nouns in the sentence” (2000 [1999]:698-9). In other words--they asked if the direction in the movement of the verb consistently mapped onto semantic relations such as “agent” and “patient.” In order to answer this question, research subjects were shown a video stimulus that included 22 signed sentences produced by the research subjects themselves (both cohorts). These sentences had been elicited during an earlier study, using a simple video stimulus that involved events like a woman tapping a man. Research subjects were asked to watch the sentence and then choose from a list of pictures on an answer sheet. After each sentence, the research subjects were asked if the direction in which the verb was produced made any difference for the interpretation of the sentence they had just watched (ibid.:701-3).
Senghas found that signers in the first cohort interpreted directional verbs as corresponding to a wider range of stimuli than the second cohort. A difference in the directionality of the verb did not correspond to a difference of direction in the stimulus. So if a woman tapped a man, or a man tapped a woman, the form of the verb, including its directional movement, was likely to remain the same. The second cohort, on the other hand, assigned a more narrow interpretation to the directional movement of the verb, consistently associating it with the direction of the represented movement from the character’s perspective (as opposed to the signer’s perspective). These differences were also reflected in their metalinguistic judgements.
When asked ...whether the direction of movement in a verb made a difference in their responses, all four first-cohort subjects responded that a verb could be signed to the left or the right without changing the meaning of the sentence, and without affecting their responses. In contrast, all four of the second-cohort subjects responded that the direction in which the verb was produced did make a difference (Senghas 2000 [1999]:703).
Ultimately, it is this shift from a wider to a more narrow interpretation (or an increase in “specificity”) that best describes the shift between the less and more elaborated semiotic systems. However, this is not exactly what Senghas was looking for at the outset, and it is not accounted for by any explicit theory of language. The goal in the beginning was to establish consistent relational patterns between the verb and its “subject” and “object”--all of which are syntactic categories. The explicit theoretical assumption was that this kind of syntactic relation is the most central component of a language’s grammar. However, in the absence of loci, which could be associated with the nominal elements, these relations could not be established formally. Instead of positing a zero morpheme, or a null argument, Senghas explored the possibility of assigning semantic roles to the lexical nouns in the signed sentence and establishing relations between those roles and the verb, much as Goldin-Meadow did (see section 2.1). However, no generalization emerged. Therefore, an even more basic notion of contrast (in the Saussurian sense) was appealed to. In a sentence where see and pay are both produced with directional movements to the left, signers in the first cohort would find two interpretations equally acceptable--either one person was seen and another was paid, or a single person was both seen and paid. Signers in the second cohort, however, only found the second interpretation acceptable. In the transition between LSN and ISN, a meaningless variation in signing became differentiated into two contrastive forms with systematically distinct meanings.
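The judgment pattern can be summarized schematically. The sketch below is my own illustration in Python, not part of Senghas’s materials or analysis; the cohort numbering and reading labels are shorthand for the description above. What it records is that the second cohort’s acceptable readings are a proper subset of the first cohort’s: a narrowing, rather than an addition, of interpretive possibilities.

```python
# Hypothetical summary of the see/pay judgment pattern described above.
# When SEE and PAY are both produced with movement toward the same side,
# which readings does each cohort accept?
def acceptable_readings(cohort: int) -> set:
    shared = "a single person was both seen and paid"
    distinct = "one person was seen and another was paid"
    if cohort == 1:
        # First cohort: direction of movement is not contrastive, so a
        # shared direction does not force a shared referent.
        return {shared, distinct}
    # Second cohort: signs produced toward a common locus now indicate a
    # common referent, so only the co-referential reading survives.
    return {shared}

# The second cohort's readings are a proper subset of the first cohort's:
# a narrowing of retrievable values, not the addition of new ones.
assert acceptable_readings(2) < acceptable_readings(1)
```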
This constitutes the “emergence of a new grammatical structure,” which, Senghas speculates, may have originated in more “concrete” uses of space. Via metaphorical structuring of the kind found in Lakoff and Johnson (1980) and Taub (2001), these concrete uses of space were mapped onto more “abstract” uses of space for establishing relations between signs (R.J. Senghas 2003:527). For example:
the movement toward a location with the verb give indicates the recipient of a giving event. Perhaps child learners of NSL first developed conventions for physical, locative descriptions, and then used these to bootstrap into devices for grammatical relations (R.J. Senghas 2003:527).
Here we see that the theory of language that is in play has momentarily shifted away from syntax and toward a more fundamental, structuralist notion of contrastive opposition. Contrastive opposition is a relation between signs, and as in Saussure, this relation is considered “abstract” with respect to the undifferentiated conceptual and material substance it is differentiated against. Language emergence is associated here with this process of abstraction. However, unlike Saussure, Senghas speculates that the ground against which these distinctions emerge is itself differentiated. The two orders are linked via metaphorical mapping (17).
Following up on this idea that grammatical use of space derives from more concrete uses of space, Senghas identifies two main functions associated with spatial modulations: (1) expressing the participants of events, or as she says, indicating who and (2) describing locations and orientations of referents, or as she says, indicating where. In order to determine if a who construction is in play, one must ask, “is the signing space used in a way that shows who did what to whom? For example, in a sentence that describes a man giving something to a woman, do signers use space to link the signs man and woman to the roles of giving and receiving?” (2010:292). Senghas answers these questions in the affirmative. In order to determine whether a where construction is in play or not, one must ask: do signers have a common system for representing objects and their locations? Do they have common signs for objects and common uses of signing space to locate referents relative to each other? For this, they must have consistent ways of “mapping between their spatial signs and physical locations in the world” (ibid.:296). The who construction is taken to be abstract, while the where construction is understood as “iconic,” and therefore more “concrete” and “closer to its gestural roots” (Senghas 2010:290). These are understood as distinct construction types, however, Senghas speculates that their origins are similar.
We do not doubt that both uses have their origins in the gestural reference to the locations of people and things. It is no surprise that we might describe something that is to someone’s right with a gesture to the right. Such a spatial reference was unquestionably adopted into the homesign systems that predated and fed into NSL [Coppola and Senghas, 2010]. It may even be the case that the argument structure constructions [the who constructions] initially adopted wholesale the forms used to describe spatial relations. That is, there may very well have been a time when she gave to him was expressed with a construction meaning she gave to the right (ibid.:299).
However, when the second cohort arrived, these two uses diverged and became two distinct types of construction (ibid.). Senghas asks, then, which came first, and concludes, counter to her initial intuitions (2003:527), that the abstract who construction came first. This suggests that the innate capacities of child language-learners have an important role in the process of language emergence.
The locative use of spatial modulation, however, is not expected to follow the same path of abstraction. That is because the locative forms are “iconic” and must remain that way in order to fulfill their function: “[M]uch of the form of such utterances is drawn from the structure of the world” (ibid.:291). What makes these constructions useful for communicating is that their interpretation is mediated not only by the relation of the sign to the world, but also by the relation of signs to other signs (ibid.). This relation between signs is accounted for by a “conventionalized device” that allows signers to determine “how space is being used in a particular utterance” (ibid.). Without such a device, “[a] single movement might be simultaneously to the north, toward the door, or to the right of the signer. The interlocutor must be able to identify which interpretation of the movement is intended” (ibid.:291). This “device” sounds like a grammatical structure, but appears to be identified only with the process of “conventionalization.” So here the linguistic (abstract) and iconic (concrete) dimensions of spatial modulation are linked via conventionalization--a fundamentally social process whereby arbitrary correspondences between form and meaning become stable over time.
The argument for the linguistic status of ISN, or “Nicaraguan Sign Language,” rests on the emergence of an abstract grammatical device; however, this device amounts to a conventionalized way of mapping signing space onto spatial relations in the “real world.” This involves relations between signs and referents as much as it does relations between signs and signs. The canonical example that is used in many works is the see and pay example, which is analyzed as a case of “co-reference” and “agreement.” The two signs co-refer to the locus by moving toward it in space, and in doing so, manifest agreement between both verbs and their shared nominal argument. This is presented as evidence that Nicaraguan Sign Language has achieved full-fledged linguistic status: “Signs produced in a common location now unambiguously indicated a common referent” (R.J. Senghas et al. 2005:301). R.J. Senghas and colleagues conclude that, “at this point, the construction could be used to link a verb to its arguments, a noun to its modifiers. Now a common spatial modulation could be used to mean that a single person was both seen and paid” (ibid.).
This argument raises problems that can also be found in the literature on spatial modulation more generally: Can a verb “refer” or ”co-refer” to its argument(s)? How can the locus with which the verb refers be phonologically specified? If it cannot be phonologically specified, then it must be posited as a null argument paired with a deictic gesture as it is realized, which would require an interaction of syntax and the deictic system. If, on the other hand, there is a non-linguistic conceptualization of space underlying the grammatical structure, then what mechanism accounts for their relationship? Bootstrapping? Inference? Blending? Abstraction? Conventionalization? Lastly, what if the non-linguistic world which interacts with linguistic structures and devices cannot be adequately described via conceptualizations of the world outside of language, but rather, must include additional elements and dynamics, which are not governed by strictly cognitive principles, but rather, by social, historical, or interactional principles?
In a practice framework, an ambiguity between referents and arguments is a clear indication that a process of deictic integration is under way. In this case, the process leads to a narrowing of the values that are retrievable from the deictic field of the language. Signers in the first cohort interpreted directional verbs as corresponding to a wider range of stimuli than the second cohort. Therefore, if the stimulus included a woman tapping a man, or a man tapping a woman, the form of the verb, including its directional movement, would remain the same. The second cohort, on the other hand, assigned a more narrow interpretation to the directional movement of the verb, consistently associating it with the direction of the represented movement from the character’s perspective (as opposed to the signer’s perspective). Ultimately, it is this shift from a wider to a more narrow interpretation (or an increase in “specificity”) that captures the shift between the less and more elaborated semiotic systems. In other words, a reciprocity of perspectives had been established, which affected the organization of the deictic field. Directional verbs, or, in a practice framework, what we might call “deictic verbs,” retrieve values from that field. Over time, arbitrary restrictions on patterns of retrieval emerge. Ultimately, this process aligns the linguistic system with its contexts of use, including language-external modes of semiosis, which might otherwise be called “gesture.”
A Class of Verbs with a Gestural Component?
Scholars working in distinct theoretical frameworks have converged on two orders of phenomena that must be considered in any analysis of “agreement” verbs. Senghas calls these two orders “iconic” and “grammatical” and also “concrete” and “abstract.” Following Jackendoff (2002), Mathur and Rathmann (2002, 2012) view these two orders as distinct modules, related via an interface between “spatio-temporal structure” and “the articulatory-phonetic system.” The first module is syntactic, the second is gestural, and they posit a pairing of the null non-first person forms with a deictic pointing gesture to account for the endpoint of the verb’s directional movement. Meier and Lillo-Martin (2012) address this semiotically complex aspect of agreeing verbs in terms of a tendency to “point.” Nearly all signed languages studied to date have a sub-class of verbs that work this way, and interestingly, as signed languages mature, both dimensions become more closely associated with certain functions and meanings, and these functions and meanings are coordinated with one another in increasingly restricted ways. Meir (2011) describes a process like the one recounted for Nicaraguan Sign Language, where static verbs plus pointing gestures are replaced by spatially modulating verbs. In a discussion of her results, Meier and Lillo-Martin write:
With historical change, the endpoints of directional verbs have ceased to be fixed--they have lost their lexical specification--and instead have become free to point to locations associated with arguments of those verbs [ . . . ]. The surprising conclusion is that, with time and with the emergence of morphosyntactic processes that are agreement-like on our view and on that of Irit Meir, ISL verbs (or at least the endpoints of those verbs) have in some sense become more gestural, not less. They point more (2012:154).
In the research on NSL, this pairing of “pointing gestures” with grammatical processes is for some reason associated with the systematization of “iconic” elements. However, pointing suggests an indexical, not an iconic, relation. More specifically, the functions of agreeing verbs that do not fit easily into a syntactic frame are canonically associated with deixis.
As will be discussed in section 2.3, the typical tripartite verbal system found in nearly all signed languages is not found in the second generation of a very new signed language called Al-Sayyid Bedouin Sign Language. Instead, there is a two-way split between spatial verbs and plain verbs. There are no verbs with a directional component, where that directional component serves either an anaphoric function or a syntactic function. This suggests that agreeing verbs derive, diachronically, from spatial verbs. If this is the case, then what we are seeing as signed languages mature is a tightening of linguistic and deictic relations. By tightening, I mean that the relations between sign-vehicle and referent are increasingly caught up in and coordinated with relations between signs and the categories to which they belong (i.e. Morris’ ‘universal signs’). What makes them more linguistic than spatial verbs is the relative density of the relations between the two orders of phenomena. This is what I am calling “deictic integration.”
In order to get some analytic purchase on this notion of deictic integration, two distinctions must be made at the outset. First, the deictic system must be distinguished from the deictic field (Bühler 2001 [1934], Hanks 1990). Prior to instantiation, deictic signs are highly schematic (Hanks 1990, 2005). When they are applied in the speech situation, they receive “field values” (Bühler 2001 [1934]:99). Field values are retrieved from distinct fields, including the symbolic field and the deictic field. The former inheres in the linguistic system, while the latter does not. Their symbolic meaning derives from oppositions in the language (Here is not there; I am not you), which accounts for definiteness of reference. Their indexical meaning derives from the deictic field, which accounts for directivity of reference. Bühler compares the deictic field to pathways, which extend out around the speaker, projecting a limited set of choices for activity. He compares deictic signs to signposts on those pathways. We use deictic signs to prevent wrong turns, clarify potential ambiguities, or highlight one choice over a limited set of alternatives (ibid.). Therefore, the efficacy of deictic signs is primarily attributable to the deictic field, which restricts possibilities for interpretation prior to the instantiation of the deictic sign (see also Hanks 2005b:193-196). Second, processes and constraints that inhere in the deictic field must be analytically distinguished from the grammar of the language more generally. Only then can principled relations be established.
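Bühler’s two-source picture of deictic meaning can be sketched schematically. The following Python sketch is my own illustration, not Bühler’s or Hanks’s formalism; all names and values are hypothetical. It separates what the sign brings to the situation (a schematic symbolic value, defined by opposition) from what the deictic field supplies (the actual value retrieved in context).

```python
# Hypothetical sketch of Bühler's distinction between a deictic sign's
# symbolic value and the field value it retrieves in use.
from dataclasses import dataclass
from typing import Dict

@dataclass
class DeicticField:
    # Shared modes of access and orientation in the speech situation.
    speaker: str
    addressee: str
    accessible_locations: Dict[str, str]

@dataclass
class DeicticSign:
    form: str
    symbolic_value: str  # from oppositions in the language: I/YOU, HERE/THERE

    def instantiate(self, field: DeicticField) -> str:
        # Definiteness of reference comes from the symbolic field (the sign's
        # place in a system of oppositions); directivity of reference comes
        # from the deictic field, which supplies the actual value.
        if self.symbolic_value == "I":
            return field.speaker
        if self.symbolic_value == "YOU":
            return field.addressee
        return field.accessible_locations.get(self.symbolic_value, "unresolved")

field = DeicticField("signer-A", "signer-B",
                     {"HERE": "the spot near signer-A",
                      "THERE": "the spot away from signer-A"})
print(DeicticSign("here", "HERE").instantiate(field))  # the spot near signer-A
```

Note that the same schematic sign yields different values as the field changes; prior to instantiation, nothing in the sign itself fixes the referent.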
In the cases of language emergence we have examined so far, phenomena organized by deictic principles have not been granted their own construct. They are backgrounded, and only called on as things that can fill in where needed to make the linguistic theory internally consistent. This is an effect of examining language emergence through a theory of language. In a broader semiotic frame, different kinds of semiosis can be distinguished from one another more easily. Once again, Morris (1971 [1938]) is useful in this respect because of the primacy he attributes to the “syntactical dimension” of the sign while also situating syntax in a broader semiotic frame. For agreement verbs in signed languages, the autonomy of syntax is at once the problem and the solution. For example, if syntax is autonomous, then every element in the sign must be phonologically specified; otherwise, it cannot be accounted for with the categories and relations that represent the linguistic system. Then again, because syntax is autonomous, the abstract relations can be peeled away, and the problem of phonologically unspecified forms is reduced to the insignificant difference between an argument and a null argument.
The Primacy of the Syntactical Dimension
Morris’s sign is composed of one triadic relation and three dyadic relations. The triadic relation consists of the designatum (D), the sign vehicle (S), and the interpreter (I). Each of these three aspects can be thought of as points that make up a triangle; the lines that connect the points can be thought of as the dyadic relations (1971 [1938]:6).

Figure 2.5: The Triadic Relation of the Sign

The first dyadic relation is that of sign vehicle to object (S to D). This is the “semantical dimension.” The second dyadic relation is that of the sign vehicle to the interpreter (S to I). This is called the “pragmatical dimension.” The third dyadic relation does not complete the triangle, as one might expect. Instead, it represents the formal relation of sign vehicles to one another (S to S). This third relation is the “syntactical” dimension.

Figure 2.6: The Dyadic Relation of the Sign

The reason there is no line connecting the designatum and the interpreter is that there is no unmediated experience. This appears as a problem to Morris. He states: “...It has become clear to many persons today that man--including scientific man--must free himself from the web of words which he has spun and that language--including scientific language--is greatly in need of purification, simplification, and systematization. The theory of signs is a useful instrument for such debabelization” (Morris 1971 [1938]:3). Morris wants out of the webs of words he is suspended in, but he knows that there is no such thing as immediacy, or pure sense-perception. Therefore, he goes in the other direction (abstraction). He wants to break the transparency of language by creating a technical descriptive language for those webs, and others like them. In order for this to work, however, the language of semiotic must apply universally to all languages, and so Morris says, “Semiotic supplies a general language applicable to any special language or sign, and so applicable to the language of science and specific signs which are used in science” (ibid.).
Although Morris stresses the “three-dimensional” character of his approach, and says that no one dimension should be emphasized over any other (1971 [1938]:10), he goes on to say that a sign (triadic entity) can still be a sign without a denotatum. It can also be a sign without an actual interpreter. Therefore, neither the relation of S to D nor the relation of S to I is necessary. “It is not possible, however, to have a language if the set of signs have no syntactical dimension, for it is not customary to call a single sign a language” (ibid.). The line connecting the sign vehicle to the sign vehicle addresses the question of whether or not you can have an isolated sign vehicle that is not a member of a system of sign vehicles. Morris says you cannot: “Certainly, potentially, if not actually, every sign has relations to other signs, for what it is that the sign prepares the interpreter to take account of can only be stated in terms of other signs” (ibid.:7). Therefore, in Figure 2.6, the meaning of “S” must be thought of not as “sign-vehicle,” but as a system of relations through which sign-vehicles are defined by their relation to other sign-vehicles, or “syntax”--not the syntax of a specific language, but that of a more general language, which can only be discovered on the basis of its necessary consequences in specific languages.
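Morris’s necessity claim can be restated as a small model. The sketch below is my own rendering in Python, with hypothetical types; it is not Morris’s notation. It makes the semantical (S to D) and pragmatical (S to I) dimensions optional while requiring the syntactical (S to S) dimension, and it encodes the stipulation that a single sign does not make a language.

```python
# Hypothetical rendering of Morris's triadic sign and its three dimensions.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Sign:
    vehicle: str                                          # S
    designatum: Optional[str] = None                      # S-D: semantical (optional)
    interpreter: Optional[str] = None                     # S-I: pragmatical (optional)
    related_signs: Set[str] = field(default_factory=set)  # S-S: syntactical (required)

def is_language(signs: List[Sign]) -> bool:
    # "It is not customary to call a single sign a language," and every sign
    # must stand in relation to other signs, at least potentially.
    return len(signs) > 1 and all(s.related_signs for s in signs)

# Two signs defined only by their opposition to one another still qualify,
# even with no denotatum and no actual interpreter.
here = Sign("HERE", related_signs={"THERE"})
there = Sign("THERE", related_signs={"HERE"})
print(is_language([here, there]))  # True
```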
In established languages where syntax is the object of analysis, the analytic loop might run more smoothly from theory-internal logics of a general language to necessary consequences of that theory in specific languages. However, any argument for the emergence of a new language must necessarily posit a relationship between a system-internal logic like this and something else which is both prior to and semiotically distinct from that system (such as gesture, homesign, or a pidgin). If the two systems are taken to be of the same semiotic type, then the phenomenon becomes language change, not language emergence. This requires an explicit position on the relationship of syntax to phenomena that are, in some proportion, gestural, iconic, deictic, or otherwise semiotically distinct.
2.2.5 Deictic Integration in Nicaraguan Sign Language
So far, the tendency has been to posit a certain kind of abstraction or disassociation of syntax from the gestural phenomena it interacts with(18). In this section I have shown that spatial modulations, which have been used as the primary evidence for language emergence in Nicaragua, simultaneously express syntactic and deictic relations by integrating deictic elements and relations into the linguistic system in tighter and more restricted ways over time. This observation contributes to the overarching argument of this dissertation--that theories of language emergence should include an explicit theory of context, which does not skip over everything in between grammar and demographics. In particular, I argue for the importance of the deictic field, which is organized by shared modes of access and orientation, as opposed to strictly linguistic principles. The deictic field is not part of the linguistic system; however, in this section, I have shown that understanding the process through which linguistic and deictic elements are coordinated in tighter and more restricted ways is crucial to understanding processes of language emergence.
2.2.6 The Emergence of the Social Field of Nicaraguan Sign Language
In this section, I have also argued for a principled way of understanding the relationship between nascent signed languages and the social fields they grow up in. In the Nicaraguan case, there is a clear divide between linguistic and social analyses. From the psychologists’ perspective, the role of socio-historical phenomena is primarily limited to demographic data, including the age and year of entry into the school. However, Polich describes the emergence of an asymmetrical social structure within the Nicaraguan deaf community. Authority and legitimacy accrued to certain social positions and not others, and these asymmetries were institutionalized in the structure of national Deaf organizations, eventually influencing the schools as well. These are precisely the kinds of transformations that can be accounted for using the anthropological notion of a “social field,” which derives from Bourdieu’s practice theory and has since been applied to the analysis of language in social context (Hanks 2005a, 2005b). In this section, I have argued that in order for Nicaraguan Sign Language to emerge and become a full-fledged language, it had to become a legitimate means of position-taking in a specific, historically emergent social field. Close attention to naturalistic interaction among signers in that community would provide insight into the cumulative effects of position-taking on the disposition of language users in that community and the structure of their language.
2.3 Al-Sayyid Bedouin Sign Language
2.3.1 The Social Field of ABSL
Al-Sayyid Bedouin Sign Language (ABSL) emerged under a different set of social pressures than either homesign or Nicaraguan Sign Language. The incidence of deafness among the Al-Sayyid Bedouin is high, and many families have both hearing and deaf members (Kisch 2012:87). In a population of about 4500 people, approximately 130 are deaf (ibid.:90). In this context, hearing and deaf children are often exposed to the local signed language from birth. Therefore, Kisch calls ABSL and other signed languages like it “shared sign languages,” highlighting the fact that signing is not something that deaf people do exclusively amongst themselves. Rather, signing enables communication between hearing and deaf people.
Over the past 30 years, however, the sociolinguistic landscape has undergone many significant changes that have exerted pressures on how ABSL is used. First, separate schools have been set up for deaf and hearing children. The schools differ in the quality and focus of the education provided, and they are leading to a divergence in social networks. One of the effects of these changes is that the space shared by deaf and hearing people has been consistently shrinking (ibid.:110). Another effect is that Deaf Al-Sayyid women are increasingly marrying Deaf men from elsewhere, and Israeli Sign Language (ISL) is becoming the language used in the home (ibid.:111). When Deaf Al-Sayyid women marry Al-Sayyid men, their husbands are, with rare exception, hearing (ibid.). These patterns together lead to an increasingly significant split between the sign language that is used among deaf people (ISL) and the sign language that is used for deaf and hearing people to communicate (ABSL). The former is associated with an emerging deaf identity, or sense of “deafhood” (Kisch 2008), which is necessary for accessing broader, deaf social networks. When schools for deaf and hearing children were separated, non-kin networks became more central in mediating employment opportunities for deaf men, and when deaf women were employed, it was often in the schools themselves (ibid.:114). Kin-based networks tended to strengthen ties between deaf and hearing people, and the local sign language grew. Within the newer non-kin networks, these ties are becoming weaker, and the use of ABSL is becoming less frequent (ibid.).
Linguists interested in the emergence of ABSL have focused on the earliest available generation of signers, who grew up before formal education was made available to deaf children, and when it was still rare for hearing children (ibid.). The first generation of signers included 6 deaf individuals (Kisch 2012:101). This generation developed home sign systems within their families, and they were only exposed to external signed languages in very limited contexts(19). The younger siblings, however, were exposed to the more elaborated home sign systems of their older siblings, since there was as much as 16 years’ separation in the ages of the siblings (ibid.). In addition, the hearing people who acquire the language are bilingual in the local signed and spoken languages. Therefore, Kisch argues, ABSL cannot “be considered to develop without exposure to a language model” (ibid.:88).
Nevertheless, the structures described by linguists are distinct from the structures of surrounding spoken and signed languages. Therefore, despite the unspecified diachronic relation between the spoken language and the emergent signed language, a significant degree of autonomy appears to obtain. The second generation of signers is composed of 11 deaf signers (Kisch 2012:102). These signers did not grow up with older deaf and/or hearing signers in their homes. Kisch speculates, drawing on interview data, genealogical data, and social network analysis, that the parents of these children picked up some sign language from the first generation signers and relatives who learned to communicate with them, but for the most part, new homesign systems evolved independently in each family (ibid.:102-3). In addition, these homesigners came in contact with external signed languages, again in limited contexts(20). The third generation is increasingly bilingual, using both ABSL and ISL to communicate in their daily lives (Kisch 2012:104). In general, though, among the third generation, ABSL has become the language used for communicating with hearing family members and within extended kin-networks, while ISL is the language of school and work, and the language most closely associated with an emergent Deaf identity movement.
Within two generations, then, homesign systems became integrated with the social field that organizes marriage patterns, labor patterns, socialization, and more broadly, the circulation of knowledge. However, like other signed languages that have arisen in similar circumstances (e.g. Zeshan and de Vos 2012, Nonaka 2007), this field is now shifting, and knowledge of ABSL is becoming less useful for taking up desirable social positions. This is leading to more restricted usage of the language, and could eventually lead to its attrition or death (ibid.). This suggests that a crucial element in the emergence and maintenance of a language is an institutional structure, or stable social field, which can be occupied via the use of the signed language. In the homesign case, no full-fledged language developed because homesign cannot be used to occupy a complex, internally asymmetrical social field. In the case of Nicaraguan Sign Language, a full-fledged language did emerge, and this hinged on the emergence of an internally differentiated social field, where institutional authority accrued to positions taken up via legitimate use of the signed language.
2.3.2 What Counts as Language-like in ABSL
When linguists began studying the structure of ABSL, there was almost no evidence available from the first generation of signers. Therefore, they focused on the second generation (Sandler et al. 2005:2662)(21). In circumscribing a language-like object of analysis, many of the same problems that arose in the first two cases also apply to ABSL. Like the homesign case, the first evidence that was presented to support the language-emergence case was a robust word order, which, importantly, was distinct from the surrounding spoken and signed languages (Sandler et al. 2005). Like the Nicaraguan case, this pattern emerged fairly quickly, in the second generation of signers. Like both previous cases, the phenomenon is treated as language-like because it provides a way of “relating actions and events to the entities that perform and are affected by them, a pattern rooted in the basic syntactic notions of subject, object, and verb or predicate” (ibid.:2664). Unlike non-linguistic means of making such connections, syntactic systems have the “effect of liberating the language from its context or from relying on the semantic relations between a verb and its arguments” (ibid.:2665). In other words, the ability of the syntactic system to dissociate from the semantic and pragmatic dimensions of determining who did what to whom, what happened to what, and what got changed, is the hallmark of language.
Recall that in spatial modulations of verbs in signed languages, the autonomy of syntax caused problems for the phonological representation of certain elements of the sign, since some of those elements were gestural. In the homesign case, the representation of a nominal argument of the verb took the form of a deictic gesture directed at an actual object in the room. This causes no problems for the analysis, because the syntax has abstracted away from the sign-vehicle; the NPs do not need to be phonologically specified. This all points to a demotion of phonology in the range of phenomena that can count as language-like, since phonological specification appears optional.
The work on ABSL pushes further in this direction. These scholars find that despite the generally accepted assumption that duality of patterning is one of the basic design features of language (Hockett 1960), ABSL, in its second generation, has no duality of patterning (Aronoff et al. 2008). Instead of claiming that ABSL is, therefore, not quite a full-fledged language, they claim that the basic design features of language should be reconsidered. Their evidence for this claim is, interestingly, not linguistic:
In the absence of a structural definition of what constitutes a completely developed human language, ABSL’s functional versatility and the absence of any apparent difficulty in communication, combined with its acceptance as a second language in the community, lead us to conclude that it is a bona fide but very new human language (Aronoff et al. 2008:134).
This harkens back to Sapir’s claim that language is a “complete system of reference,” which is to say that language will do everything that users of that language need it to do (Sapir 1949[1934]:153). There is a certain seamlessness in the fit between the linguistic system and the world in which it is instantiated, so that no trouble in communicating can be detected. This is presumably not the case for homesigners, or others who do not use a full-fledged language. In place of phonology, both “holistic” and “compositional” expressions are found (ibid.:135). They explain:
Although we do not dwell on it here, we find (especially in the narratives of older signers) frequent occurrences of depictions of entire propositions in a single unanalyzable unit. For example, in describing an animated cartoon in which a cat peeks around a corner, one signer used his entire body to depict the cat’s action. These holistic pantomimes are interspersed with individual signs. The individual signs contrast with pantomimic expressions in several ways: they are conventionalized, much shorter, confined largely to the hands (rather than involving the entire body) and express concepts that are members of individual lexical categories (e.g. noun, verb, modifier) and distributed accordingly in the syntax. This mixing of pantomime and words suggests that the rudiments of language may encode events holistically to some extent, but that compositionality is available as a fundamental organizing principle at a very early point in the life of a language (ibid.).
Because their explicit definition of language is based on a goodness of fit between the communicative activity (or what they call “linguistic events” (ibid.)) of signers and the world in which those activities unfold, both pantomime and compositional elements count as “linguistic expressions” (ibid.). This is consistent with their finding that ABSL had no duality of patterning until recently, so that a more direct connection between the sign-vehicle and the object to which it refers is permitted, without compromising the linguistic status of ABSL.
2.3.3 Deictic Integration in ABSL
The earliest morphological process described for ABSL is compounding, and as in homesign, the compounds are composed of one characterizing sign and one deictic sign. For example, place names tend to be generated by compounding a sign that represents a typical piece of clothing worn in the area, or some other typical characteristic of the place, with a pointing sign that corresponds to the location of the place. The authors explain one case that involves a head scarf, typically worn in the place referred to, and a pointing gesture, which is glossed as the sign there:
The sign head-scarf is used as a single sign elsewhere in the language to refer to the kafiyeh commonly worn by Arab men throughout the region, but the compound form head scarf [plus] there, refers specifically to the Palestinian Authority (the West Bank and Gaza), and to cities located in those areas, such as Hebron. The sign long-beard describes facial hair, but in the compound long-beard-there, the form loses this specific reference and comes to mean Lebanon (Aronoff et al. 2008:146).
The order of the compounded elements is fixed--the deictic component is always word-final (ibid.). This consistent ordering of characterizing and deictic elements is an indication that deictic elements and relations are becoming increasingly caught up in and organized by the grammar of the language. In other words, deictic integration is contributing to the emergence of the morphological system of ABSL. Deictic integration can also contribute to our understanding of its emergent phonological system.
Sandler et al. argue that unlike established signed languages, ABSL is only beginning to develop phonological structure.(22) By phonological structure, they mean a system of meaningless elements, which combine according to particular constraints to form meaningful units in the language (Sandler et al. 2011:508). Evidence for the existence of such units in established signed languages has included minimal pairs, the predictable absence of logically and motorically possible signs, and predictable assimilation patterns that do not follow from mere coarticulation effects (ibid.:508-15). In earlier stages of research, the authors had administered three picture-naming tasks to 23 subjects in an effort to compile an ABSL dictionary. However, they found a wide range of lexical and formational variation (ibid.:517). Therefore, they returned to their data, this time in order to determine whether ABSL had any of the tell-tale signs of phonological structure present in established signed languages. They found very little evidence to support such a claim.
First of all, we have encountered no minimal pairs in our study of the language to date. While we can’t deny the logical possibility that minimal pairs are there but evading us, we find it striking that none have surfaced so far, in over 150 words of elicited vocabulary [ . . . ] hundreds of elicited sentences, and numerous narratives and conversations. Second, while constraints on the form of a sign are not absent, they are not strictly enforced. We interpret this as an indication that these constraints, shared as they are by established sign languages that have been studied, are articulatorily grounded, and become more strictly enforced as phonological organization emerges.
So, they say, it is “as if the signers are aiming for an iconic and holistic prototype, with details of formation taking a back seat” (ibid.). For example, the sign for lemon was produced by different signers using different handshapes, orientations, and movements. However, the variations are themselves meaningful in that they correspond to different ways of squeezing a lemon (ibid.:518). Another example is the sign for dog:
Of eleven signers, ten used the same lexical item, representing the barking mouth of a dog with the hand or hands. One signer represented a dog’s ears and paws, this exception proving the rule that dog was the same lexical item for the other subjects. Ten out of eleven is unusually high consensus on a lexical item and dog therefore gives us a good opportunity to observe phonetic variation. While the sign is iconically motivated, it is still lexicalized, in the sense that it conventionally selects a particular aspect of dogginess to represent: barking. [ . . . ]. Across the exemplars of dog in ABSL, there was a great deal of variation (ibid.:519).
Variation was distributed across feature categories that are high-level in established signed languages, such as handshape, selected fingers, location, and movement(23) (ibid.:519-20). So, for example, in one instance, the sign dog was produced in the area of the torso, while in another, it was produced near the mouth of the signer. In established signed languages, these major body areas (head and torso) are contrastive. The authors argue that
[o]n the face of things, one might be tempted to suggest that it just so happens that these particular features are not contrastive in this language while other heretofore unattested features are contrastive. But we stress that this is unlikely, because differences in pronunciation such as those we exemplify here involve major feature categories . . . If the language does not exploit these broader categories to make distinctions, it seems unlikely that it will exploit finer distinctions. By looking for contrasts at higher levels of the hierarchy--comparable for example, to a contrast between voiced and voiceless states of the glottis or nasal and oral sounds rather than finer distinctions such as between coronal and palatal places of articulation--we are giving ABSL, a newly developing language, the benefit of the doubt, assuming that early contrasts would be at broader rather than finer levels of articulation . . . Even at the broader levels, we find non-contrastive variation and no minimal pairs (ibid.:520).
Where signs in spoken languages can be broken down into meaningless elements, ABSL contains signs which, as a whole, correspond to an “iconic prototype.” The conceptual prototype is not systematized in the language, but it does represent regularities in experience, some of which become foregrounded and expectable. In a footnote, the authors explain: “Dogs are not beloved pets in the Al-Sayyid village. Rather, they are feared, and are chained near livestock to fend off intruders. It is no wonder, then, that the most salient feature of a dog there is its barking mouth” (ibid.:519). While iconicity can account for the relation of resemblance between the sign and the referent from the perspective of ABSL signers, iconicity does not explain why the barking mouth, as opposed to other aspects of the dog, would be selected as the relevant aspect of dogginess (why not the running paws, as in Israeli Sign Language, for example?).
In order to explain the selection of the mouth, the indexical relation between the sign, the object, and the conceptual representation of the object must be considered. According to Peirce, an index “is a sign which refers to the Object that it denotes by virtue of being really affected by that Object” (1955/1940 [1893-1910]:102). An index is not related to its object by similarity or analogy, as an icon is, but rather by association, either in space or in “the senses or memory of the person for whom it serves as a sign” (ibid.:107). For example, a weather vane is an index because it shifts according to the direction of the wind. In this same way, patterns on the surface of water can be an index of wind.(24) In both cases, the “sign” is “really affected” by the object.
In any social world, things are next to other things. We are differentially affected by the things we live among, and these differential affections (or dynamical contiguities) cohere into patterns in everyday life.(25) Therefore, as people in a particular place move through space, they have certain expectations about what they will encounter and how they will be affected. Insofar as these patterns of expectation are shared, they will tend to produce a convergence in the associations and expectations that signers have, and this kind of convergence will influence the selection of certain aspects of the referent over others in the conventionalized lexical representation. The relation of resemblance (iconicity) that obtains between this aspect and the sign-vehicle is secondary. If Sandler et al. are right, and this convergence on a conventionalized lexical representation is a precursor to duality of patterning, then indexicality should be given a key role in processes of language emergence, and more specifically, deixis. It is not important that the new sign for dog resembles the dog, but rather that the process of creating a sign for dog is influenced by patterns in how people routinely encounter (or “access”) dogs in the course of an ordinary day.
These kinds of patterns give rise to “pathways” in Bühler’s sense, which accrue to the indexical ground of utterance, and in some cases, are incorporated into the deictic field, which supplies values to the deictic system of the language. Here again, deictic integration, or the coordination of deictic and linguistic elements in tighter and more restricted configurations, takes on a crucial role in the process of language emergence. Iconicity cannot explain why one aspect of the referent would be incorporated into the representation, over and against others. In contrast, deictic integration makes the selection of one aspect an ethnographically predictable choice, hinging on shared modes of access and orientation to the immediate environment, which cohere in local patterns of activity and exchange.
2.4 Deictic Integration and Language Emergence
In all three cases, the emergence of a language-like system corresponds to a tightening of relations between linguistic and deictic phenomena into more restricted configurations. In the homesign case, deictic and characterizing signs combined in increasingly predictable orders as the system matured. In addition to the role played by the innate capacities of the mind, assignment of semantic elements in a given order relied on certain modes of access, such as shared knowledge, perceptual access, and shared patterns of use (e.g. both communicators are familiar with the routine use of an object, the location of the object is expectable for both, they can both see the object, etc.). If no distinction is made between semantic content and these modes of access, then knowledge about the location where the shovel is usually stored in a particular house would need to be stored in the semantics of the language and associated with a pointing gesture. It seems advantageous to assign a more schematic meaning to the gesture (e.g. locative) and attribute the rest of the meaning to the modes of access available to both speaker and addressee in the deictic field. From there, one can ask how semantic and deictic elements are integrated into tighter and more restricted configurations, to yield more elaborated and more predictable communicative effects.
In Nicaragua, language emergence has been associated with the emergence of spatially modulated verbs. I recounted the finding that for a verb like speaking-to (a person), signers used to point to a person in the immediate environment, produce the verb, and then sweep the finger from one person to another to indicate who was speaking to whom. Later on, signers moved the verb from one location to another, incorporating the sweeping pointing gesture into a single, verbal sign. This is like agreement in the sense that relations are being established between a verb and entities that can be represented by nominal signs. However, the referents are not represented by nominal signs. Instead, they are linked directly to the verb via a deictic gesture. Positing a null argument is one way of addressing this problem. Another way, which I have put forth here, is to posit a process that draws linguistic and deictic elements into tighter and more restricted configurations as the language develops. Under this analysis, certain classes of verbs develop receptors, set to receive a limited range of values from the deictic field. Like a pointing sign, they cannot be interpreted until the sign has been applied to the speech situation and field values have been retrieved.
Deictic integration has also been important in the emergence of ABSL. For example, ABSL has recently developed a productive morphological process whereby one deictic and one characterizing sign are compounded to produce place names. As these connections have become increasingly conventionalized, the order of the compounded elements has become fixed; the deictic component is word-final. Therefore, in the terms that are being developed in this dissertation, the consistent ordering of elements (in addition to changes and reductions in the movements of the signs) enacts the same kind of tightening of relations between deictic and linguistic phenomena that was noted in the NSL and homesign cases. In NSL, linguistic and deictic elements combined to yield a subset of verbs with a directional component. Agreeing verbs are generally assumed to be more linguistic than spatial verbs, because the deictic component has an anaphoric, rather than a strictly referential function. It indexes a relation between linguistic elements, rather than a relation between a linguistic element and an element outside of language. In ABSL, only spatial verbs have been identified. This suggests that in the second generation of ABSL signers, deictic components of spatial verbs are not as tightly integrated into the relations between signs as they are in more established signed languages, such as Visual American Sign Language.
Recall Fillmore’s claim that relations between a verb and its semantic elements are undergirded by “a set of universal, presumably innate, concepts which identify certain types of judgements human beings are capable of making about the events that are going on around them, judgements about such matters as who did it, who it happened to, and what got changed” (1968:24). In ABSL, such capacities no doubt were in play, but equally important are the kinds of access that participants have to objects and to other people in the routine patterns of their daily lives (see also Kisch 2012). These forms of access contribute to processes of conventionalization, which Sandler and colleagues note is far more central in language emergence than they had previously assumed (2011:536). Ultimately, in fact, they argue that
conventionalization among signers, and the automaticity and redundancy that go with it, underlie the emergence of a meaningless formal level of structure in the language of a community. As a particular sign becomes conventionalized, attention to the form-meaning correspondence is reduced, and the formational elements themselves self-organize, under cognitive and motoric pressures for ease of articulation, formal symmetry, and the like. An element that is automatically and conventionally part of some sign may become redundant in the sense that the meaning of the sign does not directly rely on it, and it can then become vulnerable to permutation under formal organization pressures such as ease of articulation (ibid.:537).
What I am suggesting is that an important part of conventionalization--including the automaticity and redundancy characteristic of form-meaning correspondences in language--derives from the patterns that organize the deictic field, or the modes of access and orientation through which speaker and addressee access objects in the immediate environment. These patterns are further embedded in a social field, which has taken shape around work, family, marriage, and school-related activities. This field has become internally complex and asymmetrical, such that ABSL can be used to access some social positions and not others (Kisch 2012). Therefore, in order for ABSL to emerge as a full-fledged language, linguistic elements have to be aligned with the deictic and social fields where the language is used. As these relations become more stable, and the language is more thoroughly embedded, it becomes more linguistic in nature. This means that language is not strictly linguistic. Rather, a language coheres in the relations of embedding between linguistic, deictic, and social phenomena. Nevertheless, each category of phenomena requires a different analytic approach, since each is governed by distinct principles of organization. Therefore, they are distinguished initially, in order to draw principled connections between them, simplifying the linguistic analysis and preventing the misapplication of linguistic models to nonlinguistic phenomena.
Chapter 3
The History of the Social Field of TASL
In this chapter, I sketch the history of the social field of Tactile American Sign Language (TASL).(1) I show that sensory change is only one element in a complex set of factors that contributed to this process. A tactile language did not emerge simply because a group of people who were deaf and blind came together in the same geographic location. However, it was also not the case that DeafBlind people decided to invent a language. Rather, they set out to solve practical problems via political and social means. One of the many effects of those efforts was the eventual emergence of a new language. This chapter examines shifts in sensory orientation, communication, and language among DeafBlind people in Seattle as part of broader social and political dynamics, in order to understand the social foundations of TASL.
The Seattle DeafBlind community was established by the late 1980s, and yet TASL did not diverge significantly from VASL until the mid-2000s. Therefore, the first question that must be asked is not why a new language emerged, but why it didn’t happen sooner. Much of this chapter aims to address this question by looking at the institutionally embedded social roles available to DeafBlind people, how they came to occupy those roles, and eventually, how social roles and relations were reconsidered by DeafBlind leaders, leading to the initiation of a social movement, which took root between 2006 and 2010.
This movement, known as the “pro-tactile” movement, triggered a fundamental shift in what was imaginable for DeafBlind people. Instead of working toward improved resources for compensating for or coping with vision loss, DeafBlind people began to imagine a world that could be inhabited without compensation--a world that felt natural, concrete, and effortless. The pro-tactile movement started as a critique of the overwhelming dominance of sighted people in DeafBlind spaces. Almost immediately, though, critique gave way to the morass of what it would mean to establish a DeafBlind space. No one really knew. What kinds of practices would make a room “inviting” for a DeafBlind person? What would a meeting run for and by DeafBlind people look like? How could groups of DeafBlind people communicate without relying on sighted people to mediate? If sighted people were not so ubiquitous, what decisions might DeafBlind people make for the future of their community? Therefore, from the start, the scope of the movement was necessarily broad, incorporating everything from co-presence and reference to legitimacy, authority, and power. It was never a set of fixed “techniques” for communication.
Pro-tactile practices(2) are guided by what leaders of the movement call a philosophy, which begins with the following axiom: Legitimate knowledge can be produced from a tactile perspective without first passing through visuality. In a visual world shaped by sighted people, vision loss leads inevitably to alienation and subordination. Sighted people will always know more about the world, and their perspective on it will always be more legitimate. However, given a tactile world shaped by tactile people, it becomes possible to understand visual worlds in tactile terms, and alienation is no longer inevitable.
Therefore, for leaders of the pro-tactile movement, the first move was not to create a bridge between DeafBlind individuals and the broader society, but to find a place away from sighted people where DeafBlind people could cultivate tactile sensibilities and modes of communication.

(2) On the topic of myths, taboos, and stereotypes about blind people, Frances A. Koestler (1976) describes the dual figuration of blind people in the popular imagination. On the one hand, they are figured as tragic and dependent, worthy of pity and charity. On the other, they are imbued with magical or extra-sensory powers (ibid.:7). She cites many examples, including a young woman who, it was claimed, could distinguish colors by smell (ibid.:5), and another who could distinguish them by touch (ibid.:6). Another woman could purportedly read the bible, thanks to her “eyeless sight” (ibid.). These and many more cases were shown to be hoaxes or misunderstandings in the end, and, Koestler implies, have more to do with entertaining the public than with the lives of blind people. Koestler points out that “what most people continue to misunderstand, is that both acuteness of hearing and sensitivity to touch in blind people are not compensatory gifts of nature but the products of long, hard concentration and training” (ibid.:4). In other words, the sensory orientations of blind people are the outcome of practices which incorporate sensory dimensions. They are not reducible to a natural outcome of sensory capacity or change. Recognition of this fact is the starting point of this chapter. However, I am not only interested in showing that this is the case, but also in how particular practices were shaped by social and historical forces, and how these developments set the stage for the pro-tactile movement.
Prior to the pro-tactile movement, DeafBlind people rarely communicated directly with one another. Instead, they communicated via sighted interpreters. This meant that the field of engagement was organized along visual lines and accessed via compensatory strategies. Interaction was fundamentally non-reciprocal. People stood at visual distances from one another. They used visual attention-getting strategies (waving a hand in the direction of a person, for example). They used visual back-channeling cues, such as head-nods and eyebrow signals. They attended to the visual qualities of objects and the visual dimensions of encounters and represented those qualities and dimensions using a visual language. Although some DeafBlind people received visual signs tactually, the language and the fields to which it articulated remained visual. This was possible because DeafBlind people worked with interpreters to find ways of approximating visual ways of listening, interacting, and thinking. However, as vision was lost, and visual memories faded, approximation became less and less effective. Therefore, greater vision loss meant greater exclusion from social life.
DeafBlind individuals did everything they could to avoid exclusion, and as part of this, powerful stigmas were established around everything related to touch. The pro-tactile movement works against these stigmas, insisting that tactility is not the problem, but the solution. However, simply changing the modality through which signs are produced and received would not have been enough. From early on, the leaders of the movement were calling for a broader shift in the way people oriented to their environment, their language, their bodies, and the institutionally embedded social roles they inhabited.
In order for these changes to take place, boundaries around what counted as appropriate and inappropriate touching had to be revised, and the norms that felt intuitive to sighted people had to be left behind. Once this was accomplished, tactile alternatives to head-nodding, attention-getting, and turn-taking could be established. Tactile communication in groups could be worked out. DeafBlind people could learn to discern qualities such as politeness, impatience, and attractiveness by evaluating tactile cues against new frames of social value. All of these developments were prerequisites for language emergence. In other words, the emergence of TASL as a distinct linguistic system followed from a reconfiguration of power relations, new frames of social value against which communicative behaviors could be evaluated, new structures of interaction, and a new tactile habitus.(4)
While some of these changes happened slowly, there were key events that acted as catalysts. In 2010, a series of 20 pro-tactile workshops was organized by DeafBlind leaders for 11 DeafBlind participants. Counter to custom, no interpreters were provided, and no sighted people were invited.(5) Since these workshops, new communication practices have proliferated, along with discourses about their social significance.
The idea that DeafBlind people could develop their own communication practices and learn from one another, rather than from sighted people, was a major shift in thinking. Prior to these workshops, most communication-related instruction was provided to DeafBlind people by sighted people. Indeed, in a visual field of engagement, sighted people were the experts. In the pro-tactile workshops, DeafBlind instructors had to work hard to convince their students that in a tactile field of engagement, they were, in fact, the experts. Adrijana, one of the leaders of the movement and an instructor in the pro-tactile workshops, explained it to her student in the following way:
We need to teach sighted people our tactile ways. All this time, it has seemed like we’re slow to catch onto things. Sighted people are always thinking so hard about how to explain things to us. It makes so much sense for us to figure it out ourselves. We learn from each other really quickly. We don’t talk to each other as though things will be difficult to understand--saying things slowly and in perfectly broken down steps. The problem--the reason why they’ve done that all this time, is because they don’t know how tactility works.
They have no intuitive understanding of touch. They’re just more tuned in to auditory and visual aspects of things--all of their habits are based on sound and sight. So they aren’t the right people to try to figure out how tactile practices work. It really doesn’t make any sense for them to try to teach us how to communicate and how to relate to things. We’ve been working so hard to do it their way, but we can do better than that. We can meet half way by inviting them into our tactile world and showing them how touch works.
Adrijana is not saying that sighted people should be excluded from the DeafBlind community, or that they have nothing to contribute. There is nothing about pro-tactile discourses that suggests an attachment to separatism, authenticity, or identity politics. The focus is, instead, on the possibility of immediacy and the social and political futures riding on that possibility. In order for immediacy to be achieved, DeafBlind people have to have time and space to figure out how tactile communication works and what it means to be a tactile person. In Giddens’ terms, a process of “social integration” was required (1979:76-7).
In the passage above, Adrijana raises two problems. First, she points to the dominance of sighted people in the shaping of DeafBlind communication practices and argues that DeafBlind people are in a much better position to develop these practices, since it is easier for them to become attuned to the tactile dimensions of language and their environment. Second, she argues for direct communication between DeafBlind people, which had previously been rare. In this chapter I argue that both the concentration of communication expertise among sighted people and the absence of direct communication between DeafBlind people have historical explanations, and understanding this history is crucial in understanding the emergence of TASL (6).
3.1 The Seattle Lighthouse for the Blind
In the Seattle DeafBlind community today, there are two main institutions around which the community has been built: the Seattle Lighthouse for the Blind and the DeafBlind Service Center (DBSC). DeafBlind people have moved to Seattle in waves since the mid-1980s. Most were able to do so because they were offered employment at the Lighthouse. Therefore, the Lighthouse has played a foundational role in the establishment of the Seattle DeafBlind community. However, this fact is not reducible to the provision of jobs. The Lighthouse is a manufacturing company, but its mission has always included employment support and a variety of social services, in addition to employment opportunities. On its webpage,(7) its mission is stated as follows:
[...] to create and enhance opportunities for independence and self-sufficiency of people who are blind, DeafBlind, and blind with other disabilities
This combination of manufacturing and social service is a distinctive characteristic of organizations like the Lighthouse, most of which began as “sheltered workshops for the blind.”
Sheltered workshops have played an important and contentious role in the lives of hearing blind Americans since the 19th century, and they are at the center of political discourses that have intensified since the beginning of the 20th century. In what follows, I draw on some of this history in order to sketch the scene that pre-existed the DeafBlind program at the Lighthouse. I argue that the inception of the DeafBlind program at the Lighthouse was a site for the convergence of Deaf and blind histories, social roles, and political dynamics. It was this complex and specific social field that eventually gave rise to the pro-tactile movement and to Tactile American Sign Language. Therefore, understanding these historical convergences is important for understanding this case of language emergence. The more general blind history recounted below is not meant to stand in for the history of the Lighthouse or the DeafBlind community, but rather, to give a sense of the broader social field that shaped both.
3.1.1 Sheltered Workshops for the Blind
The first sheltered workshop was established as part of the Perkins School (then called the Perkins Institution and Massachusetts Asylum for the Blind) (Koestler 1976:209). The sheltered workshop was a solution to a widespread problem: when graduates of the Perkins Institution sought jobs, despite their training and capabilities, they faced many obstacles. So in 1840, a separate work department was established in the school, a model soon replicated in schools for the blind across the country (ibid.). Later, the work departments were transferred from the schools to voluntary organizations, and later still, to state agencies (ibid.). By the 1950s, they had been entirely transferred out of blind schools. However, they retained certain elements of their history. A school would be much more inclined to take responsibility for the moral and emotional well-being of children than to view them as laborers who could help turn a profit. This was also the case for the workshops.
The goal of these organizations was not to turn a profit, but to give blind people a sense of purpose and independence (ibid.). This view of blind labor also appeared in the 1930s, when blind people argued for a work program that would serve the same purpose for them that the Public Works Administration (PWA) served for sighted Americans. However, there was a parallel discourse that viewed the provision of such jobs as an act of charity. As the country stabilized, and the PWA was shut down, the latter of these discourses prevailed. Blind labor was not primarily seen as something that was done in exchange for monetary compensation. Rather, it could be exchanged for “dignity” and “self-esteem” and was presented as an alternative to isolation. Monetary compensation (often minimal) took a secondary role in the arrangement (Koestler 1976:195).
By the 1950s, the sheltered workshops were well-established, but transportation was very limited, so blind people had to live nearby in boarding homes. Eventually, people who had not grown up blind, but had become blind later, came to live in these homes and be trained in “personal adjustment” and “work skills” (Koestler 1976:209). In this way, the workshops became vocational training centers as well. There were several ambiguities that were endemic to the institution from early on. First, it was not clear whether the workshops were intended to be temporary interventions that would help blind people find gainful employment elsewhere, or if they were intended to be a refuge for people who lacked alternatives.
In 1908, there were 16 workshops nationwide, all of which produced a limited range of handmade objects including brooms, caned chairs, and woven goods. They employed a total of 583 blind people. These workers were paid
an average of just over $3.00 per week per person. It was hardly a living wage, even in those days. But then, workshops were not expected to yield a living wage; they were subsidized by their sponsoring agencies, and the blind person whose family could not supply the difference between his earning and his needs usually received a small cash supplement from the agency (Koestler 1976:210).
However, during World War I, several hundred of these workers were employed in war factories, and paid significantly better wages. Their posts in the workshops were filled by “multi-handicapped people,” so when the war was over, there were two problems. First, it was no longer clear who should have priority in the workshops, since many blind people had shown that they could work in industry. However, it was not clear that blind people with other physical or cognitive disabilities would be capable of such a thing, and therefore, perhaps the workshops should be reserved primarily for them. Exacerbating the problem was the fact that blind people who had been working in industry were no longer interested in the low wages and poor work conditions that were common in the workshops (Koestler 1976:210). The same problems would arise during World War II, and answers to these questions would require further clarification as to the primary purpose of the workshops.
What should be the basic function of the workshop? Should it be primarily a training school to fit people for employment in open industry? Should it be a self-supporting production unit, able to compete in the open market with commercial firms? Should it be an outright social service, a work therapy setting for those blind people who could never realistically be expected to pull their economic weight? Should it combine all three functions? (Koestler 1976:210-11).
In the past, these questions have been answered in contradictory ways, contributing to tensions between blind laborers and those making decisions that affect them. Answers to these questions also change depending on how they are interpreted and on the historical context in which they are considered. For example, if people who were once considered incapable of working were suddenly able to pull their own weight in war times, then a designation of incapacity can be understood as a way of removing competitors from a saturated labor market, not a descriptive fact about blind people. However, some argue (though not in these terms) that the unwillingness of sighted people to hire blind workers is a social fact, which renders blind people unemployable. In this view, a distinction between social and physical reasons for unemployability is irrelevant.
Limited employment opportunities have been a central concern for blind people since at least the 1920s (Koestler 1976:9). In the 1930s, the situation became even more pressing, and three pieces of legislation were introduced to mitigate it: the Randolph-Sheppard Act of 1936, the Wagner-O’Day Act of 1938, and the Vocational Rehabilitation Act amendments of 1943 (ibid.:193). The Randolph-Sheppard Act had its origins prior to the 1930s, in the observation that the PWA provided work opportunities for millions of people, but much of the work it provided could not be done by blind people. Therefore, there should be a supplementary national program through which blind people could be employed (ibid.:197).
Previously, in 1920, a law was passed ensuring that blind people were among the groups given priority in operating newsstands in Federal buildings. This was a lucrative alternative to the limited range of “blind trades” that would have otherwise been available. The New York Association for the Blind soon implemented a program helping people access this new opportunity through interest-free loans and other forms of support (ibid.:193). According to Koestler, this was an important development leading up to the Randolph-Sheppard Act because blind people moved into the public eye, where they were “showcased” as examples of competent business operators and not merely tragic dependents. This led to additional opportunities for blind people in manufacturing and production as well as Federal civil service (ibid.:198).
Blind leaders focused their efforts on continuing to improve the public image of blind people, in an attempt to broaden employment opportunities. In 1937, Joseph Clunk was appointed to administer the Randolph-Sheppard Act, thereby becoming the first blind civil servant (Koestler 1976:198). The Act required that at least 50% of those hired to administer it at the Federal level be blind as well, so Clunk was responsible for hiring the first blind Federal civil servants in the history of the United States. Clunk’s aim was to seize on the opportunities that the Randolph-Sheppard Act created, while not acquiescing to the presuppositions that made the passage of the act possible. Rather than appealing to the sympathies of employers, or asking for “concessions,” he argued that the limitations of blind workers could easily be overcome with a little imagination on the part of employers. Once employers could be convinced that particular jobs could be done by blind workers just as well as they could be done by sighted workers, then blind people would be free to enter the labor market with no need to ask for charity. Furthermore, their labor could be exchanged primarily for money, rather than dignity.
3.1.2 From Sheltered Workshops to Big Business
The history of blind labor suggests that the possibility of work for blind people has more to do with ideological and economic conditions in a particular period than with the physical capacities of blind people. Since the 1920s, the situation has fluctuated--improving and deteriorating as circumstances change in the labor market, in manufacturing in the United States, and elsewhere. However, in the late 1930s, a special place was carved out for blind labor in the “state-use” market to prevent blind people from being pushed out of their jobs every time one of these fluctuations occurred.
In the late 1920s, prison labor had started flooding markets, including broom manufacturing. Labor unions, manufacturers’ associations, and citizen groups all banded together to try to eliminate the unfair competition by restricting the sale of prison-made products to “state-use,” thereby removing them from the open market. One of the manufacturers’ associations suggested that the workshops for the blind be given priority in the production of state-use brooms. The workshops followed up on this. Though they weren’t given first priority, once the entire inventory of prison-made brooms had been purchased by Federal departments, workshops for the blind were allowed to bid for the remaining contracts (Koestler 1976:212). Workshops began competing with one another for work and in doing so, started undercutting each other’s prices (ibid.). This led to worsening conditions for blind workers. It became clear that in order to address the problem, the workshops would need to secure federal broom business that did not require such fierce competition (ibid.:213). To this end, the Wagner-O’Day Act was passed in 1938. This act mandated that brooms and mops and “other suitable commodities” be purchased from blind agencies at market price (ibid.:214). Two months later, the National Industries for the Blind was established to implement the Act.
In 1939, the first federal order was filled, and the 36 participating workshops sold $220,000 worth of brooms and mops (Koestler 1976:219). This was a positive outcome of the Wagner-O’Day Act as it had been conceived. However, with World War II, blind workers were one of many groups needed to meet production needs for the Federal government, and the Wagner-O’Day Act suddenly placed blind workers in a privileged position. State-use markets, which had once been marginal, were now booming, and the workshops had more work than they could follow through on (ibid.:220).
Only one year after the National Industries for the Blind was established, in 1940, workshop sales rose from $220,000 for 36 workshops to $1 million for 44 workshops (ibid.:220), and average sales for the duration of the war were $8 million annually. In response, workshops expanded and, far in advance, began to plan for post-war changes in demand. By the time the war ended, the rapid decline in Federal sales was already being offset by a rapid rise in commercial sales. By 1960, 62 NIB-affiliated workshops were up to $24 million in sales, $8.7 million of which was earned through sales to Federal departments. From 1971 on, the military would be included among the Federal departments required to give preference to organizations that employed blind laborers. Nevertheless, military cutbacks and a more general recession began in 1969, and the early 1970s were fraught. Koestler writes:
What happened to NIB during this troubled period constituted more than operational and financial reorganization. There was a change in direction, away from the toe-to-toe competition with profit-making industry which had been the main thrust during the Sixties and back to the basic purpose of services aimed at giving blind men and women maximum opportunity for self-support through constructive use of workshop facilities for vocational training and employment (1976:226).
However, over the previous several decades, vocational rehabilitation services had grown, and blind workers had been placed in jobs in open industry. Those who were still employed by workshops were mostly those with multiple disabilities (1976:226). While employing blind people had always required equipment modifications, the new demographic required many more services. Koestler writes:
Brought into play were medical, psychiatric, and psychological testing; individual and group counseling; assistance with mobility and with skills of daily living; recreational services; social work help with family relationships, housing, and other problems (1976:226).
These changes coincided with a nation-wide emphasis on standards in training methods, required qualifications of staff, construction of facilities, and operating practices and procedures in the human services (ibid.:227). One of the ambiguities about the function of sheltered workshops and the status of those employed by them emerged again as a problem around this time.
To the sponsoring agencies and the taxpaying or contributing public which financed the workshops, the people who worked in them were subsidized clients of a non-profit social service. Many of the people, however, thought of themselves as employees who earned by means of their labor and were therefore entitled to the same rights and benefits as all other workers: minimum wages, unemployment insurance, paid vacations and various other fringe benefits. While many of the more enlightened workshops did, in fact, provide such benefits, others were guilty of substandard work practices if not outright exploitation. Even these, it should be said, were not necessarily acting callously but out of differences in viewpoint as to what workshops were designed to accomplish. Those who believed workshops should operate as self-supporting entities, neither making a profit nor requiring subsidy, attempted to hold on to their best and most productive workers, making little or no effort to move them out into open industry. In such shops the less capable workers who could not earn their keep were left to fend for themselves (Koestler 1976:227).
If the people who worked in the workshops were employees, they had certain rights. If they were clients of a non-profit social service organization receiving training, therapy, and support, these rights did not necessarily apply. For example, “[s]ome were paying low trainee wages to persons employed under a vocational rehabilitation plan and kept such persons in trainee status for unduly long periods” (ibid.). It was also claimed that Vocational Rehabilitation (VR) counselors contributed to the problem by using the workshops as an easy solution for people they thought would be difficult to place (ibid.). Once they referred them to the workshops, they no longer attempted to place them elsewhere, so the workshops became a kind of dead end (ibid.).
In 1966, against opposition from sheltered workshops, the Fair Labor Standards Act was passed, which mandated that employees of sheltered workshops be paid 50% of the minimum wage (ibid.:228). There were, however, classes of workshop clients who were exempted from this requirement--those who held trainee status, were “so severely handicapped that their earning capacity was severely impaired” (ibid.), or were employed in “work activities centers” (ibid.). These centers were intended for people who were deemed incapable of productive labor, and provided therapy, support, and activity, as opposed to work (ibid.). Although a minimum wage had been established, many other standards and benefits were denied, including unemployment insurance and collective bargaining rights (ibid.).
In 1971, with the amendment of the Wagner-O’Day Act, workshops for the blind were no longer strictly for the blind. Their privileged position in production for the federal government was opened up to workshops that served people with any kind of disability, not limited to blindness (Koestler 1976:229). This created an important opening for DeafBlind people. On the one hand, there were more jobs available for them, since hearing blind people had moved increasingly into open industry; on the other hand, there were fewer internal barriers to broadening the range of accommodations and services that could be provided, such as interpreting services.
In combination with other state agencies, the Seattle Lighthouse for the Blind would become central to the lives of many DeafBlind people. Their housing, medical, personal, and employment-related needs were often addressed via the Lighthouse. In order to receive these services, they had to take on roles given by the organizations that provided the services, and in doing so, they were shaped by those organizations. DeafBlind subjectivity in Seattle has emerged, since the 1970s, as something unique that is irreducible to either of its constituent terms. In order to understand this process, I begin with an account of how blindness organizations, including those like the Lighthouse, have shaped hearing blind subjectivities.
3.1.3 The Making of Blind Men
In The Making of Blind Men, Robert A. Scott examines the socialization of blind adults through their interaction with the “large, intricate, multimillion-dollar national network of organizations, professional specialities, and programs for blind people” (1969:1). Many of these organizations, including state agencies, have their roots in charity organizations like sheltered workshops, where the boundaries between givers and receivers are firm. Scott describes a similar dynamic in the support apparatus available to blind people in the 1960s. Boundaries between professionals providing services and those receiving them were clear, and the dynamic between them, as Scott described it, was one of conversion and domination that left blind people with a very limited repertoire of potential social roles (ibid.:71-89).
According to Scott, when blind people first seek help from an organization for the blind, they often have a clear idea of what their problems are and what kinds of help they are looking for. Some are experiencing difficulty reading, and would like to learn how to access texts in large print. Some would like help with household chores that have become difficult with deteriorating vision. Some would like to learn how to use a cane. However, the “workers for the blind,” as Scott calls them, have a very different idea of what their clients need. He explains that the professionals
regard blindness as one of the most severe of all handicaps, the effects of which are long-lasting, pervasive, and extremely difficult to ameliorate. They believe that if these problems are to be solved, blind persons must understand them and all their manifestations and willingly submit themselves to a prolonged, intensive, and comprehensive program of psychological and restorative services. Effective socialization of the client largely depends upon changing his views about his problem. In order to do this, the client’s views about the problems of blindness must be discredited.
What appears at first to the client to be a need for practical guidance is seen by the professionals as a small manifestation of a much larger problem. An attempt to learn large print becomes a battery of psychological tests. An attempt to learn to use a cane becomes a long-term program of “testing, evaluation, and training” (Scott 1969:78). What promised to be a resource for learning seemingly simple skills becomes a slow and complex process of socialization. According to Scott, there are various rewards and punishments for adhering, or failing to adhere, to these programs, which seek first and most fundamentally, to disabuse the client of their misguided impressions regarding their condition.
Scott distinguishes between two general approaches to “blindness work.” The first he calls the “restorative approach” (1969:80-84), and the second he calls the “accommodative approach” (ibid.:84-89). The restorative approach assumes that most people who become blind can return to a life much like the one they had prior to becoming blind. However, in order to succeed in doing so, the blind person must come to terms with a “life crisis” and be trained in various modes of “adjustment” and “rehabilitation” (ibid.:83). This process includes “training the other senses to take over the role of sight; training in basic skills and the use of various mechanical devices; restoring the sense of psychological security; and assisting the individual to meet the prevailing attitudes of the society toward him” (ibid.:82). Scott points out that the approaches imposed by the experts often do not coincide with those of the client. Ideas they might have had for improving their prospects are not taken into consideration. Therefore, the knowledge acquired by the client can, in addition to being useful, also act as a limit. Or in Scott’s words, “the choice of compensatory skills around which the theory revolves means the exclusion of a spectrum of other possibilities” (ibid.:84).
The restorative approach seeks maximal integration in the sighted world. However, proponents of the accommodative approach point out that the feasibility of integration changes, depending on many large-scale historical, economic, and social factors. Therefore, obstacles to gainful employment and social integration in other domains can be significant. To address this problem, accommodative organizations establish special environments that accommodate blindness. They install special auditory signals in the elevators, braille displays on computers, and so on. Some arrange special transportation, and provide foods in the cafeteria that are not awkward for blind people to eat. Social activities, such as “bingo games,” are organized, and sighted people are available to monitor the game and do anything the blind person is not able to do for themselves (ibid.:84-5).
In manufacturing companies that take an accommodative approach, the production method will often be engineered with the disability in mind, so that “there is little resemblance between an average commercial industrial setting and a sheltered workshop. Indeed, the blind person who has been taught to do industrial work in a training facility of an agency for the blind will acquire skills and methods of production that may be unknown in most commercial industries” (ibid.:85).
In accommodative settings, the aim is not to prepare blind people for work outside of the agency, but to help clients organize their lives around the agency or organization as a permanent solution to a completely disabling set of circumstances (ibid.:85). These circumstances include the physical fact of blindness, but also other factors, such as the widespread unwillingness of hearing sighted people to hire disabled workers. After many years in such an organization, the blind person is likely to be maladjusted to the outside world, and therefore, “has little choice but to remain a part of the environment that has been designed and engineered to accommodate him” (ibid.:85-6).
These two perspectives shape the field that blind people must occupy when seeking services, and a finite set of social roles emerges: the “expedient blind person,” the “true believer,” and the “professional blind person” (Scott 1969:86-7). The expedient blind person makes a conscious effort to perform the role expected of him in the presence of sighted experts in order to gain access to resources, but sees it as a performance that can be abandoned. The true believer is a blind person who actually experiences the emotions that the experts require of them (ibid.:87). They express emphatic gratitude to the organization, and they genuinely believe that they would not be able to live without it (ibid.). The professional blind person lives almost entirely within the network of organizations and agencies through which they have been socialized, and has very little contact with anyone outside of it (ibid.). The professional is often an employee of a blindness organization, and their employment is understood as an act of goodwill or charity on the part of the organization.
3.1.4 “Integration” from a Deaf Perspective
The split that Scott identifies between agencies oriented toward full integration of blind people into society and those aiming to accommodate them has been highly politicized among blind Americans. However, many members of the Seattle DeafBlind community had never come in contact with blind agencies or blind people before moving to Seattle. In the Deaf worlds they had come from, nothing was valued more than access to a community where sign language was used. For this reason, one of the main thrusts of political discourse among Deaf Americans has been to argue against so-called “integration” in deaf education.
Precisely counter to blind politics, Deaf political discourse has focused on the detrimental effects of deinstitutionalization, integration, and mainstreaming, since these moves often mean isolating deaf children in schools full of hearing children, cutting them off from any perceptible language, and therefore from normal patterns of socialization (e.g. Cleve 2007, Keating and Mirus 2003, Lane et al. 1996). As I describe in section 3.3.2, the Lighthouse was often apprehended by DeafBlind people as a place where the effects of blindness could be held at bay, and visual communication and ways of life could be recovered, if temporarily. Work was a means to that end, and the labor itself was not politicized in the way that it is among blind people.
However, 20 years later, the pro-tactile critique points to an asymmetric distribution of expertise that sounds strikingly similar to Scott’s critique. Adrijana and Lee, two of the central leaders of the movement, have consistently argued that the dominance of sighted people in matters of DeafBlind communication has undermined tactile modes of knowledge production. This asymmetry in knowledge production is comparable to asymmetries Scott describes, which lead to direct conflicts between the forms of knowledge produced by blind people on the one hand, and by the people providing services to them on the other. In the next section, I look at how the institutional structure of the Lighthouse may have affected the distribution of expertise in the DeafBlind program, and how these and other factors shaped communication practices in the DeafBlind community.
3.2 The DeafBlind Program at the Seattle Lighthouse for the Blind
The Seattle Lighthouse for the Blind, like other organizations of its kind, was once a sheltered workshop, and over the years has grown and diversified in terms of products and workforce/clients (Rochester 2004). However, unlike the others, in 1976, the Seattle Lighthouse established an employment program specifically for DeafBlind people.(8) In order to understand the pro-tactile movement and its effect on language and communication, I focus on two achievements in the early history of the DeafBlind program. First, Visual American Sign Language was established as the primary language of the community. This was not an obvious or inevitable development. In many other places where DeafBlind people are socially and politically organized, spoken English, paired with amplification systems, is the primary mode of communication. Second, conventions for mediated group communication began to be established, making it possible for DeafBlind people to meet in groups, as opposed to being limited to one-on-one communication. These important changes happened within the institutional structure of the Lighthouse with influence from Deaf and sighted people who had not previously been involved with blind people or the organizations and agencies that serve them. Many of those people were affiliated with or trained in the Interpreter Training Program at Seattle Central Community College, and/or were members of the Deaf community.
3.2.1 Interpreter Training Programs
Seattle Central Community College established a program for Deaf students in the 1960s and an Interpreter Training Program (ITP) in the 1970s. According to Laura, a Deaf student who was there in the late 1970s, there were about 100 Deaf students enrolled at the time. Some took two years of general requirements and then transferred to a four-year university, such as Gallaudet. Some learned technical skills such as boat-building or mechanics. The Deaf program and the ITP were housed in the same building, so there was a lot of interaction between hearing and Deaf students. Laura said that
[l]ater, it became really common for people to get together in the cafeteria. And people didn’t care if you were Deaf or hearing, as long as you were signing. It was a really thriving social scene. That’s what it was like back then. And interpreting services was in the same building, too.
Early on, when DeafBlind people moved to Seattle to work at the Lighthouse, they were among a very small group. Given the diversity in language backgrounds, it was likely that they would either be unable to communicate with other DeafBlind people or would have nothing at all in common with them and would not feel compelled to communicate with them.
Seattle Central Community College was an important resource for those people in broadening the pool from which potential interlocutors, friends, and communication supports could be found. Early on, ties between the two organizations were informal, but over time, they became stronger. First, a small number of specialists with Deaf-related expertise, who were affiliated with Seattle Central in some capacity, were hired at the Lighthouse in permanent positions. From the very beginning, this included Deaf and hearing people.
Next, the ITP at Seattle Central started encouraging (and later requiring) their students to volunteer in the DeafBlind community at events that were part of the DeafBlind program at the Lighthouse. This mutually beneficial relationship, which was forged in the 1970s, has been very important throughout the history of the DeafBlind community for maintaining the pool of interpreters available to work with DeafBlind people. In the late 1990s and early 2000s, the relationship became weaker, and students were not being asked or required to volunteer in the same ways. This trend continued further when a private ITP in Seattle and the Seattle Central ITP both closed, one after the other, due to changing standards in the national certifying organization for interpreters, and other factors.
In 2010 and 2011 when I was conducting my fieldwork, it was clear that there would soon be no ITP at all in Seattle proper. These changes contributed to a severe interpreter shortage in the DeafBlind community, which was only expected to worsen. Already, DeafBlind people were having to cancel or postpone events due to a lack of qualified interpreters. When given a choice between waiting and communicating without an interpreter, some chose the latter, and in doing so, were forced to develop new communication practices.
3.3 Why Didn’t a Tactile Field of Engagement Emerge Sooner?
When communication specialists and interpreters came to work at the Lighthouse in the 1980s, they did so in a variety of capacities. Although their training focused on the history, culture, and language of Deaf people, they had to learn how to extend their expertise to include things that would be relevant for Deaf people who were going blind. Some things required improvisation, while others fit fairly neatly into the structures, categories, and practices that were already in place. For example, one graduate of the Seattle Central ITP was hired to teach “independent living skills,” which is a recognizable category among blind people. Such a class would normally include instruction in how to cook without vision and in reading and writing Braille. The Department of Services for the Blind (DSB) provided these services, but only in spoken English, since most of their clients were hearing. When the number of DeafBlind people in Seattle started growing, it was cheaper and more effective to train an ASL user to provide the training directly than to hire interpreters, and DSB provided the funds.
These techniques or strategies were taught, for the most part, by sighted experts to adults who had become blind. Given this institutional structure, tactile reception of ASL fit in easily as an additional technique or strategy that could be used to compensate for vision loss. Just as Braille is a tool that helps people access written English, tactile reception was treated as a tool that could help people access Visual ASL. This alignment of tactile reception with services provided to blind people may have contributed to the sense that Visual ASL could be detached from the visual channel in which it was produced and received, as well as from the visual worlds and practices that had shaped it. On the one hand, there was a language. On the other, there was a means of adapting that language using compensatory strategies. In combination with the lack of direct communication between DeafBlind people, this distribution of expertise may have been one factor that contributed to the maintenance of a visual field of engagement, rather than the establishment of a tactile field of engagement.
3.3.1 Moving to Seattle from Elsewhere
Another factor preventing the emergence of a tactile field of engagement was the fundamentally visual orientation of DeafBlind people prior to their arrival in Seattle. People with Usher Syndrome, for example, were used to communicating in visual modalities while strategically compensating for their loss of vision. While living elsewhere, they had learned to linger in the back of the room where their tunnel vision would capture a wider swath of activity. In conversations, they stood far away from the person they were talking to so they could see both the hands and the face. When more than one person was involved in a conversation, they looked for cues to know when and in what direction to turn their heads. When this became impossible, they honed their skills of inference and tried at least to keep up the appearance of participation. When neither approach worked and even appearances couldn’t be maintained, they limited themselves to one-on-one conversation.
Slowly, entire categories of experience were deemed inaccessible: staying out past dark, going to parties, meeting friends in restaurants or bars with low lighting, and so on. If this process goes too far, people become withdrawn and isolated, and it becomes harder and harder for them to re-establish contact with the outside world. People forget how to behave in socially recognizable ways fairly quickly, their strange behavior drives others away, and isolation becomes self-perpetuating. People who move to Seattle do so, at least in part, to avoid such cycles.
Upon arriving in Seattle, DeafBlind people find a hopeful situation. They encounter others who are familiar with their experiences and who want to be part of a better future. They also find an army of interpreters trained to provide visual information and otherwise facilitate communication. With interpreters, they enjoy renewed access to some of the categories of experience that had previously grown inaccessible. If they had stopped joining group conversations, now they could do so with an interpreter. If they had stopped going out past dark, now they could do so with an interpreter. In addition, the strategies they had for maintaining visual communication practices became legible. In Seattle, in addition to making up part of an elaborate compensatory apparatus, these strategies also constitute ways of taking up recognizable social positions such as “tunnel vision person.” Outside of the DeafBlind community, they are more likely to be interpreted as idiosyncratic behaviors that mark a person as deviant or different.
Sighted and DeafBlind people together take part in building the compensatory apparatus. In Seattle it has been part of the common sense shared by sighted and DeafBlind people alike that if you are talking to a tunnel vision person, you have to back up. Everyone wears clothing that contrasts maximally with the color of their skin. People with light skin wear black, navy blue, or dark grey. People with dark skin wear white, or pink, or teal. That way the signs stand out against their clothes and tunnel vision people can go on longer using visual reception. Sighted people with ties to the DeafBlind community often carry contrastive clothing with them in case they run into a DeafBlind person, and DeafBlind people almost always wear contrastive clothes (so much so that they occasionally wax nostalgic about a time when they could wear red or polka dots).
There are also interactional conventions for turn-taking, so people cue one another when collective focus shifts. Everything is geared toward maintaining visual communication practices as long as possible, which is a relief to people who had previously been out there in Deaf communities trying to fill in the blanks, bridge the gaps, and keep up appearances with less and less success. In Seattle it is possible for familiar, visual sensory orientations to be kept intact a little longer. Therefore, for many of the people I interviewed, moving to Seattle was not a move toward tactility, but a way of postponing blindness. For those in the earlier generations especially, a great deal of negativity and fear had accrued to blindness. The promise of postponing it and the isolation it threatened was better than most could have hoped for--even if one day they would have to give up on vision entirely and “go tactile,” thereby becoming a “tactile person.”
3.3.2 Growing up with Usher Syndrome in the ‘60s and ‘70s
When people in the older generations were told they would go blind, they couldn’t imagine how life could go on at all. No one explained to them what they could expect or how they might cope. When people did suggest ways of coping with blindness, they were often very unappealing. For example, two sisters with Ushers, who had been living in Seattle since before a community formed there, reportedly sought out advice from a prominent Deaf teacher in the Seattle Deaf community about what to do when they lost their vision. They were told that once they were blind, they couldn’t sign anymore. They would have to sign smaller and smaller as their tunnel of vision grew smaller, and at the end they would have to switch to fingerspelling. Whether they were given this or other scenarios, blindness, it seemed, would be even worse than what they had already experienced.
In many cases, growing up with Ushers meant being picked on by other kids, being called clumsy, being treated as not smart or not capable because of misunderstandings surrounding vision, and so on. Blindness was what made you not a good athlete, not a graceful person, not smart--but it was not clear, in a positive sense, what life might be like as a “blind Deaf person.” Against this background, Seattle appeared as a place with hope for a collective future and energy for building it. Blindness was not stigmatized the same way that it was in the broader Deaf community. There were recognizable social roles to be inhabited and people to hang out with. Particularly in a time when access to information was limited, the phenomenon of the DeafBlind community came out of nowhere as a viable alternative to many of the effects of blindness--though not exactly as a place where blindness could be embraced. Counterintuitively, cultivating a “DeafBlind” identity led not to a shared world suited to a tactile mode of experience, but rather to services and social roles that would keep impending blindness at bay. Daniel’s story illustrates much of this.
Daniel grew up in a residential school for the Deaf in the 1970s. After graduation, he went to see an eye doctor because he suspected something was wrong with his vision. There were no interpreters present at the appointment, though, so the results of the exam weren’t clear to him. The doctor referred him to the Department of Services for the Blind (DSB). When he arrived at DSB for his first appointment, he thought he would be fitted for glasses. Instead, he had his first experience being thrust into the social role of a blind person.
[A woman who worked there, named Lisa] came out and met me, and pulled me by my forearm into her office. I thought, ‘What is this lady doing?’ But she just went right on, smiling, and pulling me by the arm into her office. Finally, we sat down. She pulled out a Braille book and some math cards. I had no idea what was going on. I couldn’t imagine why she was pulling out all of this stuff for blind people. I wrote on a piece of paper that she must have misunderstood or something, that I only came to get glasses. I told her I had perfectly good vision. So she wrote back:
You’re going to be blind in 15-20 years.
I couldn’t believe it. I was in shock. I felt terrible. “Blind!” I thought. I told her I had to go to work, and she asked if I would be coming back in two weeks. I told her I would--you know--whatever she wanted to hear. I didn’t understand if in 15 years I would wake up one day and suddenly be blind, or if I would be slowly going blind or what. I had very little actual information. When the time came to meet with Lisa, I didn’t go [ ... ]. The stigma associated with blindness was so great, that I assumed there was nothing but an empty existence for blind people. I was terrified of that [ ... ]. This was in the ‘70s, and it was different then. [ ... ] So the years went by, and I wasn’t sure what to do about it.
Later, Daniel met a blind Deaf person who had Ushers. That person told him about the American Association of the DeafBlind (AADB) and also explained crucial facts to him about what he could expect in terms of his vision--for example, that it would slowly deteriorate from the periphery in. Only after meeting several people with Ushers at AADB who all told him the same thing did he confirm this for himself. In 1984, Daniel attended AADB again, this time in Seattle. He liked what he found so much that he decided to move there.
I liked the people here in Seattle a lot. There seemed to be no stigma at all associated with being blind here. People were willing to help out when needed. I was really impressed. In [the state I had come from], if they found out you were blind that was the last you would see of them. It was really hard to find anyone willing to be your friend, let alone people to help you. In Seattle, not only were people willing to help, everyone saw each other as equals. I felt like I would have a better life in Seattle. [ ... ] So that is how I came to be a member of the DeafBlind community, and how I came to identify as DeafBlind.
Daniel was not the only one. According to a record compiled by a former director of the DeafBlind Service Center, 48 DeafBlind people moved to Seattle between 1984 and 1987. In interviews I conducted with several of these people, they told stories similar to Daniel’s. After attending the 1984 AADB meeting, they were so taken with Seattle--the people, the energy, the possibility of once again being part of a community, the job opportunities at the Lighthouse for the Blind--that they decided to move there.
3.3.3 Fear of Going Blind
In the early ‘80s, there was great resistance on the part of many DeafBlind people to tactile modes of communication, since these were associated with blindness and blindness was feared. Communicating with other DeafBlind people sometimes required tactile communication, so this was avoided. Joey, a Deaf communication specialist working at the Lighthouse in the early 1980s, recalled:
Some DeafBlind people were very resistant to the idea that they were blind. They were always saying that they were only “a little bit blind,” and they insisted that they were Deaf. They wanted to keep communicating the way they did when they were sighted, which was fine, but as soon as they were put in a position to communicate directly with another DeafBlind person, they didn’t want anything to do with it. They just really had a lot of resistance to changing the way they communicated.
This is consistent with what many DeafBlind people told me about their experiences. In the pre-ADA era, people were often informed of their inevitable blindness in a crude way, and were then given little information about their condition. These experiences led some to develop strong aversions to everything they associated with blindness, including tactile communication. They came from Deaf sighted environments where visuality was highly valued and blindness was highly stigmatized. Kathryn explains that DeafBlind people in her Deaf school were picked on and, in her case, even beaten up.
When I was a senior at the Deaf school I was on the volleyball team. I was a star player. I was chosen by the school to join the team. I was very involved, and things were going along OK. Then one game, we were playing against another Deaf school, and it was a really close game. We were neck and neck--they would gain the lead, then we would come back, and toward the end of the game, it was a tie. The ball came over the net, and somehow, my mind couldn’t understand what I was seeing and it went right over my head. Their team won. So I was disappointed, but I had to accept that we had lost. Then, once we were off the court, a player from our team came up to me and said she didn’t like to lose, and then she beat me up. She did it because I couldn’t see the ball, and so I contributed to our team losing. That was a terrible day that I will never forget.
Events like this continued happening until Kathryn’s parents decided she should see an eye doctor. She describes, like Daniel, the crude way in which she was informed of her impending blindness by the doctor, and the effect it had on her:
I went in for all day testing. I didn’t like it at all. No interpreter was provided. The ADA hadn’t been established yet at that time, in 1977. [ ...] There was no law that said you had to provide an interpreter.
So I spent the whole time tapping people on the shoulder and asking them, “What did you say? What did you say?” My parents and the doctors were all standing there discussing the situation. My parents said they would tell me later. I had very limited knowledge about Usher Syndrome. The doctor said, “You. One day you will be blind.” I was shocked. I didn’t understand why he thought I would become blind when I was older. I thought to myself, “I can’t accept blindness.” I had already grown up sighted for 19 years, experiencing the world that way. So when I found out I had Ushers, I just couldn’t accept it. And the way the doctor told me in no uncertain terms, “You will be blind one day.” [ . . . ] If only that doctor had described these things to me properly. If only he had had a good attitude, brought in an interpreter, and explained in a reasonable way that I should go to Braille school. Maybe I could have accepted it if that had been how I found out. But that doctor had a really bad attitude. He was cocky and he thought he knew everything. That hurt me a lot. It changed my life. Before I met with that doctor, I was talkative, social, but after that, I became very reserved.
The shock of finding out that she would be blind was compounded by the fact that Kathryn had already overcome other major obstacles to make her way into the visual world of Deaf, sighted people. Kathryn had no Deaf siblings and was subjected to years of oral “education” where ASL and even gesture were not allowed.
If a child gestured, they would be punished. The teacher would smack their hand. You really weren’t allowed to use your hands for any kind of communication. I rebelled in that environment, because I really couldn’t understand speech. I can’t hear at all [ . . . ]. Later, my family moved out of that neighborhood, north, and I was transferred to a different school. Unfortunately, it was the same situation. ASL and gesture were both forbidden. The only improvement was that they policed the use of gesture a little less, and they didn’t really hit our hands if we did try to gesture to one another. Nevertheless, it was an oral program run by people who believed strongly in teaching deaf children to speak.
Eventually, Kathryn met a girl who attended the residential school for the Deaf and she decided to visit her there. Shortly thereafter she transferred into the Deaf school and found her life greatly improved.
At that school you could be involved in drama, in sports, in all sorts of activities. There didn’t seem to be any limitation. With hearing students, what you could do was very limited. There were no ways to provide those kinds of opportunities because of the communication barriers. Hearing students didn’t understand me, and I didn’t understand them.
Kathryn had finally found a social setting where she could communicate and therefore participate, only to find out that she would become blind. Like Daniel, she couldn’t imagine what being DeafBlind would be like.
[The idea of blindness] scared me to death. I thought, ‘I’ll be blind and deaf. That means I won’t be able to see or hear’. I thought that meant I would be utterly helpless, unable to function. I had no idea how a person could live like that. There were no services, no support [ . . . ]. I wondered what my life would be like in 20-30 years. I didn’t think about technology. I didn’t think about computers. They came later. I couldn’t imagine at all how DeafBlind people could communicate. I just asked myself how?? I had so many questions, and no answers. It felt like no one was helping me.
Kathryn went on to attend Gallaudet University, where she occasionally encountered DeafBlind people. By that time, so much stigma and fear had been bound up with the idea of blindness that she saw DeafBlind people as a threat.
One day I saw some fully blind DeafBlind people communicating tactually, and I was taken aback. I felt like if I touched someone like that, I would suddenly lose all of my vision. I didn’t want that, so how was I supposed to communicate with them? So I avoided DeafBlind people.
It wasn’t until she was living in Seattle that collective norms required her to face her fear of tactile communication. Nevertheless, there was an important line that she still would not cross. Although she learned to communicate with people who use tactile reception, going tactile herself remained unimaginable.
I had to accept touch. I had to learn how to interact with and communicate with tactile people. But it was all one-way. They would use tactile reception, but I wouldn’t. I hadn’t practiced, so I didn’t know how. Really that doctor [ . . . ] ruined it for me. That experience was so traumatic, that even after 33 years, it’s still hard to get over it.
Kathryn summed up her fear of going tactile as a symptom of her “denial.” She found the thought of going blind so terrifying that she never accepted the fact that it was happening. Moving to Seattle was a sort of compromise. The supports that were in place in Seattle on the one hand forced her to accept a DeafBlind “identity.” Receiving services required this. On the other hand, these supports allowed her to continue compensating for vision loss, thereby maintaining a fundamentally visual orientation to the world, as opposed to transitioning to a more tactile way of life.
After I moved here, I wouldn’t say I made wonderful progress. You really have to understand yourself. I knew I needed to know who I really was as a DeafBlind person. I had to accept that. So between then and now, I’ve been doing better, but there are still some things that I haven’t faced. For instance, I should be using a cane all the time, every day, but I don’t. When I look outside, and notice that it is a bright day, I think, ‘I don’t need a cane! I’ll be fine!’ Tactile reception is another example. I don’t need tactile reception. I can still see what people are saying when they sign through my tunnel of vision. So that’s what I mean by ‘denial.’ Really denial means that I haven’t gone for it, and learned tactile reception. I feel that I don’t need it. Therefore, I’m in denial. I mean, I understand the concept of tactile reception, but I don’t practice, and I’m not skilled at it.
This combination of claiming one’s need for tactile communication while simultaneously recognizing one’s denial of that need is a common theme. For many people in the earlier groups this discourse makes perfect sense. Going blind is terrifying and there really isn’t any way to change that. When the time comes, at best you can “go for it” and at worst, you can “give up,” but there is nothing appealing about going tactile whether you are in Seattle or not.
3.4 Visual American Sign Language is Established as the Primary Language of the Community
Diversity in language and communication backgrounds, coupled with the effects of stigmas around tactility, led to a complicated sociolinguistic situation at the Lighthouse in the early stages of the DeafBlind program. Even before the post-AADB influx in the mid-1980s, there was already an effort to improve communication between DeafBlind employees. However, as the numbers grew, the problems became urgent. For members of the Deaf community, and those who studied their history and their language, these problems were familiar. DeafBlind people who ended up at the Lighthouse had, after all, grown up as deaf children. Deaf education systems (and lack thereof) have produced a wide variety of communication styles and capacities in the broader American Deaf community as various fads and trends have come and gone.
Some Deaf people have Deaf parents, but most have hearing parents. Among hearing parents, some learn ASL, some learn cued speech, some develop “home sign” systems with their children, and some learn Signed Exact English (an invented code which haphazardly attempts to represent the morphology of English visually). Finally, some Deaf people have been educated orally, which often amounts to a denial of access to visual language and natural social environments, as it did in Kathryn’s case. Given no access to visual language, most fail to develop a native command of either ASL or English. While they are still able to communicate, opportunities for higher education are often very limited.
As an effect of this history, most Deaf people who are members of an established Deaf community in the United States will be familiar with a wide range of types of d/Deaf people. Some members of the Deaf community, due to their particular biography, their skills, and/or their training, act as translators within the community. For example, a person who grew up with parents who had acquired ASL late in life might develop skills for mediating between their parents and the more fluent Deaf users of ASL in their community. In recent years, this role of the Deaf Interpreter (DI) has become professionalized. Today, DIs often act as a second relay in official situations where accurate communication is both very difficult and very important. For example, if a deaf person who doesn’t have a standard language is arrested for a serious crime, the court proceedings need to be clear to that person. A standard hearing interpreter is trained to interpret between two languages--ASL and English--not between English and gestural communications that are shared by a very small group of users (such as the person’s family or the person and their sibling). In a case like this, a DI would be hired to mediate between the hearing interpreter and the deaf person on trial. Although the role of the Deaf interpreter is not new within Deaf communities, the professionalization and recognition of its importance in official settings is.
In the 1980s, when DeafBlind people started moving to Seattle to take positions working at the Lighthouse for the Blind, this process of professionalization was just starting in Seattle. As the sociolinguistic situation grew more complex, it became clear that a “communication specialist” would be needed. Joey was one of the first people to be hired in this capacity. In an interview, he described his own communication background and explained how it qualified him for the job.
It was the height of oralism in the ‘70s, and signing was banned in most schools. I have a Deaf brother. I’m the youngest, and he is the third, of five. Also, our oldest brother is deaf, but not culturally. He’s kind of ... hard of hearing, I guess you could say. But not really. The other two kids in the family are my sisters. The younger of the two signs now, but in our family growing up, no one signed. My brother and I sort of “talked” to each other, doing the oral thing, but we really communicated using our own home-made signs. But at the school I went to, there were always Deaf students who signed. Maybe they were kicked out of the Deaf school, or their families were Deaf. Or their families moved from other places--there were military kids in the school, because [the school] is near an airforce base, so there were a lot of kids from families who signed. In the classroom, everyone sat on their hands and acted like good oral kids, but as soon as we were out of the classroom, we couldn’t get enough of signing--that was where the real social stuff happened, and where we all learned ASL. For me, it started out as a kind of combination between the home signs me and my brother developed and then the exposure that I got from kids on the playground. Being deaf, I had a natural inclination for learning ASL, so it happened fast.
These experiences, in addition to his general curiosity about and openness to communicating with a wide variety of people, led him to cultivate the skill of mediation. In 1980 he was hired as a communication specialist at the Lighthouse for the Blind in Seattle. According to his memory, there were about 10 DeafBlind people working there at the time. I asked him why he was chosen for the position and he said:
I was qualified for that job because of my skills with language. I can communicate with a wide range of people with a wide range of communication backgrounds. I can do everything from real big ASL to snobby small signing, to Pidgin Signed English, to Signed Exact English. I can do it all. I have a lot of experience with communication, and I have a certain ability with it, too.
Although Joey didn’t have any experience working specifically with DeafBlind people, what he found when he started working at the Lighthouse was familiar to him. The expertise he brought with him from the Deaf community seemed perfectly applicable. In the Deaf community, as Joey noted in our interview, the solution to communication problems is, very simply, American Sign Language. With DeafBlind people, he said, there was an additional issue with “communication technique.” Nevertheless, Joey and the others he worked with figured that ASL as a common language would be a first step in the right direction, so Joey started teaching ASL classes to DeafBlind employees at the Lighthouse.
[They] had really mixed backgrounds. Some of them had limited exposure to language in general, or they used a different sign system. It was just like deaf people who were not blind. Many of them came from hearing families, so they had really weak foundations in their language development. So when they met me, and I could communicate with them clearly, they wanted to learn how I did that. I did that by using ASL, so that’s how teaching ASL classes came about. It wasn’t “about” ASL. It was about improving communication skills. The means was ASL. That’s how we put it. I remember one hot issue at the time, and maybe it still is now, was direct communication between two DeafBlind people. Oftentimes, if DeafBlind people communicated directly with one another there would be all kinds of misunderstandings that would lead to accusations and fighting. So as a communication specialist, I would often intervene in situations like this. I would ask each person, one at a time, what happened, and then I would explain to them what had gone wrong.
When I asked him why ASL didn’t solve the problem the way it would have in a Deaf, sighted environment, he explained:
Communication really was limited at that time. There were a lot of misunderstandings when DeafBlind people communicated directly. Now, I think that’s still the case, but back then it was even more the case. It wasn’t only that some people used ASL and some people used Pidgin Signed English and so forth--sometimes that was the problem, but also people had different degrees of blindness. Some people used tactile reception, and some didn’t. So they were incompatible in that way too. It was hard to find a common language and mode of communication that two DeafBlind people could use. So just like with hearing people, when they start to get involved with the community, you have to explain the different kinds of vision loss that people have and how it affects communication: Ushers, tunnel vision, people who need to stand far apart from each other, people who need tactile manual communication, the people who have unclear vision, so you have to sign up close with them . . . DeafBlind people had to learn that stuff, too. When the conversation would start to be frustrating for them, you would have to intervene and explain--“that person can’t see you.” They have to use tactile reception, so you have to sign tactually to them. Or maybe one person doesn’t really have much exposure to English and the other one is throwing big English words at them, and they start calling each other names. So there was language background and then there was also communication technique.
Deaf people like Joey and students of ASL and interpreting were the ones in an institutional position to affect communication conventions. Given their knowledge of Deaf history, Visual American Sign Language was offered as a solution to communication barriers. As I will discuss in Chapter 4, the pro-tactile movement is scaffolded in many ways on Deaf understandings of community, power relations, and the relationship of both to language and communication. It is unlikely that the pro-tactile movement would have emerged in a DeafBlind community where spoken English was the primary language; this development was therefore an important first step toward a pro-tactile future.
However, when Visual ASL was introduced among DeafBlind people, there were problems that did not arise among Deaf sighted people. As Joey explained, people have differential perceptual access to the sign vehicle and to one another. DeafBlind people came to Seattle already frustrated by communication barriers, so when they encountered other DeafBlind people who were even more difficult to communicate with than sighted people, this was a level of frustration most could not endure. In the beginning, there were too few DeafBlind people to break off into smaller groups with similar language background. Therefore, sighted people had to intervene. However, mediated communication was limited, in the beginning, to one-on-one configurations and DeafBlind people had no way of meeting in groups at all. Therefore, one of the next goals was to find a way of making group communication feasible.
3.4.1 Toward Group Communication
Prior to the large influx of DeafBlind employees at the Lighthouse, there was less of a problem, simply because there was less interaction between DeafBlind people. An annual picnic, hosted by a DeafBlind employee, was one of the only social events that was recalled in interviews. As time went on, DeafBlind people started organizing social gatherings more often. Several people I interviewed remembered a Halloween party, held in the apartments owned by the Lighthouse. A small group attended, including DeafBlind people, Deaf people, and sighted people. Visual ASL was the common language, and people “did what came naturally” to communicate. There were no official interpreters working, and at least some of the people present thought about guiding and relaying information as part of “hosting.” One person explained, “If someone looked lost, someone else would help them find what or who they were looking for.” Since blindness was so stigmatized elsewhere, a willingness to do simple things like this was unusual. It was also very different from what some of the DeafBlind people had anticipated for their futures, for example, those who had been told that they would have to switch to fingerspelling when they went blind. So, as one sighted participant explained, “There was a lot of excitement. What had been impossible was suddenly possible, and everyone was really excited about it.” In these early gatherings, people communicated one-on-one, adjusting to one another as needed.
Around this time, a class was held as part of a research project being done by a graduate student in psychology. This provided an opportunity to experiment with interpreting strategies for group communication. Although it was awkward and difficult, group communication was popular and people were optimistic that strategies could be improved. Over the next several decades, interpreting practices in Seattle became increasingly sophisticated, streamlined, and effective. These practices made social and political organization possible via meetings of DeafBlind advocacy organizations, like Washington State DeafBlind Citizens (WSDBC), “task force” meetings, which were organized periodically to address economic, social, and political problems, and a bi-weekly meeting that has become a mainstay of the DeafBlind community, known as “DeafBlind class.” DeafBlind class is, to this day, a highly valued venue for DeafBlind people to come together and exchange news, learn about legal, medical, and social developments in society that affect them, and socialize. It is also an important opportunity for interpreting students to improve their skills and to be mentored by more advanced interpreters. By the time I came into the community as an interpreting student in the 1990s, they had mediated communication down to a science. I was part of a small army of volunteers who would go to Seattle Central every two weeks to interpret at DeafBlind class. It often took me the first part of class to figure out where I fit into the overall network of relays (they are exceedingly complex), and yet it all seemed to work and was surprisingly efficient. In the early days of group communication, this was not the case. An interpreter who was new at the time told me that
one of the most memorable problems was turn-taking--DeafBlind people didn’t understand how to do it, and interpreters too. Interpreters were there for short periods of time [as students], then they moved away, or whatever, so people would learn, but then there were new people who didn’t know yet, and there were so many confusions. Someone would say something, and the person would be confused about why THAT person (the interpreter) would be saying that thing. And the interpreter would try to explain--“It’s not ME. It’s [Robert] saying that. I’m just interpreting what he’s saying,” and it was really a challenge.
This was a common problem. People would mistake the interpreter for the signer, and communication would go circular:
Ronald stood up in front of everyone, and signed READY? to his interpreter, and [Rose] voiced it. Then his interpreter signed what Rose said back to Ronald, instead of YES, and it just went on like that in a potentially endless loop. Until finally Rose said, “DO NOT SIGN READY! SIGN YES!” [Laughs]. We could still be there if Rose hadn’t said something.
As was discussed in the previous section, in order for DeafBlind people to communicate with one another at all, and especially in groups, sighted interpreters were necessary. However, the use of sighted interpreters prevented a tactile field of engagement from emerging. Instead, a visual field of engagement was maintained, as were the structures of Visual ASL. Expertise regarding communication accrued to sighted social and professional roles, and this distribution of expertise was reinforced by the institutional structure of the Lighthouse and other organizations serving blind people. While these asymmetries were established, mediated group communication was essential, and it led to political recognition of the Seattle DeafBlind community and the establishment of the DeafBlind Service Center.
3.5 Political Organization in the 1980s and the Inception of DBSC
By the mid 1980s, Seattle had drawn national attention as a place where something hopeful was happening for DeafBlind people. Jobs and communication resources were almost impossible to find elsewhere. Each year, there was a new influx of DeafBlind people who had come to work at the Lighthouse, and the community grew rapidly. Over time, communication appeared as only one of many problems. The Lighthouse worked with other organizations to provide services to DeafBlind people, but the coordination and provision of services was extremely complicated and therefore, largely inaccessible. Leah, the manager of the DeafBlind program at the time, said that when DeafBlind people actually did figure out where to go for services, something was almost always lacking. Either the organization in question knew how to address vision loss, but didn’t understand about ASL and interpreters, or it was the other way around. In order to address these problems, a task force was established, which included representatives from several of these agencies, including the Department of Services for the Blind (DSB), the Department of Vocational Rehabilitation (DVR), the Helen Keller National Center (HKNC), and the Division of Developmental Disabilities (DDD). The director of DSB at the time, whom I will call Al, suggested that some research needed to be done about the gaps in services. Leah explained:
[O]ne of the key things we did was put together a matrix. It was done by hand, because it was before computers.(9) It was a grid sheet--we had services and organizations, one on each axis--and put an X where there were services, and no X where there were no services. That became a tool for us to make our case.
Some services were not only inaccessible, they were nonexistent. The problems that DeafBlind people faced, and the services needed to address them, were often not a product of adding Deaf issues to blind issues. They were unique. One of these things was the use of visual interpreters for running errands such as grocery shopping. DVR paid for these services for a while, since, according to Leah, “You need to buy groceries to eat, so you can go to work, but,” she said, “that was kind of a stretch.” So the term “Support Service Provider” (SSP) was introduced(10) to describe this specialized service that could not be provided elsewhere.
SSP services were beyond the scope of what any of the existing organizations could take on, including the Lighthouse. They needed a separate organization, with separate funding for this. The heads of the state agencies all recognized the problem. Al, the director of DSB at the time, said in an interview, “By the time I saw the needs assessment, [Seattle] was a place of choice for DeafBlind people. Large numbers, proportionately, so it created a real challenge for metro, DDD, VR, DSB. We had a real problem.” The solution that was agreed on was to establish a separate non-profit organization that would provide the services other agencies couldn’t. This organization would become the DeafBlind Service Center (DBSC). Early on, Al said, the idea was to create an “embassy” for the state agencies.
This metaphor hasn’t stuck, but that was how I characterized it at the time. Think about immigrant communities. It was like that. We had a community within our state that was a linguistic and cultural minority, and there were real barriers to finding them, to communicating with them, and to serving them. For that, we (the state agencies) needed an embassy. That way we could sort out the confusion of where people should go for the services they needed. That way we would be able to more effectively serve them, and make it less confusing for us, while also making it easier for them. So that was the pitch. If it’s just for them, that’s not the most convincing argument. It needs to also benefit the system--it needs to help us do our job, too. I don’t remember any big difficulty or battles about that. Also, it was a way to show we were responding to that list of needs that [Leah] had presented us with.
In addition to referring DeafBlind people to the right place within state agencies, DBSC was supposed to provide any services that were not offered elsewhere. Support service providers and accessible advocacy services were two of the things that were glaring needs at the time (and remain so today). The task force participants all agreed that something like DBSC was needed, so they arranged for ancillary services to be provided through a “joint operating agreement.” However, something more permanent still needed to be established, which, as Al said, “was not subject to the whim of whoever happened to be directing the three agencies”(11).
According to Leah, everyone thought it was a great idea, but no one was jumping out of their seat to pay for it. So the aim at this point was to convince the governor’s office to introduce a bill that would secure funds for DBSC. This was one of the earliest organized political efforts in the DeafBlind community. In order to achieve their goal, political representatives had to become aware of the need for SSPs among DeafBlind people, and for that, they would need to become aware of the growing DeafBlind community.
Toward this end, groups of DeafBlind people and sighted advocates and interpreters started making regular trips to talk to individual senators and representatives at the Capitol. One sighted interpreter had a VW bus that everyone would pile into to go down to the Capitol for the day. They planned their appearances strategically, showing up, for example, during the lunch hour on days when important meetings were happening. There were sleepovers the night before, where people would practice their speeches repeatedly, until they were concise and flawless. Real relationships were growing and sighted volunteers, according to both DeafBlind and sighted people, were abundant(12).
These efforts resulted in getting the legislature to force the relevant state agencies to put a proviso in the budget, which meant that funds would be secured regardless of who happened to be the director of the obligated agency. There are several stories that people have told me about the specific moment when DeafBlind people achieved political recognition at the Capitol. One is about Dan Mansfield, who was one of the first DeafBlind leaders in Seattle. He was one of three DeafBlind siblings, all of whom had Ushers. Dan grew up at the residential school for the Deaf. Although I have never met him, by the time I came into the community in the 1990s, Dan had become a legend. He was known for his charm, his good looks, and his political competence. Many people credit him with the moment of political recognition. The following version was relayed by Adrijana, a current DeafBlind leader in Seattle:
[H]e went to the capitol, and you know he was charming. He walked up to the congressional committee, who were all seated at their raised table, and he told the interpreter he brought not to say anything. He stood in front of them, and pulled out a stack of cards. On each card was one letter. He proceeded to show them one letter at a time, I A-M etc. And then he slipped, and all the cards fell on the floor. Everyone scurried around trying to pick them up. It was embarrassing and uncomfortable for everyone, not to mention a frustrating communication experience. He got up, and tried, with many mistakes to spell something (the cards were now out of order). Then he calmly told the interpreter to start interpreting for him. All he said was,“We need interpreters.” And we have had funding for interpreters ever since...
During this time, DeafBlind people were a persistent presence on the Capitol campus, and it is likely that many moments like this had a cumulative effect. For example, Al told me a similar story about a moment of political recognition:
Jim McDermott was chairman of the Ways and Means committee. It was really hard to get a meeting with him, and I remember the DeafBlind folks were down that day. We had come to his building--his office was in a suite. There was a waiting room and a conference room. And he had an office in the back on the ground floor of this building. Dan Mansfield and 4 or 5 people were standing ... in the hallway outside of his door. He was leaving his office, about to go out to the capitol. He was so hard to meet with that typically people would ambush him--“Senator, can I walk with you?” Every once in a while, he would see someone he wanted to talk to, and he would walk with them, but most of the time, [he would bolt]. So he stepped out and he glanced down the hall, and he saw several DeafBlind people talking to each other and the interpreters. And he stopped and stared for about a minute watching their communication. I observed this, and I thought, ‘holy hell. He never exposes himself to everyone like that.’ And I thought, ‘they got him. He is seeing what the challenge of communication is--in one respect anyway--and they’ve got his attention’.
According to Al, this kind of fascination played a role in the success of the activists. In a representative democracy, no one should care about this tiny group of people and what they are asking for, at least in theory. But Al said that for this senator, and for others, there was a “lost tribe” aspect to it. He said:
Here’s this thing that you didn’t know exists, and it exists. And DeafBlind people were saying they wanted to come into the fold. They weren’t trying to impress upon us their particularity or their specialness. They just wanted what everyone else wanted.
Seattle became even more appealing to DeafBlind people elsewhere once DBSC had been established. Their work and personal lives could be separated to a greater extent; they had a standard number of hours with a visual interpreter or “SSP” each month that they could count on. In addition, they had somewhere to go to sort out services in the larger system of state agencies. The community continued to attract new members.
A “DeafBlind identity” emerged during this time as something distinct from a Deaf or hearing identity. Many DeafBlind people told me during interviews that they had struggled for many years to accept it, but eventually came to accept themselves as “DeafBlind” after moving to Seattle. However, many of these same people were still using visual reception, despite very limited vision. They were still going to great lengths to avoid tactile communication. To them, being DeafBlind did not mean cultivating tactile sensibilities, using tactile communication, or becoming a tactile person. Stigmas around tactility among the earlier groups of DeafBlind people remained powerful.
3.5.1 New DeafBlind Perspectives in the 1990s and 2000s
For some DeafBlind people who moved to Seattle later, in the 1990s and 2000s, the negativity associated with tactility was surprising. Aversion to tactility seemed to come from attitudes and norms in Seattle at least as much as it did from their prior experiences outside of Seattle. For example, when Lee moved to Seattle in 2001, she noted that going tactile was very clearly
something negative that people gave in to. Something that would draw sympathy and looks of consoling understanding. Not something people went into with positive aspirations or enthusiasm.
In many of the interviews I conducted, narratives about going tactile were as Lee describes. For example, Susan said that one day she was at a staff meeting at the Lighthouse and she was watching an interpreter visually, as she usually did. At some point, someone said, “Susan? Are you going to answer?” And she realized that she had been missing what the person was saying. Before that, she thought she had been catching everything. To clarify, someone tried to communicate tactually with her, and she pulled away, asking what the person was doing. By this time, she was certain everyone was watching, and she was deeply embarrassed. Tactile communication wasn’t helpful for her, because she hadn’t developed the skill. Eventually, she did learn how to receive Visual ASL signs tactually, but this only led to more difficult encounters. She explained that often, DeafBlind people would say, “Susan? That’s you? Communicating tactually with me? Your eyes have gotten worse!” which was really upsetting. Susan said that going tactile was a necessary change, but overall, it was depressing. She said once she went tactile, she couldn’t participate in groups the same way.
For example, at the Lighthouse, there are two separate lunch groups. If you are still using tunnel vision to communicate, you can eat with the other tunnel vision people. Once you go tactile, though, you have to either switch to the tactile group, or be left out of conversations. Susan’s friends were all still in the tunnel vision group, but that was no longer a feasible communication situation for her, so she saw less and less of them. She also described a process of increasing dependence on interpreters, where the quality of her day, or a meeting she attended, or her level of interest in a person she was communicating with always depended to some extent on whether her interpreter was tired, whether they knew her preferences or not, and so on. She said, all in all, going tactile had been a negative experience for her. But, at a certain point, it became necessary, and she had to do it. This kind of story about giving in and going tactile, despite the many negative consequences associated with it, was a common theme among the DeafBlind people I interviewed.
3.5.2 Mainstreaming, Inclusion, and Mediation
For Lee, Adrijana, and others who moved to Seattle in the late 1990s and early 2000s, these stories were alarming. The problems they described were not attributable to vision loss, but rather to the aversion so many DeafBlind people had to tactility. After they had heard so much about the DeafBlind community in Seattle, the negativity toward tactility that they encountered upon arrival was both surprising and disappointing. The Deaf world that they came from was very different from the one Daniel and Kathryn came from. By the time they moved to Seattle, Lee and Adrijana had spent years linked in to constant streams of information via the internet, email, text messaging, text relay, video relay, and captioned TV. Seattle had become an established phenomenon, and they knew that it was a viable option long before social isolation would have become a problem. “Deaf culture” was something they took for granted, and it was part of their common sense that ASL was a full-fledged language. If Deaf people had a world of their own organized along visual lines, complete with everything any human could want, why couldn’t the same be true for DeafBlind people?
For example, Adrijana describes her impressions in the late 90s just after moving to Seattle. She said that she and others who moved there around the same time wanted to get away from being so dependent on interpreters.
I started feeling that way not long after moving here in 1997. I had a lot more vision at that time, but it didn’t matter. I didn’t like the environment. For example, at Seabeck. There was no one to talk to! Everyone was busy chatting with their SSPs. I started to feel like, ‘Who am I? Why did I even move here to Seattle? I’m from a Deaf world where communication is direct and unmediated. Now everything seems wrong.’ Like I took a step backwards into a hearing environment. Later, though, new people were moving here who were more my age [...] and Seabeck started to change a little. People in our group, with our communication system, in our world--we started communicating with one another, rather than always going through an SSP.
Adrijana, like Kathryn, had spent many years in hearing environments where she had limited opportunities to engage with her peers and otherwise participate in collective life. It wasn’t until she went to college at the Rochester Institute of Technology and the National Technical Institute for the Deaf that she could fully participate. Not long after, though, her vision got worse, and she no longer found Deaf environments welcoming. She was having difficulty with her job working as a biologist in a lab. She started looking for jobs that did not require vision, and found one at the Seattle DeafBlind Service Center. She expected to find a place where she could communicate tactually with other DeafBlind people in the unrestricted, unmediated way that had previously characterized Deaf environments for her.
Instead, she found that communication was perpetually mediated by sighted people. In this sense, it was like being a Deaf person in a hearing environment, participating through the use of an interpreter. Adrijana had had enough of that. She wanted a place where interaction felt natural and unmediated. She didn’t think there was anything inherent about being DeafBlind that would prevent that, but in Seattle there was too much resistance to tactility to make it a reality. She found that people preferred to use an interpreter and go on using visual communication practices rather than go tactile and have unmediated exchange. In some ways, this appeared to Adrijana like a deaf oralist stance--deaf people who would rather appear to be speaking and hearing (meanwhile working hard to compensate for what they miss) than have a genuine, easy interaction in a visual language.
3.5.3 The Crystallization of Anti-Tactile Forces
By the early 2000s, anti-tactile forces had become reified in the organization of the social field. One of the most obvious manifestations of this was a hard separation between sighted and blind social roles. In order to occupy a sighted role, you had to be able to communicate (or appear to be communicating) in a visual modality. If you were no longer able to do this, you were forced to occupy a blind social role. Therefore, DeafBlind people sharpened their skills of inference and performance, trying to appear sighted for as long as possible. Going tactile meant going blind and going blind meant extreme marginalization, even and especially in the community that was once a refuge and source of hope.
Susan, for example, could no longer convincingly occupy sighted social roles, and was therefore alienated from her friends, was more dependent on interpreters, and was less able to access stable and reliable sources of information. She was more isolated and experienced a significant decrease in the quality of her life. In these ways, the occupation of sighted social roles was restricted to those who could pass for sighted, given the necessary accommodations. When no amount of accommodation would suffice, there was no choice but to become blind.
On the other hand, the occupation of blind social roles was also restricted. Lee moved to Seattle in 2001, thinking that she would go tactile upon arrival as a first step in a series of changes that would lead her into a more tactile way of orienting to the world. However, because she still had quite a bit of vision, she encountered a lot of resistance from other DeafBlind people. From one perspective, individuals were resistant to going tactile because of their fear of going blind, which was a response to historical and personal circumstances outside and prior to the Seattle DeafBlind community. However, within the community, these dynamics took on a life of their own, generating increasingly rigid boundaries. One could not just declare that they were DeafBlind and be considered DeafBlind. There were practices through which this position had to be taken up--some related to language and communication and some not. Lee explained:
I moved here and immediately started calling myself DeafBlind, but people said I couldn’t do that because first, I was still driving. Second, I didn’t use tactile reception, and third, I didn’t use a cane. It was firmly established that until my status changed regarding these three things, I had to wait.
Lee is gay. She thought of these things like “coming out” and saw no reason to put them off. The faster you come out, the faster you are integrated into a world that will support you, rather than remaining in a world that seeks to limit and exclude you. It was the same thing for her from a Deaf perspective--being a part of the Deaf community means embracing a visual way of life, which includes using and valuing Visual ASL and visual communication practices. The sooner you stop trying to approximate hearing ways of doing things, the sooner you find a way of being with others that feels natural and easy. When DeafBlind people stated the requirements for establishing a DeafBlind identity, Lee understood them in these terms. She took their claims seriously, learned to use a cane, learned to use tactile reception, and stopped driving. But to her surprise, she caught a lot of flak every step of the way.
One DeafBlind person really picked on me early on, right after I moved here, saying I was over-eager “like a puppy” and so on--taking any opportunity to insult me. I went ahead in any case--first with the cane. That same person was really dismissive of my decision to start using a cane. Second, I quit driving, and people sort of patronizingly congratulated me on “finally” quitting. Third, I started using tactile reception. People were really discouraging about that one, like, ‘Why are you going to do that? You should wait. I haven’t gone tactile yet.’
Lee went ahead, though, because like Adrijana, she saw how people who didn’t go tactile missed more and more of what was going on around them, and saw that it was more and more difficult for them to learn to communicate tactually. Her decision seemed like the right one on many occasions. She said she often ended up interpreting for people who were still “tunnel vision” people because via tactile communication, she could follow what was going on and they couldn’t. Lee said that tunnel vision people relied more and more on idiosyncratic rules and became very demanding of the people around them. She explained that on one occasion, a tunnel vision person she was with was complaining that people weren’t following all of the many ridiculous rules that you have to follow to make visual communication with her possible. She put it in terms of “respect.” She said people weren’t respecting her. They shouldn’t walk quickly by--it’s confusing. They should stand at the right distance, they should sign slowly ... As Lee put it, “It’s not reasonable to expect people to do that, and they don’t. So the result is that she’s left out, and is getting more and more frustrated as time goes by. I knew that by going tactile early, I would never have that problem.”
Lee experienced resistance to going tactile primarily in her interactions with other DeafBlind people, but she saw their perspectives as being shaped both by history and by the current configuration of social roles in the community, which included sighted people. She said that the middle group came into the community as “hip, cool 30- and 40-somethings.” In contrast to the people who were already older when they moved to Seattle, they, as a group, had more education (most had attended college if not graduated), they had more leadership experience, they had been part of Deaf organizations like Deaf fraternities and sororities and they were used to “being in the public eye.” The older group, she said,
was more used to a world made up of Deaf people. They almost exclusively went to residential schools for the Deaf. They were not college educated. They had worked in manufacturing or other working class jobs for many years, and when they moved to Seattle and got jobs at the Lighthouse, they went on doing the kind of work they had been doing all along. And it was a large group, so they supported one another a lot. [...] The younger group is more used to a mainstream kind of experience. Not just in school, but in life. They’ve already had the experience of working in a hearing company before. They’ve had romantic relationships with hearing people, they have hearing friends, they live in a hearing area, they participate in hearing events and the hearing world in general. They still value Deaf and DeafBlind people, but they have a range of experience. So the two groups are really different. The younger group is more concerned with current mainstream trends, so they’re more likely to resist tactile communication practices, or the use of a cane, that would mark them as different from the mainstream [...]. Maybe if mainstreaming never happened, then we wouldn’t have this problem, and people would embrace tactile signing. I don’t really know, but that’s my guess.
Lee speculated that when the more “mainstreamed” people arrived in the community, they were given the impression that they weren’t the same as the older group, but that
[t]hey were somehow better--had more potential, and they would be leaders. So they had a stake in distinguishing themselves from that older group, and even though they themselves were getting older, they didn’t adjust, because adjusting would have meant becoming the thing they were valued in opposition to. [...]
The evaluative perspective that gave rise to these hesitations was primarily, according to Lee, a normative, sighted one, but the boundary it created between tactile people and tunnel vision people was adopted and policed by DeafBlind people. It was then reproduced in many domains of social activity. For example, the way interpreters, as a resource, were distributed perpetuated the asymmetry between tactile and tunnel vision people. Samantha, a sighted interpreter who is also an interpreter coordinator, explained:
There’s not a lot of support for people who are going through vision change. And I think [it’s] because of that power dynamic that’s set up. If I have vision I get to watch Harli Johnson(13). He has amazing language. If I don’t have vision, [...] I’m going to get sometimes a student and sometimes an interpreter who’s OK--unless I say that I really don’t like to work with that person. But how many times can I [as a DeafBlind person] say that before somebody says, ‘Well they’re really hard to work with’. And then what I really want to do is participate and be involved in this community that functions because we have interpreters in this setting [laughs exasperatedly].
There are conventions for organizing meetings like the one Samantha is talking about. A person who is presenting will be on the stage. There is also a platform interpreter who copies the questions and comments coming from the audience, as well as providing some visual information. This person is often one of several Deaf interpreters with years of experience, skill, and appeal. If you are a tunnel vision person (a blind person occupying a sighted social role), you are more likely to work with the platform interpreters. However, if you are a tactile person (a blind person occupying a blind social role), you are more likely to work with someone who is not experienced, and is hearing, and therefore does not have a fluent, let alone native, command of ASL. This is a further incentive for remaining a member of the tunnel vision crowd for as long as possible.
In addition, if a person is part of a group that is using one platform interpreter, this is less expensive (either in terms of volunteer resources or money) than providing two tactile interpreters for every individual. Although sighted people do not actively discourage requests for tactile interpreters, DeafBlind people are careful about asking. They feel the pressure of the interpreter shortages and until they are really incapable of using visual accommodations, they feel that they should continue trying. When encouraged to start working with tactile interpreters, they reportedly say things like, “I can’t ask for that,” “I don’t want to rock the boat,” or “I don’t know if I want to be tactile.”
In these and other domains of social activity, blind and sighted social roles have become increasingly contrastive and asymmetrical. The former has accrued less authority, potential, and value. Until recently, using VASL meant taking up a sighted social role; therefore, greater legitimacy and worth accrued to VASL and visual communication practices. Distinguishing oneself from the tactile people became more important than the actual communication practices from which the social categories derive. At a more fundamental level, the field reproduced by these position-takings was primarily organized visually. This meant that DeafBlind people were either modifying visual communication practices to access visual fields of engagement or using tactile communication practices to access visual fields of engagement. The further the mode of reception drifted from visual modes of orientation and representation, the further the person drifted from direct access to what was going on. They relied more and more on descriptions of the visual details of ordinary life that interpreters might or might not be able to capture.
There was no tactile field of engagement. There were only tactile forms of compensation that would allow access to visual fields of engagement. Therefore, the bridge linking individual experience to collective experience grew longer and more difficult to cross as one adopted tactile modes of communication. The asymmetry in the social field was self-perpetuating. The benefit of living in Seattle was that there were people there who understood what Usher Syndrome was and who were actively trying to help DeafBlind people go on occupying familiar sighted social positions as long as possible. But eventually, the same problems people came to Seattle with happened all over again--group interaction was avoided, inference capacities were pushed to the breaking point, dark restaurants and bars became uninhabitable. In short, social isolation threatened to encroach again. The new life that seemed so promising upon arrival in Seattle became less and less so with time. It was against this background that the pro-tactile movement emerged.
Chapter 4
The Pro-Tactile Movement
Since the 1990s, communication practices have become conventionalized, social and professional roles have become clearly defined, and bridges between the community and the larger society in which it exists have continued to be established. For the first time in Seattle’s history, a DeafBlind woman was hired as the director of the DeafBlind Service Center. The local transit authority, the airport, the public library, and other organizations have begun to work with agencies that serve DeafBlind people to make the city more accessible. The American Association of the DeafBlind, a national advocacy organization, has made progress toward the incorporation of specialized, DeafBlind interpreting services into the Americans with Disabilities Act. All of this is evidence that “DeafBlind” as a political category has continued to gain crucial recognition at the local and national levels--not as a combination of “Deaf” and “blind,” but as its own political position from which DeafBlind individuals and organizations can make specific and relevant claims for access to resources. Meanwhile, the community has grown larger and more diverse, and significant internal divisions have begun to form. These changes together have opened up more space for critical reflection, and attention has turned inward.
Between 2006 and 2010, DeafBlind people started to express dissatisfaction with what had become the status quo. The problems were numerous. There was a lack of DeafBlind leadership. There was an inexplicable separation between tactile people and tunnel vision people that was keeping the community from cohering as a whole. There was too much dependence on interpreters. Those who could pass as sighted had more access to power, and those who actually were sighted were still, largely, the ones making decisions. These concerns signaled a shift in focus. Political recognition from outside of the community, although it had been a precondition for the community’s existence, was no longer enough. DeafBlind people wanted to have more influence in decision-making processes that affected them within their community.
Beneath political struggle there were also problems and desires of a different nature. DeafBlind people started communicating with one another, and in doing so, they discovered shared longings. They wanted a world of their own, dense with particularity and potential. Momentary and sporadic access to the worlds of others would no longer suffice. There was a shared sense that somehow, over the years, particularities had been subsumed by types and examples. Three-dimensional scenes had been replaced by two-dimensional characterizations of scenes, and these scenes grew more and more difficult to inhabit. Co-presence had been replaced by representations of co-presence, causing loneliness and isolation to encroach, no matter how many people were around. It had been years since sensory experiences actually accrued to the shared networks of association that come with a living language. Now the language itself seemed better suited to faded visual recollections than to the world at hand. Soon, it would lose its capacity to refer to anything at all--even memories.
The situation was urgent, and this urgency pushed DeafBlind leaders into brand new territory; no one knew quite how to proceed. The pro-tactile movement began as a kind of exploration, looking for ways to solve the many problems that had been identified, and reinstate categories of experience that had grown inaccessible. Direct communication between DeafBlind people seemed like a good place to start, though the practices through which this aim would be realized were yet to be found. In what follows, I sketch a narrative line through some of the events and themes that defined the social field in which pro-tactile practices would be cultivated. Although there are broader historical frames that must be taken into account (see chapter 3), the inception of the pro-tactile movement as such can be located between 2006 and 2008 among the staff of the DeafBlind Service Center, and in particular, among three DeafBlind staff members--Adrijana, Lee, and Jodi.
4.1 “The Family Was Almost Dead”: Degradation of the visual habitus
Prior to Adrijana’s tenure as director, institutional positions of power were not occupied by tactile people. From the novel perspective of a tactile director, there were fundamental problems with DBSC as an organization that needed to be addressed. First, although DBSC provided crucial services, there was a sense that the organization was uninviting to the people and the community it served. As Adrijana put it:
The family was almost dead. It was like the Addams Family. No character, no spirit, no nothing. It was just a vacant, bureaucratic feeling.
This problem was operating on several levels. From a visual perspective, Adrijana’s sense that DBSC was inhabited by the living dead might have seemed odd or unexpected. However, when sensory orientation shifts slowly, as it does for people with Usher Syndrome, what counts as self-evident shifts with it. Eventually, a gap opens up between DeafBlind perspectives and dominant perspectives, sometimes causing serious problems (as was the case for DBSC and its relation to the people it aimed to serve). These problems were caused by the degradation of the visual habitus (see section 1.2.1 in chapter 1).
For example, in 2006, I conducted two months of fieldwork, during which time, I made a habit of people-watching with a DeafBlind woman named Helen. We went out in Seattle to places we might have gone anyway--a farmer’s market, a restaurant, the dog park--and I would describe what I saw, adjusting the focus of description as instructed. On one such outing, we were wandering around in Seattle’s Capitol Hill neighborhood, and we happened upon an art opening. The following is taken from my field notes written afterward.
I started with the hammers. Helen said not to bother, she wanted the feet. So we found a corner and started with the feet, which required attention to the legs. “The toe is planted and the heel is swiveling right to left and back again,” I say. “I don’t understand, show me,” Helen says.
So I plant my toes and swivel my right foot. Helen pats down my leg, while I continue. She makes it down to the toes and back up again, and then says she gets it. She imitates me and asks if that’s it. I confirm.
“Woman or man?” she asks.
“Woman.”
“Is she talking to a woman or a man?”
“Man.”
“Next.”
It turns out that that woman was not the only woman talking to a man and swiveling one of her feet back and forth, pivoting on the toes. There were others. Helen notes that when a woman flirts, she is likely to engage in this particular movement of the foot. I move to the right. Two men are next to a very large sculpture of gears. They are facing each other, feet anchored.
“They’re not moving their feet at all?” Helen asks.
“Nope.”
“Men or women?”
“Men.”
“What about the rest of their bodies? What are they doing?”
“Their hands are in their pockets, their heads are nodding, almost imperceptibly, and they’re looking at the floor. Every once in a while, they look at each other and then quickly back to the floor,” I say.
“They’re looking at the floor and their hands are in their pockets?” Helen asks.
“Yep.”
As we made our way around the room, it became clear that these men were not the only ones with their hands in their pockets. There were others. In fact, this was an almost entirely generalizable feature of the room. It was a room in which hands were pocketed.
“Feet anchored, eyes averted, hands in pockets.”
“Left foot anchored, right foot swiveling, hands in pockets.”
And it goes on like this, until Helen becomes concerned. She says, “What are they doing with their hands in their pockets? Isn’t this a party?” She hadn’t remembered that hearing people stand around with their hands in their pockets, since they’ve got their mouths and their eyes for talking and seeing and such. She said she must have known that before she was blind. We went over the room again, scouring for hands caught mid-activity, and there were almost no cases to report. She accused them of being devoid of feeling. She accused them of being cold. But after thinking about it longer, she said, “Those poor people! They have too many limbs! They don’t know what to do with them!”
For me, the pocketed hands, the averted eyes, and the swiveling feet all faded into the background as expectable features of an awkward social event. Helen, on the other hand, had been relying on interpreters to read social scenes for years, and this led to a deterioration of the visual habitus(1).
When interpreters used words like “party” and “art opening,” as I did, they prepared Helen for a place with particular characteristics. She expected to find certain types of people, dressed in a particular way, engaging in a certain type of interaction. Meanwhile, Helen’s perceptual schemes were shifting. While interpreters went on describing objects, scenes, and encounters in a visual field, she was filling in the details in ways they couldn’t have imagined. Interpreters were working within the limits of the language they were using, and that language contained forms with associated meanings. Meanings in any language are schematic and are only made definite as they are instantiated in use. Without the particularities of the visible environment, a distance grows between the categories and the phenomena they characterize and point to.
For example, in an interview, Lee explained that sighted people living in Seattle are familiar with downtown hotels. They expect to find automatic, sliding glass doors at the entrance. They anticipate the slightly squishy floor mat as they pass through the threshold. If they are holding a paper coffee cup, only a half-glance will be necessary to confirm the existence of a cylindrical silver trash can into which they can dispose of their cup. “It’s always the same!” Lee said.
However, she explained that DeafBlind people have, until recently, relied on sighted interpreters to navigate public spaces, preventing them from cultivating tactile sensibilities. As a result, Lee says, scenes like the following are likely to unfold:
A DeafBlind person walks into a [hotel], and runs into the garbage can turning the corner. They look shocked and tell the person they’re with that the placement of the trash can is not safe!
Outbursts like this strike others as unwarranted, since from a sighted perspective, the placement of the trash can is expectable. Lee pointed out that if the DeafBlind person were using a cane, and paying attention to their surroundings without passing through someone else’s visual perspective on it, they would notice regularities like this as well. It is not a matter of sensory capacity. It is a matter of orientation, the grasp that social actors have of being a body in space, and how their split-second evaluative responses to stimuli align (or not) with shared frames of social value. The further those responses drift from shared frames of social value, the more “odd” or “eccentric” the DeafBlind person appears.
I lived in Seattle and was involved in the DeafBlind community as an interpreter and in other capacities for 7 years before I went to graduate school. During that time, these events in which DeafBlind people responded to expectable stimuli in non-normative ways seemed quirky to me. However, as the pro-tactile movement took root, and discourses began to circulate, I began to see that they were symptoms of a serious and alarming problem: the visual habitus was degenerating.
I encountered this problem often in my interactions with DeafBlind people. For example, one day, I entered a coffee shop with a DeafBlind man. I told him there were several people in line ahead of us. He responded by repeatedly adjusting his footing, saying “Sorry. Sorry.” He clenched his fists and cringed, as if bracing for a collision. This kind of response to information was not uncommon. I would give a DeafBlind person a piece of information, and they would yell, “I’m sorry!” “I didn’t know!” or “I’m blind!”
When the habitus is intact, we respond to immediate triggers to act in expectable, appropriate, and otherwise normative ways. However, this process depends on access to the immediate environment and a process of socialization that helps us distinguish between relevant and irrelevant stimuli. DeafBlind people become jumpy and over-responsive because they receive triggers to act without the particularities in the environment needed to guide specific action. For example, if you are told that a sighted person is approaching and would like to start a conversation with you, you may feel the urge to turn your torso and face toward them, assume a particular posture, or express a particular emotion with your face. However, after many years of limited access to the bodies of others, you forget how to carry these actions out in ways that feel appropriate or natural. Over time, these failures accrue to the individual as the habitus degenerates.
A person without a habitus has no common sense. They run into ordinary objects and then act surprised that they are there. They stare past people, talk into walls, offer strange and unnatural smiles, and respond to routine questions by yelling, “I’m blind!” These events thrust DeafBlind people into devalued social positions. They come to be viewed as “developmentally delayed” or are talked about as “slow learners.” They become less appealing to be around, which leads to increased social isolation, and increased social isolation contributes to further degradation of the visual habitus. Over several decades, the DeafBlind person drifts away from any legible position in the social order.
Leaders of the pro-tactile movement saw these problems as rooted not in the failures of the individual, but in naturalized interactional structures. Their hypothesis was that DeafBlind people behave in non-normative ways because they don’t have enough direct, tactile access to their environment. Representations only make sense if they conjure experience, and too much reliance on interpreters had opened up a chasm between the two. In the terms employed here, they saw that habitus must articulate with field in order to be maintained, and rather than attempting to prop up the visual habitus, they opted to change the coordinates of the field.
The degree to which sighted people would be invited into this emergent social field had to do with assessments of their “attitude.” According to Adrijana, when she took over, DBSC was mostly staffed by people who privileged visual (and even auditory) communication practices, took them for granted, and were not particularly concerned with the exclusions those practices engendered. Although it wasn’t clear exactly what needed to be done, improving attitudes toward, and competence with, tactility and tactile communication practices was an intuitive first move. The sign glossed as “attitude” in this context diverges from the English meaning. It is treated as an almost inherent part of the person, and it has to do with the capacity to see things from a DeafBlind perspective. People are either capable of learning or they are not. There is no use trying to teach a person with a bad attitude to communicate, thinking they might one day contribute to the community in some way, because they probably won’t(2). People who have bad attitudes (or rather, bad-attitude people) can be surrounded by American Sign Language for 20 years and fail to learn it. They are inert at best and intentionally perpetuating power asymmetries at worst. Therefore, for Adrijana, solving the attitudinal problem, thereby enabling the emergence of a pro-tactile social field, meant replacing almost all of DBSC’s staff members(3).
For about two years, there was a lot of instability in the organization. I really wanted to have the right people in there doing a good job because DBSC is an organization that is there for DeafBlind people, and they had to feel comfortable coming in and getting what they needed.
However, it was not self-evident how to make DBSC a comfortable and appealing place for DeafBlind people. First, there was work to be done on the public image of DBSC as compared with other agencies and organizations in Seattle. Adrijana explained:
We compared ourselves to ADWAS [the Abused Deaf Women’s Advocacy Services]. They’re such a popular organization because they’re attractive to people. They have the auction. They’re an organization of Deaf women, and it is truly a Deaf environment. They don’t have phones, they have TTYs (or they did when they started up). Their board is required to know ASL, etc. The Lighthouse was attractive to people because of DeafBlind community class and Seabeck camp. But where did DBSC fit in? What was so great about DBSC? That was when the notion of pro-tactile came up. It started out really vague and narrow. It didn’t mean ‘tactility’. It meant ‘manual tactile reception’. The point was just to change people’s attitudes about tactile communication, as a modality, to say there’s nothing wrong with it.
ADWAS is known for being a very welcoming organization. Anyone who is willing to contribute to their mission of providing direct counseling and advocacy services to Deaf victims of sexual assault and domestic violence will be invited to participate in some aspect of the organization. However, if hearing people were to volunteer and/or work for ADWAS in an effort to contribute, but used spoken language to communicate, the services would no longer be direct and the mission would be undermined. Therefore, ADWAS has gone to great lengths to make Visual American Sign Language the primary language in which business is conducted. For example, as Adrijana mentions, there are no voice telephones in use. This means that there is no receptionist speaking English at the front desk, so when Deaf people enter the building, they are not immediately alienated.
At the same time, ADWAS actively encourages hearing people to participate as volunteers, staff members, donors, board members, etc. The only condition is that they adhere to Deaf norms of communication and interaction. ADWAS has been wildly successful and as Adrijana explained, this is not in spite of, but rather, because of the fact that they are a Deaf organization that serves Deaf people according to Deaf norms. Not unrelatedly, their fundraising events, such as their auction, have taken on a life of their own as vibrant sites of Deaf sociality in Seattle. Talk of a more inviting environment for DeafBlind people came about with a model like this in mind--but what would the DeafBlind version be?
4.2 “Everything We Touched Froze”
Adrijana called a meeting of staff members and some community members to talk about priorities for DBSC’s future. In this meeting, “pro-tactile” started out as a slogan that was used to sell DBSC, but at the same time, the more substantive idea of a “DeafBlind Friendly Zone” was raised. Adrijana explains:
We started using the words, but we didn’t really know what it meant. What does it mean to have a DeafBlind friendly zone? Well, tactile signing was important, and we just started thinking about things like that, which led to more and more discussion, and over time, it kept changing. For example, we started talking about why it was that if two people were talking to each other, and you walked up and put your hands on one of their hands, they would stop talking. Why not continue, so we can listen for a while? We wanted people to get rid of those habits that made it hard for DeafBlind people to move around a room, observing what was going on tactually.
Although it wasn’t clear yet what practices might be considered DeafBlind friendly, there were some things that clearly weren’t, such as this habit people had of pausing, or “freezing” when a DeafBlind person touched them. The freezing phenomenon had an eerie effect. Conference rooms, offices, and hallways seemed perpetually occupied by people who were suspended in mid-air. Adrijana said when she was with another person, for example, eating lunch and conversing, she would take a bite, and then feel the other person’s hand or arms to see if they were still eating or not. If they weren’t, she might say something to them. If they were, she might want to feel their hands take the food to their mouths, or maybe their jaw chewing, but every time she put her hands on theirs, they would pause, awkwardly, until she removed her hand. Or if people were standing around talking in the conference room before a meeting, she would approach them, put her hands on one of them, and hope that they would continue signing, so she could tell what they were talking about. Invariably, though, the conversation would stop. Either the people would stop moving, as if they didn’t know what to do, or they would ask her what she wanted. How was she to know what she wanted if she didn’t know what possibilities for wanting there were? How was she supposed to know what possibilities there were, if she couldn’t observe activity in her environment?
Usually, this kind of observation would be done with a visual interpreter, but interpreters were in short supply, and Adrijana often went without one. Furthermore, she didn’t think tactile observation was implausible in such situations, but in the larger community there weren’t any tactile frameworks for observation, so when it was done, it was confusing, irritating, or on occasion, even interpreted as inappropriately sexual. However, for Adrijana and several of her friends and colleagues, there was a disconnect.
In 2006, I conducted two months of fieldwork, and during that time, I lived with Adrijana and her Deaf, sighted husband. They and several of their friends (both sighted and DeafBlind) had intuitively started developing tactile frameworks for observation. In 2008 I also lived with Adrijana and her husband, as well as working at DBSC, and was integrated into a group of friends and colleagues who continued to develop tactile communication practices. Those of us who were routinely exposed to these practices no longer froze on contact, and without necessarily noticing, our boundaries around touch had been revised.
For example, when Adrijana and I would go out together, she would often start the encounter by touching my feet, feeling the type and texture of shoes I was wearing. She would feel for the style of pants at the ankle and then trace the fabric up the shin to the knee. From there, she would skip to the belt and feel for the thickness and the texture, pausing for a moment at the belt buckle--Small and discreet? Thick and clanking? Then she would move to the neckline of the shirt and do a quick scan of the sleeves before feeling the style and state of the hair--Still wet? Ponytail? Clean? Dirty? Straightened? Curly? All the while, she would be pulling in gulps of air through her nose, clearly gathering olfactory details as well. Finally, I would add any information that she wasn’t likely to discover--for example, if we were wearing the same color, I might mention that.
We usually disagreed about something. Adrijana thought our shoes were the same, and I didn’t. Or she would (in good humor) accuse me of stealing her style, and I would try to defend myself. These arguments often ended with her telling me to feel rather than look at the item under dispute and once I had done that, I would often concede. Visually there were differences, but from a tactile perspective, the similarities stood out instead.
Although we were close friends and roommates, this kind of thing felt no more intimate than a friend commenting on your clothes when they see you: I like your shirt. Or: Look! We’re matching! Outside of our small group of friends, however, it was clearly counter to the norm. In the broader community, people were still suspended in mid-air and lacking particularity. Attempts to fill in the details were continually thwarted. When Adrijana became the director of DBSC, the staff there was no exception:
Everyone was like that. Especially Deaf employees. If you came up and put your hands on them, they would either freeze or say ‘Hold on, I’m talking to someone.’ Or, ‘I’ll be done in a sec.’
In the past, Adrijana couldn’t always prevent this sort of response, but now that she was the director, changes like this were within the scope of her job responsibilities. It wasn’t just for her. It was part of making DBSC a DeafBlind friendly zone. Adrijana said that she reminded sighted staff members continually, and eventually, they continued signing or going about their work when she put her hands on them.
4.3 DeafBlind to DeafBlind Communication
The new staff included three tactile DeafBlind people: Adrijana, Jodi, and Lee. There were no communication conventions in place for three-way tactile communication. If there were more than two DeafBlind people present, interpreters would be hired to mediate. In an interview in 2010, Adrijana explained:
If Jodi and I were talking and Lee wanted to join, we had to figure that out. It wasn’t obvious to us at first, but we tried to follow our intuitions and find a way to communicate between the three of us. [...] We weren’t really reflective about it. We just kind of did what worked, which was signing with two [dominant] hands. Then when sighted people would join us, they would look confused--like how am I supposed to communicate with both of you at once? And we would tell them to sign with two [dominant] hands. We didn’t do that if we had to have a meeting for an hour. We did that for short meetings--10 minutes here, 10 minutes there. I didn’t want to explain things to one staff person, and then repeat myself with the second person. That would eat up too much time. So it was a good way of efficiently conveying a short message.
These practices quickly became naturalized among the staff at DBSC. So much so, that they were surprised when others found them novel.
It became so normal for me in such a short period of time that I didn’t think about it. But when people saw it, they would respond--like ‘Wow! That’s so cool!’ And I remember saying, ‘Well, they do that at the Lighthouse, too,’ and being told that they didn’t do anything like that there. That was a big insight for me [...]. I didn’t even realize that that was the case until about a year later. I didn’t come to the realization that there was a discrepancy in how communication was happening inside DBSC and outside(4). It had all happened so naturally that we didn’t think about each little thing we did. No one really talked about it much. It was just an ongoing negotiation and people were expected to do what it took to make themselves understood and understand other people.
From 2006 to 2007, communication within DBSC was already moving away from reliance on interpreters, and toward direct communication between DeafBlind people. Conventions for communicating with sighted people that included more tactile practices were also developing. This shift eased financial and scheduling strains. DBSC had very limited funds and interpreters are expensive. It also takes time to schedule interpreters, and in order to get the ones you want, they must be booked far in advance; these problems intensified as interpreter shortages became more severe (see section 3.2.1 on page 78 for more on this).
As Adrijana explained above, there were often situations where an impromptu meeting was needed that required the presence of more than one DeafBlind staff member and using interpreters was not feasible for that reason. In addition, Adrijana noted that people didn’t want to include DeafBlind people in their organizations or events because paying for interpreters for them was so expensive. Therefore, she said, “changing our communication practices could help solve that problem in addition to the day-to-day logistical problem of wanting to have short, spontaneous meetings.”
The process was kick-started because as soon as internal dynamics started changing for the better, there began to be friction with people from outside the organization who came to DBSC regularly and hadn’t been privy to the changes. That friction, Adrijana said, “made [the staff] more insistent and gave [them] the inspiration to get serious about establishing a DeafBlind friendly zone.” A certain repertoire of DeafBlind friendly communicative practices had become naturalized within DBSC, and their naturalization made it difficult to describe them explicitly. As Adrijana says below, even if outsiders wanted to learn (which was not often the case in the beginning), naturalization was a barrier to teaching them.
At first, I thought that communicating in a DeafBlind friendly way was commonsensical, or at least easy to learn. But I realized that people don’t like change. These were all big insights for me and I realized that I had to be more patient, take things in baby steps, approach people more gently. We had to ask people nicely. We didn’t want to post big threatening signs [...], so I decided we would just have to go with the flow more, and be patient about change. That process took about two years--from 2006 to 2008.
By the end of 2008, the internal dynamics of DBSC were greatly improved and efforts turned to increasing the relevance and quality of services. DBSC contracts with state agencies, such as the Department of Services for the Blind to provide specialized, direct services to DeafBlind people. Therefore, what counts as a legitimate service is shaped as much by the structures and categories of the state agencies as it is by the needs and desires of the community. Adrijana had to find ways of addressing the discrepancies.
We noticed, as staff at DBSC, that [...] senior citizens [were] coming in droves to discuss problems they were having. When we looked at what was going on, there usually wasn’t a problem. It seemed like they were home alone, socially isolated, going crazy, and had to invent a reason to come in and talk to someone. And then they would have to get caught up in some kind of imaginary problem as their only form of socializing. The advocate would get overwhelmed with all of this work that wasn’t really legitimate. [They needed to] have some kind of positive interaction. The goal was to relieve some of the problems that seemed to come from being isolated--paranoia, stress, etc.--and it worked.
Given the fact that severe social isolation was a real problem for older DeafBlind people, one might expect that they would have gotten together more often on their own. There were two main reasons they didn’t. First, even if they had, they wouldn’t have been able to communicate with one another in groups, since no conventions had been established for this. Second, there was what Adrijana called a leadership problem:
A lot of people were retiring, so what were they going to do? [...] That problem became a first priority. [...] We asked the senior citizens to bring their own SSPs rather than DBSC being responsible for coordinating SSPs, and each month they would be responsible for planning an event themselves. We called that “leadership,” and we expected it to go alright. But then we found out that they weren’t doing anything. They weren’t finding their own SSPs, they weren’t planning their own events. It was really surprising. They had just gotten so used to someone else doing everything for them. They’ll find me an SSP, they’ll plan the events, and so on. Conversations often went like this:
DeafBlind senior citizen: I need a ride.
DBSC staff person: You find your own ride! Use the bus! Or call a cab!
And then nothing happened.
So that was an indication of what had been going on all this time--people had become [ ... ] complacent and unable to do things for themselves, or at least not used to doing things for themselves. So I got really frustrated, and they got irritated, being asked to do things they didn’t want to do and weren’t accustomed to doing. So my great idea didn’t work, because people didn’t just snap into the role that I had in mind. I had to try to do what they expected, rather than trying to make them the kind of DeafBlind people I thought they should be. So I hired a coordinator for the DeafBlind Senior Citizen program. The goal then, was for that person to figure out how to work with DeafBlind people to build leadership potential without making the mistakes I had made, moving too fast and expecting things to change too quickly.
Essentially, Adrijana was asking people who had spent many years in the role of “the served” to step into the role of the service provider. Theresa Smith, a long-time ethnographer in the Seattle DeafBlind community, writes about the problems this division between those who provide and those who receive services has caused:
Agencies naturally take their direction from the people who establish, fund and run them. Agencies serving DeafBlind people are typically funded and run by people outside the community. [...] [Therefore] the people in positions of power and authority come from a different world than the people for whom the agency is established. This is a problem. Hearing/Sighted administrators and staff do not share the life experience (deafness, blindness) or socio-economic class (income and life style) of their clients. They do not even share a primary language and culture. Few professionals on staff and fewer administrators have native-like fluency in American Sign Language and Deaf culture [...] This creates an almost insurmountable gap in world view and in access to power. This difference in power has been institutionalized. [...] We want to move beyond the limits of the present to a future in which DeafBlind people have not only power but authority and control within these agencies established in their name.
Although there is a great deal of variation among DeafBlind people in terms of socio-economic class, life experience, access to education, etc., the roles of those providing and receiving services have historically been opposed and mutually exclusive. Therefore, if someone was receiving services, they were, by definition, not making decisions about how those services were administered(5). This led to problems like those that the senior citizens were experiencing. There was no agency contracting with DBSC to pay for social events as a way of alleviating social isolation. DeafBlind people knew this, so they had to make their attempts at socializing into a problem suitable for the services that were provided. One of the unfortunate side effects was that DeafBlind senior citizens were shaped by the negative and irrelevant role they were often left playing. They had to put on a performance of distress sufficient to justify a meeting with the advocate. Although they were experiencing distress, the nature and cause of the distress had to be disguised in order to alleviate it.
For DBSC’s staff, redirecting some funds and organizing social events was much preferable to sifting through the details of intentionally confusing stories, and to being overwhelmed by the number of clients who came in to tell them. Furthermore, Adrijana thought DeafBlind people shouldn’t have to be in crisis in order to have human contact. The order of operations should be just the opposite: they should have human contact in order to avoid crisis.
Therefore, she decided to use part of the advocacy budget to pay for minimal support to a DeafBlind Senior Citizen’s program. However, one meeting of the group required many volunteer interpreters (about two per participant). So soon after its inception, interpreters became a problem. Louise, the first volunteer coordinator of the DeafBlind seniors program explained in an interview that the program had to be temporarily suspended.
Now we have a new director at DBSC, Adrijana, who asked me to work with the senior citizen’s program, trying to get it back on its feet, which I agreed to do. I have found volunteer SSPs who are ASL students. The students who have been helping have been absolutely wonderful. Right now we have 10 senior citizens in the program who are very happy to have the program back. But it is uncertain what will happen in the fall because many of our volunteers have to go to school. Some will find jobs. We need funding to pay for SSPs and interpreters. We want to get out of the house and learn more about the world. Many of us stay home for long periods of time, and are very lonely. Just yesterday I got a call from one senior citizen, who was crying because she was so lonely. She just wanted to get out of her house, but there were no SSPs available. It’s really bad.
The shortage of interpreters was the problem on the surface of things, but if interpreters weren’t used, there would no longer be a problem. This, however, would require a major transition where DeafBlind people learned to communicate directly with one another. If this could be accomplished, social isolation could be addressed without appealing to sighted people for support, and further taxing the already depleted interpreting resources.
4.4 A Vision for a Pro-Tactile Future
Once Adrijana took up her post as director of DBSC and replaced much of the old staff, she and her new staff found that many of the problems they hoped to address, when thought through, could be traced to the absence of a tactile field of engagement. Although they didn’t know how they would bring such a thing into existence, they thought that direct communication between DeafBlind people was a good place to start. However, many DeafBlind people didn’t possess the technical skill of tactile reception, so they wanted to find a way to make learning tactile reception appealing. They thought it was strange that in the past, sighted people had often been the ones to teach tactile skills to DeafBlind people, even though they didn’t use tactile reception to communicate. They thought that DeafBlind people should be the ones to teach it--not only because it was more practical, as they were the ones who really knew how it worked, but also because DeafBlind people should be able to turn this practical knowledge into expertise as such, which they cannot do without opportunities to teach. All of this went into the planning of a series of classes, which would be offered by DBSC to DeafBlind people, and which would be taught by DeafBlind people without the use of interpreters. The problem was that if they advertised the class as having anything to do with going tactile, no one would sign up--and especially not the ones who, in Adrijana and Lee’s view, really needed to sign up. Adrijana explained that “we knew the word ‘tactile’ would turn them off, so we changed it to ‘DeafBlind to DeafBlind class.’ That piqued people’s curiosity, because they didn’t already know what it was.” Most of the classes did not thematize tactility. They were about finance, cooking, wood-working, and other topics. The instructors, though, were all DeafBlind, as were the students, and no interpreters were provided. Tunnel vision and tactile people were thrown together and expected to communicate directly with one another.
People who had not yet gone tactile were encouraged to wear blindfolds, but not required to do so. Lee taught the classes, and one of her main strategies was to have discussion groups. She organized people into pairs sitting opposite one another, and then gave them a question to discuss. After 5-7 minutes, she had them rotate so that every person in the room discussed the question with every other person in the room. It seemed time-consuming, but she naturalized the process for the participants by saying "this is our culture" and "this is how we do things." This way of doing things had benefits, which she didn't state explicitly in the classes, but which shaped her approach.
It meant that there was more equality in access to information. When a group of sighted people are in a room together, they can all be looking at one another. Everyone knows what everyone thinks, what everyone feels, and what everyone says [ . . . ]. It doesn’t work to get everything through one person [an interpreter]. Then you’re totally disconnected from your environment and the people in it. I was interested in finding a way to make group engagement possible--such that you would feel actually connected to the people you were with and the place you were in.
At the time, the classes didn’t feel like an extraordinary success. People were resistant to the idea of having events without interpreters present. In an interview, Adrijana and I discussed reasons for this:
Adrijana: People already have their ways of doing things. Senior Citizens love to go to the monthly meetings [at DBSC] in order to talk to their SSPs! They love it because they get information from them. They don’t see DeafBlind people as a source of information since they’re behind on news all the time anyway.
Terra: But do you think that's true that DeafBlind people don't have any information to share?
Adrijana: I think DeafBlind people have a disconnect between information that they have and ways of expressing it. I think when SSPs share information, it gets their minds working again--connections start happening, and then they can share with other DeafBlind people. It’s like their brains come alive again, but they need a kick start.
Adrijana was talking specifically about senior citizens here. Most members of this group are fully or almost fully blind and, as was described previously, are suffering from some degree or another of social isolation. Social isolation is self-perpetuating. When you don't talk to people, you don't have anything to say.
For blind DeafBlind people, the situation is worse still; whatever information they generate in their daily lives is generated via primarily tactile means. However, there is no system of representation available to them for expressing knowledge produced tactually. Visual ASL does not always lend itself to the tactile dimensions of objects, encounters, and people. There is, in Adrijana's terms, a disconnect between information that they have and ways of expressing it. This disconnect leads to a "liveliness" deficit, which makes social exchange difficult. Two people who both have a deficit of liveliness cannot help one another. It takes a person tapped into something--anything--to kick-start their brains and come alive again. Giving that up would have dire consequences. For fear of such a situation, several people dropped out of the DeafBlind to DeafBlind classes once they realized that no SSPs would be provided.
Among the people who did stay, there were further problems. One of the classes involved going to a coffee shop and using tactile communication in public. While many of the participants were willing to communicate tactually in a private class, they were unwilling to do it in public. Several people dropped the class at this point. Then there was the question of safety. Adrijana and Lee didn't have a set of practices that they were teaching people for direct, tactile communication. It was more experimental than that. They wanted to see what would happen if they threw everyone together and didn't invite any sighted people. This was OK for the first several classes, which were taught by tunnel vision people about topics that did not require hands-on activities (e.g. "finance"). But eventually, there was a class taught by Robert, a tactile person.
Adrijana said that "Everyone assumed, since he was a blind DeafBlind person, that he would be with an SSP. But just like all the other classes, no one had an SSP. Several students dropped the class when they found that out. Robert felt demoralized." I asked Adrijana if people gave a reason when they dropped the class and she said they had: "There are no SSPs and Robert is blind." It turned out that, when pressed further, they didn't feel safe. Robert was teaching wood-working, and he was using a large electric saw and a drill. Adrijana explains:
Before Robert even plugged in the machine, they were scared to death. Robert just wanted to show them the machine and they freaked out. They thought there would be SSPs there, and they would have more of an observational role, but that isn’t what we had in mind.
I asked Adrijana if she thought their fears were warranted, and she said that at first she didn’t think so. But then a while later, she was helping make a bunch of cloth napkins for a DeafBlind event with friends--both DeafBlind and sighted--all of whom had significantly more vision than she did. She fearlessly ventured forth with the sewing machine and ended up putting the needle through her index finger. “I laughed,” she said, “but it hurt like hell.” After that, she changed her perspective on the issue.
Part of the problem was that people didn't trust their tactile experiences, and they didn't trust that others would be able to reliably explain to them how to use this dangerous machine. They were right. Not only were their sensory orientations always shifting, but there was also a definite disconnect between tactile experience and Visual ASL. In addition, there was a great deal of variation among the group in terms of sensory orientation, and there were no conventionalized practices that equalized these differences. All of this made learning how to use new, potentially dangerous equipment without the use of interpreters a bad idea.
In addition to the safety issue, the fact that group communication among DeafBlind people was not yet conventionalized meant that every little thing took effort. For three DeafBlind people to communicate with one another, one person has to know how to sign with two dominant hands and receive with one hand (not two). It can be annoying and/or frustrating to focus on such tasks while also trying to express a thought or learn something, and many people felt that it was too much to ask. Two more DeafBlind people dropped out of the classes for these reasons.
I asked Adrijana if she thought that there had been an effect on language and communication practices, despite the initial lack of enthusiasm about the classes. She said, “What I think has been happening is that there is more overlap. Before there was a crystal clear separation between [tunnel vision people] and [tactile people]. Now they are mixing a little.” She went on to explain that homogenization of communication practices seemed like a big challenge.
There's so much variation. Now we're just trying to slowly close the gap between the two sides. That will help people to transition to our side--the tactile side--and it will keep people from being able to reject us. They can't do that any more. So my experience of the changes since 2007 really includes this narrowing of the gap and a recognition of the importance of it [ . . . ]. All this time I thought that it really hadn't gotten any better and that was that. But deep down, I knew we had gotten off to a great start. It's just that I had no idea how it would grow or if it would. That's why I say it's all very new, and things are changing very slowly. As far as how it will all end up, I think we have to wait five years or something to find out.
As far as changes in actual communication practices, Adrijana wasn’t sure. She said that she knew that some things were new--like describing relative spatial relations by pointing to locations on the palm of the addressee rather than in space--but, she said, “In DeafBlind to DeafBlind class we never talked about it. We just did what we did. I don’t even know what we did [ . . . ]. Really, you’re asking me if things have changed and I don’t really know.” She said she thought things had changed, but it wasn’t clear when certain practices had come into use and how widely. She was certain that they didn’t teach any new communication practices in these first classes. People “just started picking things up from other people and incorporating what [they] liked. And then some of it stuck and was history.”
As I started my dissertation research, Adrijana and Lee were looking for another opportunity to teach classes like the ones they had taught before, but funding had been scarce, and they had been busy with other projects. I was looking for ways to systematically observe the changes in communication and language that had been occurring. I contributed part of my dissertation funding for a second round of classes. We started having planning meetings in the Fall of 2010, and the classes began in January of 2011.
Adrijana and Lee prepared the content of the courses and selected and recruited participants. I helped coordinate logistics and took care of tasks specific to research, such as organizing the collection of video data and obtaining consent from participants. Two other sighted people and I video-recorded the classes, but did not otherwise take part in them. There were two groups, Group A and Group B, each composed of five or six students and two teachers. Ten two-hour classes were offered to Group A over the course of five weeks, and ten two-hour classes were offered to Group B, also over the course of five weeks. In chapters 5-7, I show how the pro-tactile movement effected changes in sensory orientation and structures of interaction, and how, in turn, these changes began to influence the internal organization of the linguistic system.
Chapter 5
The Deictic Field prior to the Pro-Tactile Movement
In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, these Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the East, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars ...
--Borges, “On Exactitude in Science” in Collected Fictions
Prior to the pro-tactile movement, DeafBlind people relied on sighted interpreters to orient to the immediate environment. Using Visual American Sign Language (VASL), interpreters produced map-like instructions for interaction and exchange. However, as vision deteriorated and visual memories faded, the maps no longer corresponded reliably to any external reality, and deictic reference was strained. In this chapter, I argue that the problem stemmed from a deterioration of the deictic field to which deictic signs articulate and from which they derive meaning and efficacy. In addition, the perceptible ground of the deictic signs themselves became inaccessible.
Initially, DeafBlind leaders tried to address these problems by teaching interpreters to make maps that were more detailed, more life-like, and more compelling. At this point, the “interpretation” was no longer meant to provide orientation to the environment. Instead, it was meant to contain the environment. Ultimately, it became clear, as it did to Borges’ cartographers, that the closer the maps grew to the territory they were charting, the more useless they became.
Examining this tension between access and representation among DeafBlind people and sighted interpreters highlights two things. First, the deictic system, which is crucial for any map-like orientation scheme, must remain distinct from the deictic field to which it articulates. The former inheres in the linguistic system, while the latter is an integral part of the world (Bühler 2001 [1934], Hanks 1990). Second, it highlights the mutual dependence of these constructs in accounting not only for acts of deictic reference, but also for the role these acts play in maintaining the structure and utility of the linguistic system over time.
When relations between the deictic system and the deictic field broke down among DeafBlind people and a new deictic field began to emerge, the system did not merely re-articulate to the new field with no consequences for its internal organization. Rather, each was re-calibrated to the other by TASL signers in interaction, and the linguistic system was altered. This chapter focuses on the disarticulation of the deictic system from the deictic field that was in place prior to the pro-tactile movement. This process is the first moment in the larger reconfiguration of deictic relations.
This chapter contributes to my overarching argument in this dissertation--that languages do not emerge by abstracting away from the contexts of their use, but rather, by being integrated with those contexts in tighter and more restricted ways. In sections 5.1 and 5.2, I introduce the notion of the deictic field, drawing on Bühler (2001 [1934]), Hanks (1990, 2005, 2009), Schutz (1970), and Goffman (1964, 1981). In section 5.3, I show how interpreters were used to generate visual coordinates for orientation schemes, and how this strategy inadvertently prevented DeafBlind people from shifting toward tactility at an earlier point in the history of their community. I conclude that these practices led, over time, to a deterioration of the relations between the deictic system of VASL and its deictic field in the Seattle DeafBlind community.
5.1 The Signpost
In The Deictic Field of Language and Deictic Words, Karl Bühler identifies a subset of pointing gestures that function like "signposts." He writes that
where the pathway branches, or in countryside lacking pathways, an ‘arm’ or ‘arrow’ is erected so that it can be seen from far off; an arm or arrow that normally bears a place-name. If all goes well it does good service to the traveller; and the first requirement is that it must be correctly positioned in its deictic field (2001 [1934]:93).
Like a signpost, deictic words such as here and there are combined with pointing gestures to create a perceptually salient sign that directs its recipient. For example, when a human "opens his mouth and begins to speak deictically, he says ... there! is where the station must be, and assumes temporarily the posture of a signpost" (ibid.:145).
The meaning of the deictic expression is not difficult to sort out because speakers and signposts “can do nothing other than take advantage--naturally to a greater or lesser extent--of the possibilities the deictic field offers them; moreover, they can do nothing that one who knows the deictic field could not predict, or, when it turns up, classify” (ibid.). In other words, possibilities for pointing are not infinite. The signpost merely clarifies potential ambiguities between, for instance, branches in a pathway, landmarks in a landscape, or one of a limited set of cardinal directions. A deictic sign is a signal to choose one path over another; it does not launch a trajectory into unstructured space.
Within a field of limited choices the deictic sign, like the signpost, does two things: it names and it points. Its symbolic meaning derives from oppositions in the language (here is not there). Its indexical function derives from oppositions in the "pathway," or rather, the speech situation, where it is inserted. Deictic words are, therefore, part of language and language must be composed not only of symbols, but also of signals. When linguistic signs, both deictic signs and naming signs, are applied in the speech situation, they receive field values (Bühler 2001 [1934]:99). The most fundamental difference between the two hinges on where each sign-type receives those values. A deictic sign's meaning is "fulfilled" and "made definite" in the deictic field, whereas a naming sign's meaning is fulfilled and made definite in the symbolic field.
The idea that the meanings of signs are elaborated, added to, or in some way changed when they are instantiated is, according to Bühler, not controversial. What remains unclear is how far-reaching the consequences of this fact are for the rest of the linguistic system. In what ways is the linguistic system changed by the field values it accrues? Building on Brugmann (1904), Bühler pursues this line of inquiry by considering the role that gestures and other sense data take in complementing and otherwise mediating the meanings of utterances, thereby linking them to the speech situation.
According to Brugmann, gestures are coordinated with the utterance in and through a "perceptual image," or Anschauungsbild (2001 [1934]:147). Bühler names several variously foregrounded, or activated, coordinate systems that can contribute to the perceptual image: the coordinate system anchored by the head (as "a kind of globe"), or head coordinates; the coordinate system anchored by the zero-point of the chest, or chest coordinates; and the coordinate system anchored by the eyes, or visual coordinates, among others (ibid.). These systems converge on and "wander" within the "tactile body image," yielding a synthetic sense of being in a place.
The perceptual image is relevant to language in the sense that it contributes to the I, here, now from which deictic reference is computed. However, Bühler goes further, asking how far the "'perceptual image' and its use for the representative purpose of language extend[s] into the entire structure of language" (2001 [1934]:147). Like Bühler, I am concerned not only with how changes in sensory perception affect the ability of DeafBlind people to resolve deictic reference, but also with what consequences these changes have for the structure of the language more generally.
5.2 Beyond the Signpost
Signposts and acts of reference differ in many respects. Broadly speaking, "the concrete speech event differs from the wooden arm standing there motionless in one important point: it is an event. Moreover, it is a complex human act" (Bühler 2001 [1934]:93). This difference opens onto many more. First, while both people and signposts occupy physical positions in space, humans also occupy roles in a way that signposts do not (ibid.). A human pointer is a speaker, and the person they are communicating with is an addressee. The words I and you vary only according to which of these roles is being occupied, not according to which person is occupying the role (ibid.:94).
For pronominal systems like this to work, there must also be conventional configurations of roles, and conventional ways of moving between them. These patterns settle out of the situated encounter (Goffman 1964, 1981) via habituation and routinization (Hanks 2005b:193). This introduces another layer of structure, which does not inhere in the deictic system of the language, but fits with it, or as Bu¨hler says, “fulfills” it.
Second, deictic words direct and modulate attention in a way that signposts do not. The acoustic or gestural qualities of deictics are calibrated to these efforts. For example, when here is uttered twice, the second time more loudly than the first, its auditory qualities trigger both heightened and directed attention in the recipient. When I say here, you become receptive to the environment, scanning, before you analyze it, locating here, for example, in relation to there. An augmentation or change in receptivity occurs prior to identification. Deictic signs, in this sense, are "reception signals." They are an inverted version of "action signals" like imperatives. Words like I and this "cause the gaze to turn (or something of the sort) and the result is a reception. The imperative come, in contrast, has the job of bringing about a certain action on the part of the hearer" (Bühler 2001 [1934]:122).
Third, unlike signposts, humans have sensory systems that come with certain limitations and affordances. According to Bühler, when the speaker speaks, the auditory signal gives off clues about the speaker himself as well as his location. These perceptual clues are put together with the visible location of the signal's source and other sense data contributing to the speaker's localization. These aspects of speech production work in tandem to join the person to the role they are inhabiting, i.e. speaker (Bühler 2001 [1934]:151).
Finally, humans differ from signposts in that they can remember, imagine, synthesize, and categorize (Bühler 2001 [1934]:137-154, 203-215). This makes it possible for human communicators to establish a perspective, and furthermore, to establish a "reciprocity of perspectives" with their fellow communicators (Schutz 1970:183). Participants take for granted a certain degree of similarity between their perspective and that of their interlocutor. At the perceptual level, this includes assumptions about the mutual accessibility of the immediate environment, including people, signs, objects, events, and so-on. When I say, "here," pointing to an object, I take for granted that you can see what I am pointing at, more or less as I see it. In other words, "I take it for granted--and assume my fellow man does the same--that if I change places with him so that his 'here' becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa)" (ibid.).
Prior to the pro-tactile movement, perspectives were not reciprocal among DeafBlind people in Seattle. Despite the fact that the members of the community were all more or less blind, visual capacities and orientation schemes were taken for granted. It was as if everyone could see, could access visual memories, and could respond to stimuli as sighted people do. From there, accommodations were made on a case-by-case basis for individuals. Strange things transpired as a result. For example, eye-contact was still treated as a way of marking an interlocutor as an addressee, despite the fact that DeafBlind signers often had to be told where the addressee was before they could fix their gaze. Pointing gestures were still used, despite the fact that very few DeafBlind people could link such gestures to a referent. These practices led to greater dependence on sighted interpreters. If sighted people were not present and available to mediate, deictic reference could not be resolved.
This case makes clear that at some level, perspectives must be reciprocal for deictic reference to work. Perceptual access is, however, only one small part of what constitutes a perspective, and therefore must be considered within a broader analytic context. Objects of reference are individuated against an indexical ground, or an "origo" (see Figure 5.1, taken from Hanks 2009:12). The origo
may be the [speaker], the [addressee], the relation between them, or some other aspect of context, depending upon the case . . . The relation between origo and object may be spatial, distinguishing for instance relative proximity, inclusion or orientation. But space is just one sphere of context. Other spheres attested in deictic systems include time, perception (Tactual, Visual, Auditory), memory versus anticipation, and what we might call the force of the deictic (Presentative, Directive, Demonstrative, Referential, non-Referential). [ . . . ]. In addition to these functions, any one of which may be conventionalized, deictics in use pick up lots of other pragmatic baggage. They tend to be very sensitive to whether the referent is an object of mutual knowledge or not, or whether one or another participant has special claim over the object (by authority, ownership, habitual familiarity) (Hanks 2009:12).
Figure 5.1: The Structure of the Deictic Field
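For readers who find it useful, the structure just quoted can be restated schematically: an act of deictic reference pairs an origo with an object of reference through a typed relation. The sketch below, in Python, is my own illustrative encoding, not Hanks'; all field names and category labels are hypothetical restatements of the dimensions in the passage above.

    # A hypothetical encoding of the structure described above: an act of
    # deictic reference individuates an object against an indexical ground
    # (the origo) via a typed relation. The category labels restate Hanks
    # (2009:12); the encoding itself is illustrative only.
    from dataclasses import dataclass

    @dataclass
    class DeicticAct:
        origo: str              # e.g. "speaker", "addressee", or another aspect of context
        referent: str           # the object individuated against the origo
        sphere: str             # "spatial", "temporal", "perceptual", "memorial", ...
        force: str              # "presentative", "directive", "demonstrative", ...
        mutual_knowledge: bool  # whether the referent is an object of mutual knowledge

    # "There is the door": a spatial relation to the speaker, directive force,
    # presupposing mutual perceptual access between the participants.
    example = DeicticAct(origo="speaker", referent="door", sphere="spatial",
                         force="directive", mutual_knowledge=True)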
Among DeafBlind people, sensory capacities and orientations shift idiosyncratically. Everyone loses vision at different rates and in different ways. Prior to the pro-tactile movement, this splintering of perspectives was addressed by compensating and accommodating as needed. As a result, the indexical ground of deictic reference began to erode--at first along perceptual lines, and then in a broader sense as common knowledge became more difficult to generate and maintain. In this chapter, I examine the role of sighted interpreters in addressing this problem and the reasons that alternate strategies were eventually employed.
5.3 Displacement in Interpreted Interactions
Prior to the pro-tactile movement, interpreters described environments in the same way that a person would describe an environment to a non-present person, for example, a person on the phone. This approach was effective insofar as the environment could be reconstructed via memory or imagination. According to Bühler, memory and imagination work together like a "recording device ... that gives the organism ... a sort of orientation table for its practical behavior" (ibid.:145). In this view, I, here, now is located in relation to past and anticipated experience, all situated in overlapping coordinate sets produced by sensory systems (visual, tactile, vestibular, etc.). Relations between coordinate sets accumulate, extending out around the present moment like roads or pathways, which structure movement through, and orientation to, space.
For example, in Figure 5.2, you see a schematic image of a sighted person orienting to a door. The projected line of travel follows from a visual orientation scheme. After DeafBlind people lose their sight, they continue orienting to objects in their environment in this way, despite the fact that their visual system does not generate the necessary clues. This is an effect of habituation, as well as of dynamics and constraints in the social field (see chapter 3). In order to adjust these habituated patterns so that orientation is organized around perceptible clues, DeafBlind people can receive "Orientation and Mobility" training, or "O&M."
A person who has adjusted their orientation scheme in this way will orient to objects differently, as in Figure 5.3. Given this orientation, the pathways that extend out around the traveler will snap to a different grid. For a person attuned to tactile relations, a diagonal path through a room, like the one in Figure 5.2, is entirely unstructured, providing no clues as to where the door might be located. Therefore, an alternate route must be taken. Using a cane, the traveler must identify some kind of orienting line, otherwise known as a "shoreline." For example, the line where the wall meets the floor is a shoreline. If a tactile person follows this smooth orienting line with their cane, they can be confident that it will eventually be disrupted by door frames and other protrusions. Over time, intuitions grow stronger about how and where lines of travel intersect and where various protrusions are likely to be. Potential trajectories extend out around the DeafBlind traveler. Overlapping coordinate systems anchored by sensory systems converge on, and are elaborated by, this grid.
When visual orientation schemes deteriorate, it becomes more difficult for DeafBlind people to navigate independently. Prior to the pro-tactile movement, this problem was not addressed by cultivating tactile sensibilities or attending more closely to tactile cues. Instead, sighted people were increasingly relied on as interpreters and guides. The goal, in relying on interpreters, was to trade in dependence at the sensory level for autonomy at higher levels of processing--for example, decision-making. The interpreter guides the DeafBlind person to the rack of shirts, tells them what colors there are, describes the styles, and the DeafBlind person decides which one they want. In 2006 and 2008, I recorded dyads composed of one DeafBlind person and one sighted interpreter running errands like this.1 I found that most interpreters did not paint vivid scenes of the environment. Rather, they used the few words that were necessary to guide the DeafBlind person through familiar scenarios.
Figure 5.2: visual path
Figure 5.3: tactile path
When a DeafBlind person enters their bank, for example, where they plan to deposit their paycheck, they need to know where the end of the line is. This goes without saying, and upon entering, the interpreter guides the DeafBlind person to the end of the line. Once they are in line, the DeafBlind person needs to know how many people are in front of them and how quickly the line is moving so they know how to stand, whether to strike up a conversation with their interpreter or not, etc. Once they have reached the front of the line, they will need to know when one of the tellers motions to them to come to the window. The details of the gesture are unimportant as are the physical and personal characteristics of the teller. There is no clue that means stay, so any deviation from silence will mean proceed to the window. Once at the window, the DeafBlind person will need to know when the teller is ready to receive the check so they can coordinate their actions with the teller’s. At each turn, the visual interpreter must focus on visual cues in the environment that will help the DeafBlind person execute their check-depositing plan.
The visual information that is relayed to the DeafBlind person is a tiny fraction of what the interpreter sees. These bits of information are sufficient because the bank is not experienced by the DeafBlind person as vague gradations of color or disorganized centers of warmth and cold. He has been to banks before, and to his bank in particular. Those prior visits have led to a set of expectations about banks. In familiar places like this, action can take on a binary character:
Is there a line? Yes or no.
If yes, find appropriate place in line.
If no, proceed to neutral location near tellers.
Has the teller signaled? Yes or no.
If yes, follow interpreter to teller.
If no, remain in current location.
Communicative signals like the one produced by the teller are interpreted as instructions to act in very specific ways. They are interpretable because they are embedded in an orienting scheme, which has been “recorded” over time and, crucially, because DeafBlind people are habituated to the environment. In these contexts, interpreters tend to say things like: “Your turn,” “Go ahead,” “pull [the door handle],” “Prescription number please,” and so on. This information allows the DeafBlind person to choose between a very narrow range of possibilities--push or pull, move forward or stay, etc. The automaticity observed in these cases is a result of many years of bank visits sedimenting into a field of limited choices.
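The binary character of this field of limited choices can be made explicit with a short sketch. The rendering below is my own illustrative formalization, not anything drawn from the interpreting literature; the cue and action names are hypothetical stand-ins for the bank scenario described above.

    # A minimal sketch of the "field of limited choices" in the banking
    # example. Each interpreted cue selects one branch from a small,
    # pre-structured set; no action outside these branches is available.

    def deposit_check(line_present, teller_signaled):
        """Return the actions licensed by two binary cues from the interpreter."""
        actions = []
        if line_present:
            actions.append("find appropriate place in line")
        else:
            actions.append("proceed to neutral location near tellers")
        # There is no cue that means "stay"; any signal from the teller means proceed.
        if teller_signaled:
            actions.append("follow interpreter to teller")
        else:
            actions.append("remain in current location")
        return actions

    # e.g. deposit_check(line_present=True, teller_signaled=False)
    # -> ["find appropriate place in line", "remain in current location"]

The point of the sketch is only that, once the routine has sedimented, cues like "Your turn" or "Go ahead" select between branches that already exist; they do not launch trajectories into unstructured space.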
Equally important, however, is an alignment between the overlapping coordinate systems anchored by the sensory systems in the body and the broader orientation table these systems are absorbed by. Since the visual system of the DeafBlind person no longer aligns with the rest of his orientation table, he relies on visual data provided by his interpreter. Alignment is thereby maintained by distributing the perceptual field across two participants, only one of whom has full access.
In order for the orientation scheme to snap to a tactile set of perceptual coordinates, the DeafBlind person would have to be able to identify correspondences between a perceptible quality and an object acting as signpost. In other words, the DeafBlind person would have to be able to enter the bank and sort out for himself where the beginning of the line is, when the teller is signaling for him to come, and so on. They would then have to find corresponding values in their visual memory and adjust the expectations that guide movement through the environment, yielding a coherent orienting scheme. However, prior to the pro-tactile movement, the strategy did not involve reconfigurations and realignments like this. Instead, the orientation table was kept intact and a surrogate see-er was inserted, who could provide the minimally necessary cues for routine action.
5.3.1 Useful Interpretations
The type of interpreting that involves minimally necessary cues is known as "useful interpreting" (Nuccio and Smith 2010:122-159). In useful interpretations, emergent aspects of activity are, by definition, not included. If useful interpretations are the only kind of interpretation a DeafBlind person has access to, signposts start to hover above an increasingly irrelevant ground. They are isolated from the extended grid they were once a part of and, therefore, no longer mark moments of decision in a complex network of potential trajectories. Instead, they are like mileposts along a singular and undifferentiated path. Given this state of affairs, it is not possible to deviate from a series of pre-planned actions. Therefore, despite attempts to preserve autonomy, very little remains.
In 2006, while I was conducting two months of fieldwork, DeafBlind leaders were looking for ways to increase the autonomy of DeafBlind people, and the kinds of barriers that were built into the interpreting process were a main focus. In the pre-tactile era, the solution seemed obvious: visual interpreters needed better training. Instead of providing only the most minimally useful cues, it was thought, they should also learn to attend to emergent particularities, or the "interesting" aspects of a setting (Nuccio and Smith 2010:122-159). Interesting aspects of a setting included things that could not be readily referred to a type, a category, or a grid. This kind of input would open up possibilities for action, allowing DeafBlind people to deviate from the plan, become distracted, fascinated, or surprised, and eventually, to have genuine choices in how they moved through their environment with an interpreter.
Over the next couple of years, between 2006 and 2008, conversations on this topic became more public than they might otherwise have been because DBSC received grant funds from the Department of Education to write a curriculum for training visual interpreters. The final draft of the curriculum was published in 2010 by Jelica Nuccio and Theresa Smith. Sections written for intermediate and advanced sighted interpreters provide ways of moving beyond the minimal instructions needed to complete practical tasks, and into the excesses and particularities that cannot be immediately referred to categories, roles, or structures. In order to collect a range of visual data, distinct “modes of attention” were incorporated into the model. These visual data were supposed to fill in where memory had receded, thereby maintaining visual orientation schemes.
5.3.2 Four Modes of Attention for Maintaining Visual Orientation Schemes
As the visual field deteriorates, it becomes increasingly difficult to act on the basis of minimal cues. In order to maintain and repair visual orientation schemes, the attentional repertoire of interpreters was augmented. Interpreters were good at providing clues that would be immediately relevant to the next step in a conscious plan, particularly when the plan was highly scripted, as in the banking scenario. However, given the training they had, they were less able to venture into the details of the situated encounter. For example, they couldn't capture possible but unrealized moves in an interaction. They couldn't grasp transitional moments that turn situations into encounters (in Goffman's sense), or cues that signal types of encounters as distinct from particular encounters. They were also not trained to capture habitual behaviors or routine patterns. Most of this receded into the periphery of their awareness and was, therefore, hard to retrieve and objectify.
However, DeafBlind leaders identified these dimensions of interaction as key for maintaining the deictic field and they thought that interpreters could learn to incorporate them, developing a kind of artistic practice. They attempted to formalize instruction for doing so in the curriculum. Four types of visual information were defined, according to the modes of attention that produce them. Together, these categories were meant to generate both useful and interesting interpretations (Nuccio and Smith 2010:126-7):
Passive seeing is not looking at any one thing in particular (as when walking down a familiar street) but absently noticing things as they come into view.
Focused looking is attending closely to one thing, as when reading, threading a needle, or looking at a painting.
Monitoring is being focused on something else but being aware of changes and ready to respond (as when having a conversation with a friend but monitoring the actions of the children, or having a leisurely dinner but watching the time so you’re not late for the next event).
Scanning is a way of quickly shifting focus or attention across a broad area, looking for something specific, for example: moving focus across an area in search of one particular thing (scanning to see where I put my keys); moving focus across an area for one type of thing (scanning the picnic area for an empty spot); or moving focus around a broad area for a sense of place (scanning a friend's apartment the first time you enter).
For purposes of training interpreters, engaging distinct modes of attention is an ethical matter. If the interpretation is reduced too much to immediately useful cues produced by a focused mode of attention, it can devolve into instructions that do not allow the DeafBlind person to make their own decisions. The choice to continue on course or abandon that course for another requires potentially relevant information, which must be gathered via different modes of attention. Therefore, Nuccio and Smith separate the modes of attention engaged by the interpreter to produce objects of attention from the process of meaning assignation that follows. They explain:
We use our vision to gain a sense of place, to feel oriented, and know where we are. Accordingly, we feel safe or tense, relaxed or focused and so on. We ascribe meaning to what we see. What we see is interpreted by us to mean something. We evaluate what we see (2010:127, original emphasis).
Ideally, it is the DeafBlind person who assigns meaning to objects of attention. If they are highly trained as well (there is nothing natural about the knowledge required to work with visual interpreters), they might even become skilled at deciding when the interpreter should switch from one mode of attention to another, and instruct them accordingly. If the sighted person imposes meaning, then the interpretation is going to give the DeafBlind person access to the interpreter's experience of the environment rather than their own, and agency will be lost. Therefore, Nuccio and Smith identify restraint in meaning assignation as part of the ethics of visual interpreting, and developing this skill is a focus of trainings at every level--from beginning to advanced.
Restraint in meaning assignation requires a mode of attention that dwells in the situated details of the present moment, without leaping too quickly to categories, schemes, and types. Much of this can be accounted for with Goffman's notion of the situation, particularly "scanning" and "monitoring." Bühler's language-user does a lot of focused looking, so that is not difficult to account for, either, given the framework that has already been established. However, the category of "passive seeing" fits with neither Goffman's nor Bühler's framework.
In 2006, I attended a workshop for sighted interpreters on “visual analysis” where passive seeing was introduced. Lee, the DeafBlind instructor, said that with this mode of attention:
The goal is more to evoke an image that the DeafBlind person can then interpret. Tap into the mood of the place, the passive aspects. Fill in the background, the texture of the scene, so the DeafBlind person can be free to make their own decisions about how to interact with their world. You can’t substitute your opinion for visual analysis and expect that to be informative.
Lee and the other DeafBlind instructor, Adrijana, went on to perform a role-playing exercise that illustrated the difference between conveying an "opinion" such as, "That man over there is friendly," and conducting "visual analysis," where details of the scene are relayed as close to the perceptual level as possible. The role-play in the workshop was set in a restaurant. The instructors were interacting, but saying very little to one another. The students were instructed to ignore the dialogue and attend to the "feeling" of the interaction, which they would be asked to report on later. There were several DeafBlind people attending the workshop who were using sighted interpreters. A few moments into the exercise, one of those visual interpreters interrupted the role-play to explain that without dialogue there was nothing to interpret.
The instructors explained that the point of the workshop was to see that when nothing is being said, the real work begins. Some examples they gave were the positioning of shoulders, the movements of heads, the direction and consistency of eye-gaze; light flows and responses to them; details about clothing, shoes, and jewelry, including the way they move, and are adjusted, both habitually and idiosyncratically; the particular rhythms of foot-tapping, hand-tapping, and the coordination (or not) of those rhythms between interlocutors and the broader surround.
Some of the data produced by this mode of attention goes to the habitus and its articulation with the social field. The conveyed cues act as triggers (if not immediate triggers) to act or to speak in particular ways.2 DeafBlind people grew up sighted and therefore developed a sighted habitus. If you tell a person with a visual habitus about the posture someone is assuming and what type of jewelry they are wearing, they will have some clues about what kind of person that is. In other words, bodily comportment, clothing styles, etc., are all visible cues that helped refer people to particular positions in the social field and prevented them from being referred to others. This is the snap-to function of habitus and field.
However, there are also modes of attention that generate sense data which hover in the space between, or are in excess of, any scheme or pattern. Disorientation, confusion, fascination, and the sensation of falling in love are all organized by modes of attention like this. In each of these states, there is a sense of immediacy that resists objectification and analysis. These are phenomena which, for some period of time, fail to snap to any grid of intelligibility. Nevertheless, we are overcome, carried away, drawn in, and otherwise directed by these modes of attention. In this sense, they restrict and guide our actions. In particular, neither the focused looking of a map-reader nor the scanning of the signpost follower can generate a here or a we that is charged with enough intensity and indeterminacy to be readily distinguishable from descriptions of places or groups of people. More than anything else, this is what is at stake in Nuccio and Smith's category of "passive seeing."
In The Passions of the Soul, Descartes distinguished between three types of perceptual activity. First there are perceptions that we refer to external objects. The mechanism for this kind of perception works so that objects or bodies produce movements in the external sense-organs (for example, the eyes, or the hands), then the nerves carry those movements to the brain, and the brain imprints an idea of the external object on the soul. This kind of perception includes things like hearing a bell ring, or seeing a light (1985 [1647]: 337). Second, there are perceptions we refer to the body. The mechanism for these is the same as the first, except that we judge them to be already in us, and not external to us. They include “hunger, thirst, and other natural appetites” as well as pain, cold, and heat. These two differ from perceptions we refer to our soul, which constitute Descartes’ third category of perception.
This third kind of perceptual activity involves "the feelings of joy, anger and the like, which are aroused in us sometimes by the objects which stimulate our nerves and sometimes also by other causes" (1985 [1647]:337). These perceptions are defined by our inability to refer them to an identifiable, proximal cause. We end up referring them to the soul, not because they are generated in the soul, but because their cause is ineffable. Like all other forms of perception, the passions of the soul, or the "affects," describe a process of being affected by external bodies. Unlike other forms of perception, we experience the cause of an affect as ineffable. Affects link us, mysteriously, to others. Ineffability is charged with potential. It heightens our awareness of the immediate surround and others in it, giving us a sense that we are really "here"--that we are in something together.
DeafBlind people wanted to get as close as they could to an intense, immediate, charged present, and they saw sighted people as a portal. This posed a challenge for the interpreter--to generate descriptions that were as concrete and indeterminate as reality. One of the ways this could be done was to include too much detail in visual descriptions, triggering a kind of "reality effect" (Barthes 1984:141-154). The reality effect, for Barthes, is a literary maneuver that involves writing in superfluous detail, drawing attention to things that are "neither incongruous nor significant" (ibid.:142). He argues that such details, only when provided in great excess, can end up conveying something of the character or atmosphere of a place. Each thing remains insignificant, but the cumulative effect of all of that insignificance is a sense of concreteness and immediacy.
For DeafBlind people, too many years of receiving "useful" interpretations caused types, categories, and schemes to peel away from the particularities surrounding them. Therefore, there was no way of distinguishing places from types of places or people from types of people. In an attempt to repair this problem, "passive seeing" was introduced to interpreters as a mode of attention that could restore these distinctions in two respects. First, it would fill in the ground, or horizon, of routine patterns of action and exchange, thereby repairing the trigger-response loop that keeps habitus and field aligned. Second, particularities and excesses that do not snap to any grid or scheme were fed into the indexical ground of deictic reference, creating an intense, indeterminate here for us to inhabit.
This strategy was ingenious, but it did not pan out, for several reasons. First, the literary talents of interpreters vary widely, and great heights of artistry were not often reached. Second, there is no way to fill in the background fast enough. Even as interpreters scrambled to describe every detail of every scene, it was not enough. Reality was perpetually flat, despite every attempt to bring it back to life. As a result, DeafBlind people eventually lost interest in the visual world and, as we will see in the next chapter, efforts shifted toward generating new forms of tactile immediacy, which sighted people had no role in generating. One of the things that prevented forms of tactile immediacy from forming earlier (apart from the socio-historical dynamics discussed in previous chapters) was the persistence of participation frameworks built around visual access and orientation.
5.4 Participation and Access Prior to the Pro-Tactile Movement
Participant frameworks are the emergent configurations that communicative agents occupy in the unfolding of an interaction, while participant frames are the repository of regularities that emerge in participant frameworks across encounters (Hanks 1990:137-187).3 Participant frameworks require participants to assume certain bodily configurations, and these configurations become regularized (or not) along with other aspects of interaction. In this section, I examine the relationship between participation and access prior to the pro-tactile movement by looking at the bodily configurations made possible by common participant frameworks.
In describing these frameworks, I also intend to emphasize for the reader how complex interaction became as a result of radically asymmetrical modes of access among DeafBlind people.
In the previous sections, I have discussed interactions between DeafBlind people and sighted interpreters as they move through social and physical space. The participation frameworks I examine here involve interpreted interactions where the focus is the exchange of utterances. For example, the DeafBlind man on the right in Figure 5.4 is standing on stage giving a presentation to an audience of DeafBlind people. The interpreter next to him relays visual cues, such as a raised hand, from the audience.
The audience is filled with dyads composed of one DeafBlind person and one interpreter.
Figure 5.4: DeafBlind Presenter (right) with Sighted Interpreter (left)
For example, in Figure 5.5, the man on the left is DeafBlind and the woman on the right is a sighted interpreter. The interpreter copies the presenter's signs so they can be received tactually by the DeafBlind person. Each DeafBlind audience member using tactile reception must have at least one interpreter dedicated to them. Therefore, if there are 10 DeafBlind people present, there will be at least 10 interpreters working at any given time. In participation frameworks like these, DeafBlind people do not have direct access to one another. Instead, utterances are channeled through several relays before reaching the intended addressee(s).
Figure 5.5: DeafBlind audience member (left) with sighted interpreter (right)
This was the norm prior to the pro-tactile movement, and it meant that all of the emergent dimensions of interaction--the moment-to-moment adjustments, the embodied particularities of a smile, flushed cheeks, subtle shifts in posture, etc.--were not available to the DeafBlind recipient. They only had access to disembodied utterances and the name of the person occupying an abstract participant role (e.g. "speaker").
Participant frameworks are supposed to act as the repository of regularities in interaction (Hanks 1990:137-187). However, without access to embodied particularities in the physical and interactional environment, stores grew thin. As visual memories faded, it became more difficult for DeafBlind people to imagine how disembodied utterances were being brought to life around them. It also became difficult to participate in the situated encounter in convincing ways. DeafBlind people ended up depending on interpreters to direct their attention, tell them who they were talking to, where to stand, what orientation and posture to assume, etc.
This reduction of immediacy to displaced roles and disembodied utterances took the automaticity and the appeal out of interaction. Everything required conscious effort; people were flat and uninteresting; deictic reference was difficult to resolve; the exchange of utterances was stilted and arrhythmic. However, prior to the pro-tactile movement, abandoning interpreters and engaging in direct, tactile communication was not an option, since there were no participant frameworks available for organizing tactile access. Everyone was out of reach.
In addition, each DeafBlind person was losing vision at different rates and in different ways. Some people spent a lot of time in Orientation and Mobility training, others did not. Some people established relative spatial relations tactually (as in Figure 5.3) and some people established relative spatial relations visually (as in Figure 5.2). Some people spent most of their energy reconstructing visual scenes around degraded and partial visual data, while others turned more quickly toward tactility. Individuals were compensating in idiosyncratic ways. At the most fundamental, perceptual level, this contributed to the deterioration of reciprocity in interaction.
For example, DeafBlind people who had only a small tunnel of vision left would back up farther and farther from their interlocutor in order to see them. People who communicated like this were identified as “tunnel vision people.” When this strategy no longer worked, the DeafBlind person would be forced to use tactile reception, thereby becoming a “tactile person.” Being a tactile person did not mean that a tactile orientation scheme had replaced a visual one. It meant that VASL signs were received tactually, rather than visually, and sighted social roles were no longer available.
Once people "went tactile" they could no longer communicate with their tunnel vision friends or co-workers. Two tunnel vision people could stand far away from one another and communicate directly (with greater or lesser success). However, the procedure for a tunnel vision person and a tactile person was as follows: each time the tunnel vision person assumed the role of speaker, they would move to where the tactile person could touch them. Each time the tactile person assumed the role of speaker, the tunnel vision person would have to back up. It wasn't clear to the tactile person when the tunnel vision person was in position, though, so they might start signing before the tunnel vision person had gotten situated. The tunnel vision person was not likely to use tactile reception, even temporarily, because it would thrust them into a blind social role, and that move was seen as irreversible (see Chapter 3). Given this state of affairs, there was nothing reciprocal about the here occupied by a tunnel vision person and the here occupied by a tactile person. In this and other ways, the indexical ground of deictic reference was disjointed.
For these reasons, communication between DeafBlind people across sighted and blind social roles was far too cumbersome, and it rarely happened. Likewise, communication between tactile people was difficult because there was no stable deictic field organized along tactile lines. Direct communication was blocked by many layers of mediating structure in the social and deictic fields, all of which had been built up around visual capacities and modes of orientation. Although much of that structure had nothing to do with vision directly, taking vision out of the center caused the rest of it to collapse. Interpreters were not really able to solve these problems. However, the sheer diversity of orientation schemes among DeafBlind people left little alternative. It seemed impossible to imagine a scenario in which DeafBlind people could communicate directly with each other.
5.4.1 Participant Frameworks as Compensation
DeafBlind people came from different backgrounds and had very different ways of communicating. On top of this, they had different sensory capacities and orientation schemes. Interpreters dealt with this by accommodating each individual according to their needs. Therefore, if there were 15 DeafBlind people at a presentation, there were likely to be almost as many routes of transmission--each one constrained in different ways. To manage this, each interactional setting had to be pre-structured on a case-by-case basis. Planning communicative events like this required a great deal of expertise because, unlike most routine encounters, nothing in this context was taken for granted. In other words, there were no mechanisms for linking basic participant frames to the situated present in the unfolding of the interaction. Therefore, the interaction had to be, quite literally, planned.
Trudy started coordinating interpreters as the community was coming into being in the 1980s and she has been involved ever since--as an interpreter, interpreter coordinator, and in many other capacities. She has the kind of mind that can grasp the complexities of nonreciprocal interactions, anticipating beforehand where the sight lines will be, where tactile access is necessary, how many interpreters will be needed, what skills those interpreters must have, if there will be any personality conflicts, and on and on.
In an interview, Trudy provided me with some schematic representations of typical interpreting scenarios. As she described them, she sketched the configuration of objects and bodies on a notepad and explained in spoken English what types of scenarios would call for the configuration. I had a video camera focused on the notepad, and the microphone picked up our verbal exchange. The audio was transcribed, and I reproduced her sketches in digital form using Microsoft Word. Because Trudy and I assume a lot of shared background knowledge, her descriptions require some supplementary explanation. Drawing on my experience as an interpreter and participant in the community, I fill in as much as seems necessary to make Trudy's examples legible to the reader. The examples I provide do not constitute an exhaustive list of interactional frameworks mediated by interpreters, nor do they include all of the examples that Trudy described, but they do give a sense of how interaction was organized prior to the pro-tactile movement. They also give the reader an opportunity to appreciate the complexity of the mechanism that was required to compensate for a lack of direct, tactile access to the situated encounter, and the kinds of routinized regularities that settle out of such frameworks.
A Banquet
One of Trudy's first scenarios involved a tunnel vision person attending a banquet, or more specifically, a fundraiser luncheon. In this case, the DeafBlind person is sitting at a large, round table. For a person with tunnel vision, such scenarios are impossible without an interpreter, even if everyone else is Deaf and using VASL, because conversations jump around, and without peripheral vision, you don't realize when someone is bidding for a turn by leaning forward, raising their hand slightly, or giving off other fairly subtle cues that they would like to take the floor. Figure 5.6 is a representation of the sketch that Trudy drew while she was explaining this configuration.
Figure 5.6: A Banquet
The solid black triangle represents the position of the DeafBlind person. The solid black rectangle and the white rectangle both represent interpreters working with that person. Below is a transcript of her narration that accompanied the sketch:
Sometimes if it’s a fundraiser luncheon . . . something where there’s a table, there’s a round table and the interpreter’s over here [draws the solid black rectangle], the DeafBlind person is over here [draws the solid black triangle], and they’ve got [tunnel] vision [draws the arrow]. But [then] waiters are bringing food, things are happening over here [points to the area to the left of the black triangle]. Then the ‘off’ interpreter sits here--the team interpreter . . . This [solid black rectangle] is the ‘working’ interpreter and this [white rectangle] is the ‘feed’ or team interpreter. [T]hen their role is tactile information.
When Trudy says "tactile information," she does not mean information acquired via tactile modes of access. She means information acquired via visual modes of access, which is described to the DeafBlind person who is using tactile reception. Therefore, we can consider this interpreter the visual interpreter, while the interpreter represented by the solid black rectangle is the language interpreter, the one focusing on utterances. When the server comes to ask for everyone's order, the visual interpreter tells the DeafBlind person that they are approaching, but the language interpreter translates the server's utterances. When someone gestures as a bid for a turn in the conversation, the visual interpreter interprets those gestures while the language interpreter translates the utterances of the person who takes the floor.
Even with two interpreters working in sync, the stream of information that is provided is necessarily a gross reduction of what is happening in the environment and at the table. Interpreters tend to focus on utterances and the minimal visual context needed to interpret those utterances. If there are other DeafBlind people at the table, their utterances are translated in the same way that the utterances of sighted people are translated (as opposed to being exchanged directly). Therefore, although both DeafBlind people might be using tactile reception in some capacity or another (i.e. with the visual interpreter who is providing supplementary visual information), the field of engagement is organized along visual lines, and utterances are designed for sighted addressees.
A Tunnel Vision Presenter on Stage
As part of my fieldwork, I attended bi-monthly classes sponsored by the Lighthouse. The class is known as "DeafBlind class" and it functions like a local newspaper. It is a venue for sharing news and also an opportunity to learn about new things that are not directly related to work. One class that I attended was a Discovery Channel-style presentation about earthquakes that was given by a Deaf sighted person who is well-known in the community. Another class was an introduction to yoga, taught by a DeafBlind woman. At another class, representatives from the Port of Seattle came to address concerns about the airport, and DeafBlind people stood up and told them their stories about difficulties they had encountered with airport personnel and physical accessibility. This helped the representatives understand how they could improve access for DeafBlind people, and it also provided a forum for DeafBlind people to share their experiences with one another.
Before, during, and after class, DeafBlind people communicate mostly via their interpreters, or they communicate with their interpreters and other sighted people who attend. Direct communication between DeafBlind people is rare. I understood Trudy's description of the presenter-on-stage scenario largely in this context, since this was where I saw DeafBlind people (tunnel vision and tactile) on stage presenting. The number and positioning of sighted relays become complicated very quickly. For example, in the scenario in Figure 5.7, a tunnel vision person is giving a presentation. He is standing on stage, and the interpreter next to him is making sure that he is facing the audience, so sighted interpreters can see his signing clearly. If he drifts off to one side, or rotates his body at all, the interpreter will give him cues to adjust his position. If a person in the audience asks a question, their utterance takes the following route.
First, the "DeafBlind Question Asker," in the lower right-hand corner of Figure 5.7, stands up and asks a question. The "platform interpreter" copies the utterance. Next, the interpreter seated at the base of the stage copies the utterance again. The presenter has visual access to this interpreter through his tunnel of vision, and this is how the question finally reaches him. It is done this way because if the presenter had to scan the audience with his tunnel vision, searching for the person with a question, it would take far too long; the interpreter seated at the base of the stage acts as a stationary animator through which utterances are funneled. This is just one example of many participant frameworks, which together constitute a compensatory mechanism that allows DeafBlind people to approximate visual ways of listening, watching, and interacting.
Figure 5.7: A Tunnel Vision Presenter on Stage
In participant frameworks like these, in contrast to unmediated ones, the machinery of interaction often intrudes on the explicit aims of participants. A successful presentation like this is a feat of communication engineering that is possible only due to the work of a highly trained and very experienced interpreter coordinator who, like Trudy, has the kind of mind that takes into account (in advance!) all possible routes of information transfer, sight-lines, visual capacities, communication skills, etc.
In addition, everyone in the room is wearing clothing that contrasts with their skin color--if their skin is light, they wear black shirts with high necklines. If their skin is dark, they wear teal or white shirts with high necklines. That way, if a tunnel vision person is looking at you, they will be able to see your hands against the background of your body more clearly. There are curtains hung behind the presenter to block out visual noise and there are large pieces of yellow tape on the stage to help DeafBlind presenters keep a visual orientation to their audience. All of this constitutes a compensatory mechanism that allows partially sighted and blind DeafBlind people to approximate visual modes of interaction.
However, as vision is lost, approximation becomes less and less convincing from all perspectives and further compensation is necessary. For example, the following scene unfolded in DeafBlind class (recorded in my field notes during the class):
Allen is doing the announcements today. Someone is standing behind him, changing his position, presumably so that people with low vision can see him, and maybe so he is facing a natural direction for the sighted members of the audience. When he is nudged, he is hyper-responsive--saying quickly and nervously, “Sorry. Sorry.” and moving over in a somewhat dramatized fashion.
I noticed that this type of response was most common among people who had very little vision and had not spent a lot of time cultivating tactile sensibilities. Many DeafBlind people, prior to the pro-tactile workshops, were hyper-responsive to feedback about visual communication norms. They turned jumpy and nervous, like a person who has received a signal that they must act, but has no structure to guide their actions.
These complex networks of mediation did, in fact, allow utterances to circulate among DeafBlind members of the community. However, particularities no longer accrued to the situation in which utterances were instantiated. A presenter on stage was just a speaker, and there was no sense of how that role was realized in particular bodily configurations, gestures, postures, mutual embodied adjustments, and other emergent phenomena. DeafBlind people may have known, on some level, that when they addressed an audience, they were supposed to orient their bodies in a particular way. But if they didn't know where they were, or for that matter, where their addressees were, exactly, the trigger-response loop would remain incomplete.
Breakage in this loop can be compared to the jumpiness and anxiety one feels while lying in bed in an unfamiliar place, listening for intruders. You begin to strain and extend yourself toward whatever clues you receive, but if you don’t know what the clanking coming from the garage indicates, no description of it, no matter how detailed, can really bridge the gap. At this point, along with frustrations about always feeling left out, and one step behind, there is a sense that social norms are always being broken, or they are about to be broken--hence, the nervous side-stepping and repetitive apologizing. These are the limits of displacement, even when it is brilliantly orchestrated and highly elaborate, as it has become among sighted interpreters in Seattle.
A Meeting with a Facilitator
I have already touched on some of the constraints on interaction that derive from the social field and the effects of those constraints on participant frameworks and modes of access. The type of mediation that is provided does not vary straightforwardly according to the amount of vision that a given DeafBlind person has or doesn't have. Looking at the meeting-with-facilitator scenario reveals some additional perspectives on how embedding in the social field bears on mediation strategies. In this scenario, the following categories of people are involved: DeafBlind people, Deaf sighted people, and hearing sighted people fluent in VASL. As in all previous scenarios, the common language is VASL and the deictic field is organized visually, with various compensatory mechanisms built in. In Figure 5.8, the white crosses represent hearing people who sign. The white circles represent Deaf, sighted people. The "F" represents the position of the facilitator, or the person running the meeting. The white rectangle to the right of the facilitator is a copy signer, who plays the same role as the platform interpreter in DeafBlind class. The black rectangle on the left side of the semicircle is a DeafBlind person who is using tactile reception, working with interpreters. The arrow traces the sight lines of the interpreters. The DeafBlind person is facing away from the facilitator, and the interpreter is facing the facilitator and copy signer. The interpreter can either watch people in the meeting sign and interpret what they say to the tactile person, or if those people are not visible to them for whatever reason, they can look to the copy signer for reproductions of what they've said.
Figure 5.8: A Meeting with a Facilitator
The white triangle represents a tunnel vision person and the black rectangle next to that is a "pointer" who directs attention to the current speaker. The assumption with this kind of compensation is that the tunnel vision person has enough vision to locate a person if directed to the general area, and once they have located them, they can see their signing without the use of an interpreter. However, because of their reduced peripheral vision, they cannot follow conversations with multiple participants and rapid turn-taking. Trudy explains the role of the pointer:
That person is [pointing]. They also . . . This is like, this is kind of that transition-- if [the tunnel vision person] missed the fingerspelling, [this interpreter] might do the fingerspelling. . . . By the time they need this [kind of interpreter], they should be doing more tactile [reception, and be able to understand tactile fingerspelling]. Should be. But if, for whatever reason, someone is not very tactile, then this is a way to do the transition, where there’s ‘You know, I’m a visual person . . . ’ and for some reason it really helps if this person is a Deaf interpreter for some reason. It’s more comfortable, or more . . . whatever. Not always.
Trudy talks about this type of compensatory mechanism as a transitional strategy. However, it fits more easily within the broader pattern of resistance to all things tactile. Tactile reception (not to mention tactile practices that require more than just the hand) was considered something you did only if you “had to.” Therefore, it came piecemeal. You use a pointer, then later you add on a couple of relays so you do not have to locate the actual speaker. Later, you back up from the interpreter so you can still see them in your tiny tunnel of vision. The very last step, and only when it is absolutely necessary, is to switch to tactile reception of VASL signs.
This makes sense if shifting to tactile reception is seen as the first stage in a transition toward greater and greater alienation from the social world. The most fundamental insight of pro-tactile theory is that this alienation is not necessary, and it can be avoided given a field of engagement organized along tactile rather than visual lines.
Notice that the explanation given by Trudy has to do with reflexivity regarding personhood. It is not about access. The history of the Seattle DeafBlind community, embedded in a broader history of disability, deinstitutionalization, the rise of sheltered workshops for the blind, "vocational rehabilitation" programs, the recognition of VASL as a language, the rapid uptake of the notion of "culture" in public discourse and the application of this notion to Deaf, sighted people, yielded two basic, contrastive social roles: sighted and blind (see chapter 3). Greater forms of authority accrue to the sighted role, and legitimacy accrues to visual modes of access and representation. Therefore, in an attempt to take up more valued social roles in interactions within their community, many DeafBlind people continued to use Visual ASL long after it had ceased to serve as a useful mode of access to the environment and to utterances. However, another reason, prior to the pro-tactile movement, was the striking lack of any alternative.
Going tactile did not, until recently, mean entering a world dense with particularities and potentials, nor did it mean finally finding your people. Instead, it meant always being a description away from the charged reality of living with others. The social field kept the deictic field organized around visuality, despite the fact that participants couldn’t see. However, this discontinuity led to the slow degradation of the visual deictic field. This, in turn, meant that deictic signs in VASL did their work of referring less effectively as time went on.
5.5 Deictics in Search of a Field
Erosion of the deictic and perceptual fields became visible on occasions when deictic reference could not be resolved; when, for example, directions given in VASL to the kitchen in a friend’s house were misunderstood, when grammatical relations and phonemic distinctions that relied on the discernment of relative spatial locations were treated as ambiguous, or when descriptions could not be linked to the objects they described. In what follows, I discuss a few examples, recorded during Orientation and Mobility (O&M) trainings. Additional examples will be discussed in the following chapter, along with solutions that were eventually applied.
Exchanges between Marcus and his students in O&M trainings offered opportunities to examine the inadequacies of VASL deictic signs given tactile orientation schemes. In most circumstances, there are too many layers of potential confusion to single out the deictic sign as the culprit. However, Marcus is not a typical sighted person. He has been trained for many years to apprehend physical spaces in tactile ways. Nevertheless, the only language he had at his disposal was VASL. VASL is sensitive to the deictic field that has grown up around visual orientation schemes. Therefore, it is not surprising that VASL deictics were often ambiguous for the DeafBlind recipient.
5.5.1 Deictic Reference in the Transit Tunnel
After several failed attempts at finding a good starting place for orientation in the tunnel, Marcus explains to Helen that buses and trains take the same route through the transit tunnel, and he points to the line. Helen is shocked. She yells, "What! How?" and immediately pushes her cane out into the road to find the tracks. Marcus explains that when a train comes through, it uses the tracks, and when a bus comes through, it drives over the tracks. Once she feels the track with her cane, she has a perceptible link to an organizing line in the tunnel, and she begins to build up structure around this line.
The fact that both buses and trains take the same route through the tunnel is a crucial piece of information for establishing an orienting structure. It contributes to a retrievable "field value" in Bühler's terms, which is assigned to the meaning of the deictic utterance. However, the process breaks down for Helen because for her, the basic design of the tunnel cannot be taken for granted. Bits of information like this--the design of new transit structures, new clothing styles that sweep the urban landscape, new highways, new technologies (cell phones, iPods, iPads, smartphones, and so on)--are precisely the kinds of things that DeafBlind people miss out on. They are the topic of conversation in the general population only briefly before fading into the background of urban life. This kind of shared knowledge accrues to the indexical ground of reference, and when the language user does not have access to it, deictic signs become contextual receptors set to receive values that are no longer retrievable.
This problem is compounded by the fact that the signs themselves are positioned in the deictic field and access to them is restricted when vision is restricted. For example, Marcus describes the layout of the transit tunnel in a way that would seem unremarkable to users of VASL. He names places within the tunnel, such as entrance, and then locates them in the space in front of his torso. Using a combination of signs like right and left, he traces relative spatial relations between localized elements in space. After a few moments of this, Helen interrupts him, saying she doesn't understand, and she asks him to stop pointing to the "air."
This problem arises again when Marcus tries to map the length of the tunnel onto the cross-streets above ground (Figure 5.9). Marcus (represented by the figure on the left) raises his non-dominant arm up so it is parallel with his chest. Without touching his signing hand to his arm, he signs "6th," "7th," and "8th" above the arm, moving from the space just above the elbow to the space just above the wrist. With this information, Helen (represented by the figure on the right) would know that the tunnel was three blocks long, and she would also know something about the location of the tunnel relative to the downtown grid. However, Helen does not understand the description and asks him to refrain from signing "in space."
Figure 5.9: 6th to 8th street above tunnel
As you can see in Figure 5.9, the DeafBlind recipient has tactile contact only with the signer's dominant hand. The non-dominant hand, which forms the ground against which relative spatial locations are established, is not available tactually. In both cases, there is no perceptible ground against which deictic relations can be established. Therefore, in addition to a lack of structure in the deictic field, there is also a lack of structure in the perceptual ground of deictic signs themselves. Over time, these problems accumulate and make it increasingly difficult to establish shared orientation schemes. One place where these problems become unavoidable is in interactions organized around the activity of direction-giving.
5.5.2 Direction-Giving
In a sighted world occupied by sighted people, things like transit routes are shared and ways of orienting to them become routinized in practices like direction-giving. As DeafBlind people lose their vision, they become increasingly alienated from these practices. In the beginning, they find themselves giving directions less and less, but later on, they find that they can’t understand directions either. This all points to a disarticulation of deictic signs from the deictic field, compounded by the breakdown of figure/ground relations in the signs themselves.
After Helen and Marcus boarded the bus on the way to the transit tunnel, Helen asked Marcus about the route. Marcus explained that the bus goes “down Eastlake, past REI, into downtown and then into the tunnel.” They were sitting across from me on a crowded bus, and shortly after he explained this, I lost sight of them because the space between us had filled up with people. So I don’t know how Helen responded to this explanation, but the description is worth some consideration. The bus passed by many locations, but Marcus mentions only one road, the name of one business, one area--“downtown,” and the destination for the trip, which is the transit tunnel.
For me, as a sighted person who is familiar with Seattle, this description is adequate because it distinguishes a limited number of feasible routes from one place to another. The city is not perfectly grid-like because it is built around several bodies of water. These bodies of water force traffic through several bottleneck bridges. From Greenlake to downtown, there are two feasible options--Interstate 5 or Eastlake. Eastlake crosses underneath I-5, and the two form an "X" when viewed on a map from above. They diverge as you enter the downtown area. At that point of divergence, REI appears as a salient visual landmark.
Part of the salience of the building is the architecture. It is a multi-story building the size of a warehouse, and the walls in one large portion of the building are made almost entirely of glass. In addition, REI is a camping and outdoor sporting goods store and Seattle is a place full of camping and outdoor sporting people. Even people who do not camp or engage in any kind of outdoor sports dress as though they do. Therefore, the building has been visited by many residents of Seattle and is likely to be familiar. Its salience as a landmark, then, derives in part from its size and eye-catching design and in part from widespread familiarity with it.
It is unclear whether Marcus' description felt adequate to Helen. However, it is safe to say that if a DeafBlind person were describing the route to another DeafBlind person, this is not how they would describe it. Many years ago, I was riding a bus along this very same route northbound, when I noticed a DeafBlind person I knew coming aboard who happened to be fully blind. I took a seat next to him and we struck up a conversation. He asked me where I was going, I told him, and then we moved on to other topics. At some point, I stopped paying attention to where I was, but just before I would have missed my stop, the DeafBlind man interrupted our conversation and told me I had better get my bag because my stop was coming. I thanked him and asked him how he knew. He said that he sometimes gets off at that stop (the DeafBlind Service Center used to be located there) and he knew that prior to that stop, there are characteristic motions of the bus that he had sensitized himself to.
It would have struck me as odd if the DeafBlind man had said, "There is a cafe across the street with a giant spinning saucer on top," or if he had referred to some other visual landmark. Marcus, like this DeafBlind man, has learned to orient to tactile dimensions of setting. However, in this case, he did not produce a description based on a tactile orientation scheme. The reason is that Helen asked Marcus a question to which there is an appropriate and routine response for long-time residents of Seattle, who know that there are a limited number of routes from Greenlake to downtown. The routine association of particular questions with particular kinds of responses derives from the patterns of activity those questions and responses are embedded in, and the shared modes of access that participants have to those activities. Since Helen no longer had access to the visual dimensions of the route, Marcus' description did not articulate to any structure outside of it. Although Marcus would be more equipped than most to understand why, he is still bound by routine patterns of action and exchange. Furthermore, the only language at his disposal was VASL, which responds to and is shaped by those routine patterns.
5.6 Conclusion
Stripped down to their most basic functions, deictic signs do two things: they name and they point. Both functions were disrupted by the deterioration of the deictic and perceptual fields among DeafBlind people. The naming function was disrupted because the ground of the signs themselves became inaccessible, rendering the "name" uninterpretable. The pointing function was disrupted because from the perspective of the DeafBlind recipient, there was not enough differentiation or density in the field to which the signs articulated. Around these two basic functions, additional layers of mediating structure also broke down, including orientation schemes, modes of access, structures of participation, conventions for maneuvering within those structures, and shared knowledge. Any act of deictic reference is undergirded by complex networks of overlapping coordinate structures. If the deictic system fails to shift with the deictic field, it ceases to function. In the next chapters, I will argue that as the deictic field was reconfigured across a group of language users, the deictic system shifted as well. This process contributes to the grammatical divergence of VASL and TASL.
Heroic measures were taken by interpreters and members of the DeafBlind community before the community turned to tactility en masse. Almost every dimension of communication and interaction was mediated, channeled through complex systems of relays. Modes of attention were manipulated, literary devices were employed, and yet, in the end, it became clear that displacement was only possible given a reality that felt immediate, intense, and indeterminate. In order to act on this realization and build up new structures around tactile modes of access and orientation, a reorganization of the social field was necessary. Put another way, a prerequisite to changing the structure of the deictic field was nothing less than a social movement. In this sense, the deictic field presupposes the social field, and its role in processes of language emergence cannot be understood in isolation.
Chapter 6
Reconfiguration of the Deictic Field of TASL
Prior to the pro-tactile movement, DeafBlind people relied heavily on sighted interpreters to access utterances, participate in interactions, and navigate physical and social spaces. Early on in the process of vision loss, interpreters were fairly effective. Eventually, though, the interpreter’s task became ludicrous. Filling in a missing word here or there became replication of entire utterances, which became replication of utterances and non-linguistic communicative cues, which became detailed descriptions of the crowd, the way light interacts with surfaces, the way styles among the youth keep changing. Interpreters found themselves doing cross sections of rooms, tracking patterns in the width and texture of pants, describing pale-skinned women sulking on the giant billboard above, or trying to capture the 5:00 malaise gathering itself on the inside of a city bus. In short, interpreters found themselves trying to reproduce reality in real time. Needless to say, such ambitions cannot be maintained, and even if they could, DeafBlind people eventually lose interest as their concerns and curiosities turn tactile.
When Helen was losing the last of her usable vision, she started responding to visual descriptions by laughing and yelling, “I’m blind!” One day, her husband told her that their dog had a dead mouse and was eating it on their living room carpet. He started describing the scene. She interrupted him saying, “I’m sorry dear, but your wife is blind as a bat.” Then she crawled onto the floor, opened up the dog’s mouth and smelled inside. She sniffed around the scene and felt the dog’s mouth where there was blood. She noted that blood does not have a distinctive smell, and her curiosity was satisfied.
Around this same time, Helen also started substantiating her claims about people with tactile facts. For example, one day she told me that the skin on Jodi's arms is soft all the way down, but when you get to her palms and fingers, it turns rough. Helen wondered what was going on over there at Jodi's house that made her hands feel like that. There was only one conclusion to be drawn, she said--that there is more to Jodi than there seems to be. Jodi is interesting; it's something about her discrepant textures and what they conceal about her home life. Then there was Joseph, whose signing, Helen reported, was often repetitive and light. She said that the rhythm of his false starts and the weightlessness of his movements suggested shyness, but she dwelled on the physicality of his hands longer than she needed to in order to reach this conclusion.
When DeafBlind people start talking like this, it is a sign that tactility has become a positive reality and is no longer an encroaching fear. Long before this moment, Visual American Sign Language has begun to feel inadequate to all involved. Directions to the bathroom in a restaurant are misunderstood. Stories vivid with visual detail conjure one-dimensional, faded scenes and are no longer interesting. Grammatical relations and phonemic distinctions that rely on the discernment of relative spatial locations become ambiguous.
There is not much an individual can do about problems like these, but in 2007, with the inception of the pro-tactile movement, DeafBlind people set out to address such problems collectively. Toward this end, a series of 20 pro-tactile workshops was organized for 11 DeafBlind participants by Adrijana and Lee, two DeafBlind leaders who had been developing new tactile communication practices in their professional and personal networks for about four years at the time. The workshops took place over the course of 10 weeks in the winter of 2010 and 2011.(1) In this chapter, I analyze shifts in the structure of interaction that took place during the workshops. My central claim is that this transformation is not reducible to a linguistic process, nor is it best understood as a cognitive one. Rather, it is an interactional process, which affects the organization of the deictic field.
The deictic system is analytically distinct from the deictic field. The former belongs to the language, while the latter belongs to context. The deictic system, like a collection of distinguishable signposts, can only point this way and that; in order for an object to be individuated, the signposts must articulate to distinguishable and external referents. Bühler compares the deictic field to pathways where corresponding signposts are positioned (2001 [1934]:93-6). The processes through which those pathways are carved out and navigated are not linguistic in nature (Hanks 1990, 2005). Rather, they have to do with the modes of access that participants have to the immediate environment, and the routine patterns in activity that make some pathways more common and more expectable than others (ibid.).
The deictic field is also not a social construct. In the social field, the body is evaluated against social frames of value. Habitual bodily movements, gestures, acts of touching, patterns in how words are pronounced, and the like are judged as polite or impolite, appropriate or inappropriate. Habituated motoric patterns like these accrue to the "habitus" via socialization processes, which unfold in ontogenetic and historical time (Bourdieu 1990 [1980]; see also Chapter 1 of this dissertation). In contrast, postures, movements, and semiotic cues in the deictic field get recruited "enchronically" (Enfield 2009:10) in the back and forth of face-to-face interaction. Here, they function as turn-taking cues, backchanneling cues, signals to modulate and direct attention, and so on. These signals are organized around, and constrained by, shared modes of access, and they require certain bodily configurations to be exchanged. Bodily configurations are associated, in more or less conventional ways, with participant frameworks, thereby persisting beyond a single interaction (or not).
These are analytic distinctions. In practice, the deictic field is always already embedded in the social field. This accounts for the fact that if it is considered impolite or inappropriate to touch other people or objects, tactile modes of access will never be established in the deictic field. Nevertheless, deictic phenomena do not yield to social or linguistic analytics and the reverse is also true in each case. Therefore, the deictic field must be distinguished from the social field and the linguistic system before each can be productively linked to the other. Analytically isolating the deictic field, and setting it apart from social, linguistic, and cognitive constructs, is essential for generating a coherent account of the grammatical divergence of TASL and VASL.
In this chapter, I focus on two moments in this process, which are pivotal for the overarching analysis: (1) the reconfiguration of orientation schemes, and (2) a reconfiguration of participant frameworks and bodily configurations. In both cases, material clues (as Bühler calls them) were incorporated into, and subsumed by, the structures of the deictic field. Textures, densities, tensions, and temperatures were subsumed by rhythms, trajectories, and olfactory singularities. Unlike cognitive representations and universal human capacities, these are concrete things, which respond to and are subsumed by other concrete things. I argue that the reorganization of these material clues into new configurations yields channels through which the immediate environment can be grasped in reciprocal ways by tactile people. This, in turn, is triggering a reconfiguration of the deictic field of TASL.
I begin, in section 6.1, with an ethnographic account of how DeafBlind people establish new orientation schemes. The main argument in this section is that establishing an orientation scheme is not equivalent to building a conceptual representation. Rather, orientation requires the traveler to incorporate material qualities such as texture, density, and line into situated, location-specific patterns. In section 6.2, I show how the orientation schemes of DeafBlind individuals were aligned via conventionalization of participant frameworks and the bodily configurations they incorporate. Here, embodied particularities must be integrated with participant frameworks. Like the reconfiguration of orientation schemes, this amounts to a process of contextual integration, as opposed to a process of conceptual representation.
6.1 Establishing New Orientation Schemes
Prior to the pro-tactile movement, DeafBlind people in Seattle tried to maintain orientation schemes that incorporated visual coordinates. Those who had enough vision left occupied participant roles that were built up around those schemes. Attention-getting strategies involved waving a hand in the direction of the addressee. Signals for regulating turn-taking involved head-nods, nose-wrinkles, and visible shifts in body posture. People stood at visual distances from one another. Those who did not have enough vision to occupy participant roles and move between them produced and received utterances via a sighted interpreter. For this reason (along with social pressures discussed in Chapters 3 and 4), changes in sensory capacity were not generally followed by a reconfiguration of orientation schemes. Tools for the reconfiguration of orientation schemes have been available since the 1980s in the Seattle DeafBlind community via orientation and mobility or "O&M" specialists. In order to understand how these practices contributed to new orientation schemes, I observed six O&M training sessions, each one lasting between 2 and 3 hours, with a total of two DeafBlind people.(2) These training sessions were led by an instructor I will call Marcus.(3) My central thesis in this section is that reconfiguration of an orientation scheme is not primarily a matter of conceptual representation.
6.1.1 Learning to Fly: Orientation Is in Motion
On my first day with Marcus, we met Allen at his house and we all drove together to Alki Beach. Upon arrival, Marcus tells Allen that they will be starting in the same place they started last time. He draws his attention to the strong smell of the water, and says, “Remember?” As they begin the session, Allen is nervous. We are all standing on a path that runs parallel to the beach, which is set back, near the road. On either side of the path, there are strips of grass, and further down there are obstacles, such as poles and stairs. Marcus hangs back and tries to interfere only when necessary for safety reasons, or when certain issues that he planned to address in the session arise.
Allen starts out holding his cane in his right hand. Marcus places his hand on top of Allen’s and explains that the arc of the cane should be only as wide as the shoulders. He tells me later that Allen has a habit when he first sets out of standing still and sweeping his cane across the entire width of the sidewalk and back again several times. There are reasons this is not allowed. One reason is that you can trip people who are walking by. But the more fundamental reason is that the cane, when used properly, is not a tool, but one element in a very precise relational system. Other elements include joints, such as the wrist, the knees, and the ankles, and the soles of the feet where they make contact with the ground. The relations are largely rhythmic and in order to cohere, forward motion must be consistent and focused.
The wrist snaps to the right, pulling the cane into its shallow arc. Pressure must be applied and relieved as necessary to make the cane float across the concrete on the sidewalk--too heavy, and it will get caught on things; too light, and it will be uninformative. When the cane comes in line with the right shoulder, the wrist snaps in the other direction, pulling it again into its arc. Each time the wrist snaps, the leading foot rises up off of the ground and floats forward. As the cane comes in line with the shoulder, the foot is planted. A single rhythm must form in the stepping of the feet, the snapping of the wrist, and the tapping of the cane. When an obstacle is encountered, or the cane snags on a surface, Marcus says you do a "military one-two" recovery. Miss a beat and you're lost. Marcus tells me that is why he continually reminds Allen of the importance of confidence and a positive attitude--because orientation is in motion.
The first stretch of the pathway is fairly clear, but further out, there are obstacles, such as curbs. The first time Allen encountered a curb, he stopped moving. He was, no doubt, focusing on restricting the arc of his cane and coordinating his joints and feet with its movement as instructed. He had a lot to think about. So when the cane slipped off the edge of the curb, he stopped cold in his tracks, and moved sideways instinctively. Marcus described this move as reactive and said it is the most dangerous response to obstacles.
When Allen shuffled sideways, his rhythmic field retracted, like a fountain being turned off, and he was left totally unprotected. Marcus was emphatic. When new information comes, you have to be able to "turn on a dime" because with good technique, you have very little reaction time. You are walking along--snap, tap, step, snap, tap, step, snap, tap, step, snap, bam! You hit a large metal pole with your cane. From that moment, you have the interval of one step before your face hits the pole. And if you respond in an arrhythmic manner, you risk complete disorientation.
Walking behind Allen, Marcus shares his observations with me: "The arc on the right is too shallow, the wrist is too stiff, the right foot is dragging..." All impediments to a smooth and coherent rhythm. In addition, Allen has a tendency to zig-zag from side to side on the sidewalk, adjusting his course as he reaches strips of grass on either side. This causes the protective field to become asymmetrical in addition to arrhythmic--an almost equally hazardous situation. Marcus repeats that "[i]t's all about your line of travel." If you don't pay attention to that, you end up in "pocket spaces"--doorways, entryways, staircases, or worse. Being able to walk straight is key.
I asked Marcus how any DeafBlind person who is fully blind can keep track of whether or not they are walking in a straight line. He said, "It's like flying. There are no visual points of reference like sighted people have, just proprioception. It's all in the feet, ankles, and knees. Information goes straight from the joints to the brain." Marcus told me a few weeks earlier that he wears socks made out of something like wet-suit material. He trains for marathons in them--running for miles on trails in the woods. He said they're better for your joints because your feet become sensitive to the ground and can respond in ways that are better for your body. In shoes, the connection to the ground is blocked, responsiveness in the joints is stifled, and the whole process is more coarse and ultimately, more wearing. He says it would make a lot of sense for DeafBlind people to use shoes like this, though he has never asked anyone to try it. With the weakened proprioception of a shoed foot, movement is even more important. Marcus explained that that is why breakthroughs often happened while walking downhill. A couple of months into Allen's training, after he had been struggling to find his rhythm, this is precisely what happened. All of a sudden, while walking down a steep incline, rhythm, orientation, and movement aligned. Marcus said you could tell--something clicked.
Marcus contrasted the body-state of a person walking downhill (which is optimal) with that of a “curious traveler” (which is not optimal). In the ideal case, DeafBlind travelers use their mobility equipment in the same way that they use visual interpreters who do basic, “useful” interpreting (see previous chapter). They distinguish objects only insofar as the distinctions are relevant to their aims of traveling from one place to another. The difference is that in the former context, they are reliant on the sensory orientations of the interpreter, whereas in the latter context, they must rely on their own sensory orientations. Since they are not accustomed to tactility, Marcus says they must start by developing tactile awareness around materials--brick, concrete, gravel--the differences between them, and their patterns and sequencing. All of this has to be incorporated into the rhythm and the line of travel without causing any delay or disturbance.
In cities there are many doorways. Sometimes the material on the ground in the entryway has a different texture than the main sidewalk. This can sometimes be felt by the cane. Sometimes, entryways are set back from the rest of the wall, and form a negative space that is detectable with the cane, or with the "mini guide," which is a small, handheld device that bounces sonar off of surfaces, returning different intensities of vibration depending on how close the object or surface is. Marcus used these facts as a point of departure for later, more advanced lessons with Allen. For instance, the goal of one session I attended was for Allen to learn the route from his home to a bus he would be using regularly to get to school. The trickiest part of this route was the end. Once Allen had found the block where the bus was located, he had to find the actual bus stop. Standing at the corner, he couldn't be sure how far down it was. So Marcus taught him to count doorways. He did this by tracing the "shoreline" (any detectable, orienting line, in this case, the line that is formed where the walls of the businesses on that block come in contact with the sidewalk) until he found a gap. The first gap would be counted as "one."
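The structure of this technique can be restated schematically. The following sketch is purely illustrative and is not drawn from the training sessions; the find_doorway helper and the toy encoding of cane contacts are my own assumptions. It shows how counting tactile silences along a shoreline can individuate a destination:

    # Illustrative sketch (not from the dissertation) of the "counting
    # doorways" technique: scan a sequence of tactile readings gathered
    # along a shoreline and return the position of the nth gap.

    def find_doorway(readings, n):
        """Return the index at which the nth gap (doorway) begins.

        A 'gap' is a run of readings with no resistance, preceded and
        followed by contact with the wall--a tactile silence in the rhythm.
        """
        count = 0
        in_gap = False
        for i, reading in enumerate(readings):
            if reading == "gap" and not in_gap:
                in_gap = True
                count += 1
                if count == n:
                    return i
            elif reading != "gap":
                in_gap = False
        return None  # fewer than n doorways detected

    # One block of shoreline, as a sequence of cane contacts:
    shoreline = ["brick", "brick", "gap", "brick", "gap", "gap", "brick"]
    print(find_doorway(shoreline, 2))  # -> 4: the second doorway

Even in this schematic form, the point carries: the doorway figures not as a represented object, but as a counted silence in a sequence of material contacts.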
There is no abstract structure that orients. Material fragments are concretely incorporated into a trajectory and a rhythm. A doorway is a tactile silence in the rhythm--no resistance, texture, or density. This silence is preceded by a hard tap against the brick-sided building and it is followed by the same. This sequence of material cues is incorporated into the pathway between the street corner and the bus stop along with other material clues, all of which guide the forward-moving traveler. It is not entire objects that get picked up and organized by the pathway, but material fragments, qualities, and “clues.”
Working with a visual interpreter, the vivid present is reduced to signposts that guide a pre-set plan. When working with a cane and other mobility equipment, the vivid present is reduced to bits and pieces of material. In both cases, excess, to some degree, drops out. However, there is one very significant difference. The minimal bits that are incorporated into orienting structures in O&M trainings are perceived tactually. When working with visual interpreters, the point is to have (indirect) access to visual stimuli and respond as sighted people would. This loop breaks down, though, when DeafBlind people can no longer reconstruct the pathways the signposts are pointing to. O&M helps re-build those pathways, this time, along tactile lines.
Reconstructing the pathways in the deictic field is not only, or even primarily, about developing conceptual representations of the immediate environment. It is about cultivating modes of receptivity and responsiveness to the material qualities of actual things. Material qualities must be linked to the schematic map-like structures of the deictic system. If they aren't, the map is useless. This focus on material things distinguishes the deictic field (Bühler 2001 [1934], Hanks 2005b) from constructs such as Real Space (Liddell 2003:82) and Gestural Space (Rathmann and Mathur 2012:144), which link the linguistic system to non-linguistic phenomena by way of cognitive representations, thereby excluding actual material things, which resist our actions in particular ways(4).
DeafBlind individuals like Allen work hard to incorporate material elements into rhythms and trajectories, and over time, these patterns extend out around them like a grid, or subsume them like a force field. Orientation and mobility training is one place where they do this work, but as a result of the pro-tactile movement, individuals started looking for their own ways of cultivating tactile modes of orientation and access individually and in groups. However, orienting to the tactile dimensions of objects and events was not enough to transform the deictic field. The next step was to coordinate orientation schemes by establishing participant frameworks for direct, reciprocal, tactile interaction.
6.2 Participation Frames and Frameworks
Goffman’s work on participation frameworks begins with the insight that the roles people occupy in interaction cannot be understood by starting with one speaker and one hearer (1981:127). A common assumption that follows from this, says Goffman, is that interactions begin with one person who is expressing feelings and thoughts, and another person who is listening, until the speaker and hearer roles are exchanged, and the one previously listening begins to talk. This suggests that the speaker and the hearer are the only two people involved, and are the only two people who have access to the interaction. From there, necessary changes are made such as adding participants and nonparticipants, but the terms of analysis cannot deviate from the initial “statement-reply” format (ibid.:129).
Goffman argues that adding and subtracting from this basic format will never suffice. Instead, the primary categories themselves must be analyzed into smaller, coherent elements (1981:129). To this end, he turns away from the dyadic encounter (i.e. speaker-hearer) as a starting point, and toward the whole of a communicative event. The communicative event opens, he says, when participants turn "from their several disjointed orientations, moving together and bodily addressing one another" (ibid.). The event is closed when people break from shared orientation, "departing in some physical way from the prior immediacy of copresence" (ibid.). We can often recognize these events by "ritual brackets" such as greetings and goodbyes that mark the end of ratified participation (ibid.). When viewed this way, the encounter takes on an organization of its own.
Therefore, information is not simply added to the statement-reply format. Rather, our entire perspective on what counts as a relevant dimension of the encounter changes. We begin to ask questions such as--how do conversations get started? How do topics get established as such? How is a "common information state" built up between participants? How are new participants brought up to speed in the conversation? What constitutes a "preclosing"? (ibid.:131). Many roles and functions that would otherwise have seemed peripheral become discoverable in the context of a whole interaction. For example, in addition to the speaker and hearer, there might be people listening who are not ratified participants.
Goffman introduces two such cases: eavesdropping and overhearing (ibid.:131-2). Based on these and other examples, he argues that the precondition of ratified participation for the analysis of talk excludes all sorts of possibilities, which are in fact possibilities that participants are aware of and orient to. This is evidenced by easily observable behavior aimed at “managing accessibility.” Once the dyad is replaced by the interaction as a whole, many communicative activities other than stating and replying emerge. For example, the following (ibid.:134):
Byplay: subordinated communication of a subset of ratified participants
Crossplay: communication between ratified participants and bystanders across the boundaries of the dominant encounter
Sideplay: respectfully hushed words exchanged entirely among bystanders
Collusive Byplay: collusive subordinate communication
Collusive Crossplay: collusive subordinate communication within the boundaries of an encounter
Collusive Sideplay: collusive subordinate communication outside of the boundaries of an encounter
Each of these headings is a label for a type of communicative activity and each one hints at a certain configuration of participants and certain corporeal relations between them. However, multiple possibilities can be imagined in each case. For example, sideplay suggests that there are at least four participants--two who are communicating in some sustained way and two who break off from the dominant interaction to engage in some kind of subordinate communication. However, it could be that there are only three participants present, two of whom are engaging in sideplay, unbeknownst to the third. Or there may be many people involved in the dominant interaction and more than two break off to engage in subordinated communication.
It is also easy to imagine that the participants engaged in byplay are physically closer to one another than the participant(s) who are sustaining the dominant encounter. 'Hushed exchange' makes me think of whispering and whispering makes me think of two people in physical proximity, one with a hand cupped to their mouth, leaning forward toward their co-conspirator. Alternately, one could imagine sideplayers who are on opposite sides of the room, communicating via a signed language using a reduced signing space that functions like "hushed speaking." Or maybe the dominant interaction is occurring in another place altogether and the sideplayers have joined via video technology. In order to have a side conversation, they move out of the video frame and press "mute" but remain physically distant from one another.
If Goffman's categories specified every one of these possibilities they would be of no use. They work because they are analytic constructs that describe regularities in interaction at some (unspecified) level of generality. At this point in Goffman's argument, we have gone from an a priori set of participant roles (speaker-hearer) and utterances with a priori functions (to state and to reply), to "the whole interaction," where neither participant roles, nor utterance functions are determined prior to activity. From there, the analytic vocabulary must be built up via observation of many interactions.(5) Across these interactions, patterns begin to emerge.
This procedure implies an analytic distillation that leads to the more general categories and types listed above, which omit certain details and retain others (e.g. manner: “respectfully” and volume: “hushed,” but not physical distance between participants or mutual spatial orientation). So the totality within which categories emerge is larger than it looks, extending across many encounters, and yet, there is no conceptual framework that accounts for this larger unit of analysis, nor is there any way of accounting for the movement from particular to general. How is it, for example, that manner and volume make their way into the categories, but not corporeal relations between participants such as physical distance or mutual spatial orientation? According to what criteria and from what perspective were these selections and omissions made?
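One way to make this selectivity concrete is to model the six categories as entries in a small attribute space. The sketch below is my own schematization, not Goffman's; the attribute names are assumptions chosen to mirror the definitions quoted above. What is notable is what the model must leave out: physical distance, mutual orientation, and other corporeal relations have no place in it.

    # An illustrative restatement (mine, not Goffman's) of the six
    # categories as selections over a small attribute space.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PlayType:
        name: str
        participants: str   # who is involved, per the definitions above
        collusive: bool     # concealed from other participants?

    GOFFMAN_TYPES = [
        PlayType("byplay", "subset of ratified participants", False),
        PlayType("crossplay", "ratified participants and bystanders", False),
        PlayType("sideplay", "bystanders only", False),
        PlayType("collusive byplay", "subset of ratified participants", True),
        PlayType("collusive crossplay", "ratified participants and bystanders", True),
        PlayType("collusive sideplay", "bystanders only", True),
    ]

    # Manner ("respectfully") and volume ("hushed") could be added as
    # further fields; corporeal relations--distance, mutual orientation--
    # are omitted, which is exactly the selection questioned above.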
The participant frameworks and corporeal relations that were used in the pro-tactile workshops were new. Upon being established, they did not accrue seamlessly to the structures of orientation that had previously been maintained. Instead, new participant frameworks incorporated new corporeal relations and a broader reconfiguration resulted, which had consequences for the grammar of TASL. While Goffman provides a good starting point for understanding participant frameworks as a relevant unit of analysis, he is not helpful in trying to understand how new frameworks and bodily configurations can affect the emergence of new linguistic structures.
In order to address this question, we must follow Hanks in asking not only how the analyst moves from actual communicative events to the structures organizing them, but also how native actors schematize and maintain participant frameworks in the course of communicating to generate participant frames, and therefore, maximally expectable contexts within which signs are produced and received (1990:148).
First, Hanks argues that the language acts as a repository of conventional categories, and those categories are in a dynamic relation to the fields where they are instantiated (1990:148). For example, person categories in the deictic system of a language are linked to participant roles in the deictic field via reference and indexicality, so the use of pronouns "tends to sustain an inventory of participant frames by focalizing them, engaging them as ground for further reference, or both" (ibid.). Second, if asked, participants can draw on their understandings of participant frames and reason from them as a resource for working through potential interactional scenarios. So talk about interaction is another way that participant frames are generated and sustained. Third, genres can maintain participant frames by linking them to something larger than the individual interaction. Genres work by incorporating "typical participant relations as schematized aspects, thereby making them expectable, repeatable, [and] automatically inferable" (ibid.).
While each of these processes contributes to the creation and/or maintenance of participant frames, the overarching process that Hanks points to is habituation, which, he argues, “is more general than either language structure or discourse genres (but it is related to both)” (ibid.:148). He argues that habituation simplifies the practical task of managing participant frameworks and occupying roles with them. In part, this explains why the apparent analytic complexity of participant frameworks poses no practical problem for social actors in the course of an interaction (ibid.:149). In addition, habituation introduces a hierarchy into an array of participant frames. This results in a kind of “taxonomy” which contains a set of “basic level” categories.
Following Coleman and Kay (1981), Lakoff (1987:46-7), Lounsbury (1964:205) and other cognitive theorists, Hanks defines a taxonomy as “a taxonomic structure plus a set of terms, where the former consists of a hierarchy of inclusion relations among sets and the latter of a set of labels standing for taxa” (Hanks 1990:151). There is a “unique beginner” at the top of the taxonomic structure with subordinated, included levels beneath it. Two sets that are subordinated to a common taxon “contrast” with one another. Moving from top to bottom, specificity increases. Moving from the lowest to the highest level, abstraction increases. The “basic level” in such a structure is located neither at the top, nor at the bottom. Rather, it is located at an intermediate level, where the tension between abstraction and specificity is optimal for mirroring the structures of attributes in the perceived world schematically (ibid.:151). Perception is shaped by routine motor interaction with objects of perception. Therefore, the basic level is grounded in “habitual motoricity” (ibid.:152). For participant frames, the highest position contains the most abstract, most inclusive category of “participant frame” and the sets subordinated to it might include: “(ratified participants vs. non-ratified participants), (producers vs. receivers), (addressee vs. other), (animator vs. author vs. principal), (message bearer vs. ultimate target) and (perhaps) (bystander [copresent unratified] vs. overhearer [noncopresent unratified])” (ibid.).
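This definition is formal enough to render schematically. The following sketch, in Python, is a minimal illustration of the structure (the class and labels are mine, adapted from the sets listed above, and nothing in it is drawn from the workshop data):

    # A minimal sketch of Hanks's notion of a taxonomy (1990:151): a taxonomic
    # structure (a hierarchy of inclusion relations among sets) plus a set of
    # labels standing for taxa. Which level counts as "basic" is an empirical
    # question, taken up in what follows, so the basic flag is left unset here.

    class Taxon:
        def __init__(self, label, children=None, basic=False):
            self.label = label              # the term standing for this taxon
            self.children = children or []  # subordinated, included sets
            self.basic = basic              # to be determined empirically

        def contrasts_with(self, other, parent):
            # Two taxa "contrast" iff both are subordinated to a common taxon.
            return self in parent.children and other in parent.children

    # The "unique beginner" sits at the top; moving down, specificity
    # increases, and moving up, abstraction increases.
    participant_frame = Taxon("participant frame", [
        Taxon("ratified participants", [
            Taxon("producers"),
            Taxon("receivers"),
        ]),
        Taxon("unratified participants", [
            Taxon("bystander [copresent unratified]"),
            Taxon("overhearer [noncopresent unratified]"),
        ]),
    ])

    ratified = participant_frame.children[0]
    producers, receivers = ratified.children
    print(producers.contrasts_with(receivers, ratified))  # True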
Now the task is to determine the basic level within the taxonomic structure. The basic level should correspond closely to the way that participants perceive participant frameworks, and should therefore be relatively simple, since participants do not generally struggle as they inhabit and manage those frameworks. Some clues about how participants perceive participant frameworks can be found in the conventional and commonly used labels participants have for participant frames. Those that are most consistently and frequently labeled are likely to be included in the basic set (Hanks 1990:152). Another kind of evidence is the default usage of a certain set of participant frames, which are altered according to circumstances that participants take to be exceptional in one way or another. In other words, the participant frames that are treated by participants as usual or expectable are likely to be included in the basic set (ibid.).
6.2.1 Basic Participant Frames in a Tactile Field
As new participation frameworks were being established in the pro-tactile workshops, the frames that had been shaped by routine motoric patterns in a visual world no longer exhibited the characteristics of basic level categories. That is to say that they no longer corresponded to the way participants perceived participant frameworks. Not surprisingly, labels for visual participant frames were quickly abandoned. The basic level in the taxonomic structure, and everything above it, had to be thrown out and replaced. This process began with establishing new participant frameworks, and over time, some developed labels, while others, which were used less frequently, did not. By the end of the workshops, participants referred to a particular kind of two-person configuration consistently, using a specific sign.6 Furthermore, this label was used with great frequency. The same held for the label associated with a particular kind of three-person configuration.
In addition, participants began to approach interactions as though two or three participants were included and they adjusted easily and fluidly between those two configurations. However, when a fourth person joined the interaction, an explicit intervention was required, where participants would remind one another of the rule governing the extension of three-person participant frames to a four-person configuration.7 This is evidence that two- and three-person configurations were treated as default or basic configurations, while other frames were treated as extensions or alterations of the default.
If (speaker-addressee) was a basic participant frame in a visual field of engagement, the corresponding slot in the taxonomy for a tactile field contained two categories: (speaker-addressee) and (speaker-addressees). While a distinction between one and two addressees does not have significant consequences for sign production in visual participant frameworks, it is highly salient in tactile frameworks, as we will see in Chapter 8.
Interestingly, it was not the configuration of participant roles that DeafBlind people thematized in their metapragmatic categories, but the bodily configurations. Therefore, in order to recognize the crucial corporeal component of these basic participant frames, I refer to them not as (speaker-addressee) and (speaker-addressees), but as “two-person configurations” and “three-person configurations.”
DeafBlind people had to adjust to these new participant frames in many ways. One of the most important adjustments was in the motoric patterns that were fit to the routine tasks at hand. Motoric patterns cohered earliest and most completely around two- and three-person configurations. In the early weeks of the workshops, participants struggled to occupy and manage frameworks since their visually derived participant frames had become obsolete. As participants worked their way from the bottom up in the taxonomic hierarchy of categories, the immediate environment was, at first, overrun with specificity. This led to many disfluencies and frustrations in determining relations between speaker, animator, and author (i.e. is the person whose hand I am in contact with the one who is the author of this utterance?), how to address one versus two interlocutors, how to occupy the position of the “bystander,” how to join an ongoing interaction without disrupting it, and so on.
The problem stemmed from the fact that the basic level was missing, so “category members” had no parent category. The motoric effects of this were visible in a wide variety of arrhythmias--widespread choppiness in bodily movements, extreme hesitance, awkward pauses, failures to maintain rhythmic sequentiality in conversation, collisions, accidents, and flat-out confusion. As the problems were worked out, corporeal relations began to fall into place and regular patterns emerged that allowed DeafBlind people to navigate participant frameworks and the transitions between them fluidly and with apparent ease. By the end of the workshops, basic participant frames were in place. All of this is highly consequential for the grammatical divergence of TASL and VASL, including sublexical structure (Chapter 8), the emergence of a new system for generating polycomponential signs (Chapter 9), and the reconfiguration of the deictic system (Chapter 7).
Two and Three Person Configurations
In both of the basic configurations, tactile contact between participants increased. For example, in Figure 6.1, Adrijana (left) is listening to Collin (right) using her left hand. Adrijana uses her right hand to provide tactile backchanneling cues. In addition, Adrijana and Collin’s thighs are in contact from the knee to the hip. In Figure 6.2, Chantelle (center) is signing to Adrijana (right) and Nina (left). The legs of all three participants are intertwined up to the mid-thigh. In addition, the hands of both addressees are resting on one another and on the knee of the signer. In this kind of configuration, all participants have access to the feedback that is being exchanged, including things such as backchanneling signals, turn-taking cues, signs of boredom, interest, annoyance, and fascination.
Figure 6.1: Two-person Configuration

Figure 6.2: Three-Person Configuration

If Chantelle produces an utterance with shaking, clammy hands, it will be construed differently than if she produces the same utterance with warm, dry hands and a clear, decisive rhythm. In configurations like these, utterances were re-united with the embodied particularities of their production. DeafBlind people began to respond to material clues in particular ways, and those ways of responding could be coordinated, given the kind of access that basic participant frameworks allowed.
Given basic participant frameworks, plus the embodied particularities that came with them, DeafBlind people had all they needed to elaborate, generating alternate frameworks as well. They could participate in a conversation, but they could also start new conversations, end conversations, overhear a conversation in which they were not previously involved, and observe the activity of others, even when utterances were not being exchanged. For example, in Figure 6.3, two people are seated, playing a game of tactile pictionary, while the two people standing behind them are observing their activity. Establishing basic participant frames made derivative frameworks like this intuitive(8).
Figure 6.3: Tactile Observation
As DeafBlind people established new orientation schemes, the material dimensions of objects were incorporated into motoric and perceptual patterns in new ways. The same is true for patterns in interaction. For example, playing tactile pictionary with direct access to your competitors, you pick up on all kinds of things--you know that playdough is being rolled out, but beyond that, you know how it is being rolled out--at what pace, with what intensity, and to what effect. From there, you can speculate about the temperament of the roller, or you can notice traces of their culinary habits, mixed with the smell of their dog and their body, and you can associate this unique olfactory combination with them, like a fingerprint or a signature that can be recognized anywhere. You know that there is another player there as well, but beyond that, you have access to the tension in the tendons and muscles of their hands, arms, and neck. From there, you can speculate about their level of interest in the game, or you can begin to appreciate their tactile agility as their fingers dart around the curves and corners of the sculpture, and then leap up off of the object to announce a best guess to the group.
After a while, you begin to like people, or not. You begin to feel drawn into things. The meanings of utterances begin to be overdetermined and expectable, and this leads you to feel that you are in something and that you are not alone. People with stable sensory capacities take such things for granted, but for the participants of the pro-tactile workshops, recovering participant frameworks that allowed for the observation of others felt novel and thrilling. When everything was mediated by interpreters, utterances were dissociated from the authors that produced them, from the activity that preceded them, and from the kinds of affection, repulsion, and curiosity that grow only through watching, at close range, how people habitually interact with objects and with other people. On one hand, these embodied particularities and the concrete patterns they were subsumed by accrued to the indexical ground of reference. On the other hand, the very same embodied particularities began to be evaluated against new frames of social value. The former accrues to the structure of the deictic field, while the latter accrues to an emergent tactile habitus.
6.3 Conclusion
The reconfiguration of the deictic field did not transpire (primarily) by means of cognitive representation. An olfactory signature is not a cognitive representation, nor is a rhythmic field that subsumes the textures of gravel, marble, and brick as it moves over them. These are concrete patterns that subsume material elements as they go, not abstract concepts that represent them once and for all. Concrete patterns form pathways, forcefields, configurations, and trajectories, about which, and through which, shared knowledge can be produced; all of this contributes to the structure of the deictic field. These structures presuppose certain cognitive, perceptual, and motoric capacities, such as proprioception and olfaction. However, the transformation that gave rise to them can only be grasped by analyzing specific practices and the material clues that participants use to organize them.
In the next chapter, I continue to analyze pro-tactile communication practices in order to understand how deictic signs were transposed onto the new deictic field, calibrated to it, and created within it. I argue that this process constitutes a divergence in the deictic systems of TASL and VASL, and in the remaining chapters of the dissertation, I show how changes in the deictic system of TASL echo in the grammar, affecting multiple subsystems, ultimately leading to the emergence of a new, tactile language.
Chapter 7
The Deictic System of TASL
In the previous chapter, I argued that the deictic field of Tactile American Sign Language was reconfigured as a result of pro-tactile communication practices. This chapter examines the effects of that transformation on the deictic system of TASL. Unlike the deictic system, which is part of the grammar, the deictic field is organized by modes of access and the structures of participation that are built up around them. In order to use a deictic sign, the language-user must coordinate grammatical elements and relations with elements and relations organized by the deictic field. Coordination can be loose or it can be tighter and more restricted. The tightening of relations between linguistic and deictic elements, as a language develops, is what I call “deictic integration.” In this chapter, I identify deictic integration as a driving force in the grammatical divergence of Tactile American Sign Language (TASL) and Visual American Sign Language (VASL).
Integration is a type of “embedding.” Embedding describes a process whereby linguistic elements undergo “reshaping,” “conversion,” and “transformation” as values are retrieved from deictic and social fields (Hanks 2005a:194). Patterns of retrieval align the linguistic system with the fields it articulates to so that, as Bühler says, language is not “taken by surprise” when it encounters the world (2001 [1934]:197). Rather, the linguistic system acts like a network of receptors, which have been shaped by these patterns and are therefore set to receive certain field-values and not others.
Four mechanisms of embedding have been proposed: practical equivalences, counterparts, rules of thumb (Hanks 2005b) and integration (Edwards 2012). In the first three types of embedding, transformations affect the meaning of the sign, while the form remains constant. Integration, in contrast, accounts for cases where both form and meaning are transformed as they are embedded (See Section 1.2.3 in Chapter 1 for more on embedding). In this chapter, I argue that as new patterns of retrieval in a tactile field began to cohere, the deictic system was transformed. This is where the grammatical divergence of TASL and VASL begins.
In order to understand the scope of the phenomenon, as well as its projected implications, I begin by introducing three categories of signs that rely on a coordination of linguistic and deictic elements. They are: “pointing signs” (Section 7.1.1), “polycomponential signs” (Section 7.1.2), and “directional verbs” (Section 7.1.3). Once the deictic field was reconfigured, these categories of linguistic signs snapped to a new set of deictic coordinates, which triggered additional, language-internal effects. I identify three mechanisms driving this process: signal transposition, sign calibration, and sign creation. Signal transposition involves the transposition of handshapes onto the body of the addressee, yielding a tactually accessible ground. This process has phonological implications (see Chapter 9), but is driven by the coordination of the linguistic system and the deictic field. Sign calibration is an interactional process through which participants clarify and adjust signs which have lost their capacity to refer to objects in the immediate environment. DeafBlind participants calibrated signs intuitively in the flow of interaction when confusion, irritation, unresponsiveness, or requests for clarification arose. As a result of these procedures, signs grew new receptors for material clues, this time set to receive values via tactile coordinates. As this process was honed in the pro-tactile workshops, new rules for the formation of signs began to emerge and novel forms were created that would not be predicted given the grammar of VASL. I call this process sign creation.
In this and the following two chapters, I argue that these processes affect the internal organization of the deictic system of TASL, and they echo further into the grammar, affecting the phonology, morphology, syntax, and semantics of TASL. At TASL’s current stage of development, effects have only begun to manifest. However, given stable conditions in the social and deictic fields, a more comprehensive reconfiguration of the grammar appears inevitable.
7.1 Three Types of Deictic Signs in Signed Languages
Deictic signs do two things: name and point. Therefore, when a deictic sign is applied in the speech situation, it receives values from two distinct fields. Its naming or “characterizing” component receives values from the “symbolic field,” while the pointing, or “deictic” component receives values from the “deictic field.” All deictic signs are composite in this respect, composed of both “symbols” and “signals” (Bühler 2001 [1934]:99). In order to speak deictically, values from each field must be coordinated as the utterance unfolds. Together, these processes account for the definiteness and directivity of reference.
In signed languages, coordination of deictic and characterizing elements is often accomplished by directing characterizing elements, such as handshapes and their associated meanings, toward locations in the deictic field. There are three general categories of deictic signs in VASL: pointing signs, polycomponential signs, and directional verbs. In what follows, I show how each category of sign is affected by deictic integration in the Seattle DeafBlind community.
7.1.1 Pointing Signs
A pointing sign canonically involves directing a handshape like the one in Figure 7.1 toward an object of reference that is accessible to both speaker and addressee.1 Mutual accessibility can be established not only via perception, but also via memory, anticipation, imagination, or any other mutually accessible relation (Hanks 2005a). From the perspective of the language-user, directivity and definiteness of reference are easy to achieve because, as Bühler says, the deictic sign “can do nothing other than take advantage--naturally to a greater or lesser extent--of the possibilities the deictic field offers them” (2001 [1934]:145). In other words, the pointing sign does not abandon the addressee in a vast and unstructured space of potential. Rather, like a signpost positioned at a fork in a pathway, the pointing sign clarifies potential ambiguities in a field of already-limited possibilities (ibid.). The deictic system is part of the grammar, while the deictic field is part of “context.” In order to understand the effects of changes in the deictic field on the deictic system, the two must remain analytically distinct.
Figure 7.1: Pointing Handshape
From the perspective of the grammar of VASL, the pointing handshape in Figure 7.1 is a semantically minimal linguistic element containing a signal to direct one’s attention toward a definite object. Definiteness derives from the linguistic system. For example, in English, here is not there, I am not you, and this is not that. Each of these oppositions generates definite categories, which analyze objects and phenomena in particular ways.
In spoken languages, the deictic system is composed of discrete, oppositional categories, which encode highly schematic semantic distinctions. There is growing evidence that pointing signs in signed languages do too. It has been shown that pointing signs can act as determiners, demonstrative pronouns, anaphoric deictic elements, personal pronouns, and that they can be lexicalized as temporal deictics such as yesterday and tomorrow, and these different functions correspond to stable differences in form (Pfau 2011:148-151). For example, locative pointing signs and nominal pointing signs can be distinguished according to differences in the orientation of the handshape, the extension of the arm, and eye-gaze (ibid.). These differences contribute to the definiteness of reference, and they inhere in the linguistic system.
Directivity, on the other hand, derives not from the language, but from the deictic field. In the deictic field, we orient to pathways, grids, channels, and trajectories, which have settled out of patterns in activity. These structures are organized around particular modes of access and orientation, participant frameworks, and bodily configurations. We become habituated to those frameworks, and a hierarchy is established, which contains a “basic” level. These basic, maximally expectable participant frameworks are called “participant frames” (Hanks 1990:148). As particular frameworks become more expectable, certain bodily configurations that are associated with them also become more expectable.2 For example, users of VASL can communicate with one another while riding side by side on bicycles, sitting side by side in a car, or lying side by side in bed, but each of these bodily configurations requires adjustments and elaborations of a more expectable configuration, namely, standing or sitting face to face, about 3 to 5 feet from each other. This is not a “neutral context” but rather a basic bodily configuration, in the sense that it is assumed by participants on a habitual, motoric level as they move through interactions (Hanks 1990:151-2). Divergences from the assumed configuration require adjustment, elaboration, or compensation.
Participant frameworks contribute to the structure of the deictic field and when configurations become routine for participants, the grammar is not caught by surprise. Rather, it develops contextual receptors for values retrievable from those frameworks. For example, grammatical person categories in pronominal systems are set to receive values from participant roles in the deictic field according to particular relations that have emerged out of that field (Hanks 1990:148). Participant roles are organized by participant frameworks that incorporate particular bodily configurations, and in signed languages, those configurations become important for formal distinctions between pointing signs.
For example, in VASL, the pronominal system makes a two-way distinction between first and non-first person (Meier 1990:377).3 The first-person pronoun is produced with a pointing sign directed toward the signer, and the non-first person pronoun is produced with a pointing sign directed away from the signer. These formal characteristics align with a basic bodily configuration occupied by signer and addressee. When these signs are instantiated in the deictic field, they can be subject to momentary formal modifications. However, insofar as basic participant frameworks are in play, this two-way formal distinction in the pronominal system remains stable. In other words, the pronominal system in VASL has contextual receptors built in for basic bodily configurations, as opposed to actual bodily configurations. This is the difference between a pointing gesture and a pronoun in VASL: the former can retrieve a wide range of values from the deictic field, while the latter is set to receive a very narrow range of values (e.g. the obligatory selection of first person or second person forms). From this perspective it seems likely that pronouns, in VASL, have been derived from pointing gestures via deictic integration, leading to tighter and more restricted pathways for indexical retrieval.
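The contrast drawn here between a gesture and a pronoun can be pictured as a difference in how narrowly a sign’s “receptor” constrains what it will accept from the deictic field. The following minimal sketch (Python; the value inventory and mappings are invented for illustration and make no claim about the actual grammar of VASL) renders the loose and the restricted cases side by side:

    # A minimal sketch of "deictic integration" as a narrowing of retrievable
    # values. A pointing gesture accepts any mutually accessible field value;
    # a pronoun's receptor accepts only the two-way first/non-first
    # alternation (Meier 1990). All names here are illustrative only.

    FIELD_VALUES = {"signer", "addressee", "third-party", "remembered-place",
                    "imagined-place", "anaphoric-locus"}

    def pointing_gesture(value):
        # Loose coordination: any value available in the deictic field.
        if value not in FIELD_VALUES:
            raise ValueError("not available in this deictic field")
        return value

    PRONOUN_RECEPTOR = {"signer": "FIRST", "addressee": "NON-FIRST",
                        "third-party": "NON-FIRST"}

    def pronoun(value):
        # Tight integration: retrieval is obligatory and restricted to a
        # narrow set of alternating grammatical values.
        if value not in PRONOUN_RECEPTOR:
            raise ValueError("receptor is not set to receive this value")
        return PRONOUN_RECEPTOR[value]

    print(pointing_gesture("remembered-place"))  # retrievable by a gesture
    print(pronoun("addressee"))                  # NON-FIRST

On this rendering, deictic integration is simply the replacement of the first function by the second: not a loss of indexicality, but a restriction of the values a receptor is set to receive.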
There are many other types of pointing signs, which integrate linguistic and deictic elements in more or less restricted ways.4 At the far end of the spectrum, deictic elements can be caught up in and coordinated by the grammar in highly restricted ways, thereby taking on grammatical functions. Directional verbs, for example, integrate characterizing and anaphoric deictic elements to mark syntactic relations (see Mathur and Rathmann 2002 on directional verbs). The anaphoric deictic signs retrieve values from the anaphoric deictic field. However, once the values have been retrieved, they act like arguments of the verb, as opposed to referents. This type of deictic integration has been associated with the emergence of new languages (A. Senghas 1999, A. Senghas and Coppola 2001, Kegl et al. 2001) and language-like gestural communication systems (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). In other words, as gestural communication systems become more grammatical, characterizing elements tend to “point more” (Meier and Lillo-Martin 2012:154).
This process, which leads characterizing signs to point more, is what I am calling deictic integration. The fact that deictic integration plays a significant role in processes of language emergence suggests that languages do not emerge by abstracting away from their contexts of use (Sandler et al. 2005:2664-5).5 Rather, new languages emerge as linguistic and deictic elements and relations are coordinated in tighter and more restricted configurations.
So far, we have examined the effects of deictic integration on pointing signs. In the next section, we examine the effects of deictic integration on “polycomponential signs,” which combine characterizing and deictic elements to form complex constructions.
7.1.2 Polycomponential Signs
Polycomponential signs also integrate characterizing and deictic elements; however, they do so in more complex configurations than pointing signs (Slobin et al. 2003, Quinto-Pozos 2007, Morgan and Woll 2007, Schembri 2003, also see section 9.2 in Chapter 9). The semiotic status of polycomponential signs varies. At one end of the continuum, they are highly responsive to momentary dynamics in the deictic field, and at the other end, deictic elements are integrated in tighter and more restricted ways with the grammar so that only a limited set of values (which remain stable across contexts) can be retrieved.
In 2006, I conducted an interview with a Deaf Interpreter6 in Seattle, whom I will call Harli. At the time, Harli was working full time in the DeafBlind community and was known for his mastery of polycomponential signs in VASL, or “classifiers” in the local discourse. His analysis highlights the responsiveness of polycomponential signs to dynamics and relations that shape the deictic field.
The interview was part of a larger project, aimed at understanding how sighted interpreters and DeafBlind people worked together to gain access to the immediate environment. Like many other people I interviewed, Harli insisted on the importance of polycomponential signs in this context (see Edwards 2012). So I asked him why they were so important. He explained that, for example, “ASL has the sign water. But that’s just a word. Classifiers are different,” he said. “They’re broad in scope, they can do anything, include anything . . . They’re wide open.” So I asked him for examples. He produced a sequence of polycomponential signs that might be used to talk about water:
There can be rolling waves, undisturbed stillness, the first ripples of a rowboat, the first tap of the oars, a watery surface breaking from beneath, concentric circles extending, reverberating. There’s sweat on the brow that forms relentlessly, no matter how many times you wipe it off, the accumulation of moisture, wetness. You can take a gulp of water from a glass or you can take a quick sip. You wipe moisture off of your face when you’re sweating. You can’t capture all of that with the word water, but you can with classifiers.
Figure 7.2: A Perfectly Still Body of Water
I have reproduced one small portion of this explanation in order to explore its composition. In Figure 7.2, Harli characterizes the surface of the water as flat. The b-handshape is a characterizing element that corresponds to a quality of flatness and/or rectangularity. The signer’s right hand extends out in front of his body, thereby attributing the quality of flatness to a broad surface. In this context, the handshape takes on a deictic function. It is transformed into a “reception signal” (Bühler 2001 [1934]:122).7 It causes the addressee’s gaze to turn in the sphere of the imagination, ready to receive particularities associated with the characterizing aspect of the signal. A lifetime of encounters with flat things--synthesized and distilled--flashes before the mind and a connection is activated between that and what is present to the senses. Unless it doesn’t.
Notice that the sign is produced directly under the eyes of the signer. The location of the hands relative to the eyes of the signer anchors the representation in a perspective.8 The possibility of embedding the b-handshape in the deictic field turns on the mutual accessibility of this perspective to both speaker and addressee, or a Schutzian “reciprocity of perspectives.”9 The representation in Figure 7.2 articulates to the deictic field of VASL and resemblance relies on an integration of the two. Given nonreciprocal perspectives generated by a difference in the structure of the deictic field occupied by speaker and addressee, the resemblance no longer holds and the sign no longer signifies.
Perspective is built up around orientation schemes and shared modes of access and orientation, which are, in turn, built up around sensory systems with certain capacities and limitations. If the reader is sighted, she will likely perceive a resemblance, or iconic relation, between the b-handshape and an undisturbed watery surface. However, there is no field that structures that connection for the grammar; indeed, “there is no pictorial field in language” at all (Bühler 2001 [1934]:220). Rather, linguistic elements are filtered through a series of requisite “barriers” or fields--syntax, morphology, phonology, and “it is only beyond this point that they display something like a secondary touch of a sound painting” (ibid.). Resemblance relies on the coordination of linguistic and deictic phenomena.
This same kind of coordination is enacted in the next segment of the polycomponential sign (Figure 7.2b). Here, the signer sucks his cheeks in and seals his lips, while holding his hands motionless on the same plane that was established in Figure 7.2a. The sucked-in cheeks combined with sealed lips are a recognizable and repeatable linguistic element, which contrasts with puffed-out cheeks and sealed lips. The former is associated with flat, thin, empty, or motionless things, while the latter is associated with thick, fat, full, or moving things. The placement of the hands near the eyes and the backward tilt of the signer’s head are not linguistic elements, but rather, contribute to the representation of a perspective. Perspective organizes the deictic field so that modes of access and orientation snap to a shared grid of overlapping coordinate structures.
Finally, the anaphoric deictic field often comes into play in polycomponential signs. Here, the location of the construction as a whole is kept consistent as it is built up sequentially: the signer links water to flatness, flatness to a surface, a surface to a lack of visible movement and depth. Without that first sign, water, there is no semantic clue that this is a watery surface, as opposed to some other--a concrete, nylon, or molecular surface, for example. Characterizing and deictic elements must be coordinated anaphorically as the polycomponential sign is constructed, and the anaphoric deictic field is constrained by modes of access and orientation shared across the group of language users. In a polycomponential sign like this, linguistic and deictic elements are loosely coordinated. They can easily be detached and rearranged, which is what gives language users the sense that they are “wide open” and “capable of anything.” Over time, though, certain combinations can become integrated with one another in more restricted ways, as is the case in some directional verbs.
7.1.3 Directional Verbs
The third type of deictic sign in VASL is directional verbs, or “verbs that point” (Meier and Lillo-Martin 2012). Directional verbs can be understood in contrast to “plain verbs” like love (Padden 1990:119). In the sentences “I love you” and “you love me,” love is produced in precisely the same way. give, on the other hand, is a directional verb. For the sentence “I give you the book,” the sign begins near the signer’s body and moves toward a location associated with the receiver. If there is more than one recipient, the sign will move from the body of the signer to a series of locations, marking the number of recipients involved. There are several different types of directional verbs, some of which are more like polycomponential signs in that they can retrieve a wider range of values from the deictic field. Some directional verbs, such as “agreeing verbs,” retrieve only a limited range of values from the deictic field. Agreeing verbs incorporate those values into the grammar in such restricted ways that their status as either “referents” or “arguments” becomes ambiguous.
7.1.4 The Problem
Every approach to directional verbs in signed languages encounters the same problem: how can symbolic and indexical elements be accounted for in a unified framework? For example, Klima and Bellugi, in their pathbreaking work The Signs of Language, appeal to an “indexic plane,” which extends out around the signer’s body as a kind of surface on which “target loci” are organized (1979:273-4). It is not clear, however, whether the indexic plane is part of the linguistic system or part of the extralinguistic context.
On the one hand, the indexic plane is part of “signing space.” Signing space is the space within which signs are produced (ibid.:51). It is organized internally by arbitrary distinctions and relations in the linguistic system (ibid.). On the other hand, loci within the indexic plane are determined by the actual positions of people, objects, and events in the immediate environment. For example, in the case of person reference, they claim that “[t]he actual positions of the signer and addressee determine the locations of their indexic loci in the indexic plane ... The same can be the case with objects and other individuals that happen to be in sight, though here other conventions also come into play” (ibid.:277). The indexic plane is then incorporated into polycomponential signs or “classifier constructions,” as well as certain classes of verbs in more or less obligatory ways. They explain:
In discourse that extends beyond the speaker, the addressee, and the here and the now, to objects, events, and persons not present, there are a variety of conventions for establishing indexical loci. The signer as narrator can use the indexic plane as a kind of stage on which indexical loci are created by indexic signs alone, or in conjunction with noun signs, or by positioning certain noun signs or classifier signs in particular locations on the indexic plane. Verb signs can move toward and between such loci and can be articulated at them, thereby expressing anaphoric reference. In addition, verbs can themselves establish indexic loci (and thus express differences in indexic reference). Such referential distinctions must be
incorporated into ASL verbs in specific sentential contexts. Thus *JOHN LOOK-AT-(ME), with a verb uninflected for referential indexing, is ungrammatical in ASL.
In other words, producing the VASL sign look-at in the direction of the addressee, and then tacking on the pronoun me is ungrammatical. The integration of the pointing sign is obligatory. Under this analysis, the indexic plane organizes linguistic elements in relation to the speech situation. However, the linguistic system also integrates deictic elements in restricted ways.
This tightening of the relation between the linguistic system and the deictic field, or deictic integration, results in what Klima and Bellugi call “indexical inflection” (1979:273-4). They list seven types of indexical inflection, including reciprocal, number, distributional aspect, temporal aspect, temporal focus, manner, and degree (1979:273-4). Like inflection in spoken languages, these processes involve the modification of a root. Unlike inflectional processes in spoken languages, the root is modified by moving it toward locations in space. The locations to which they are moved are not discrete, listable forms.
Therefore, despite their role in linguistic processes, they do not yield to linguistic analysis.
For example, the “uninflected” (or unmodulated) form of give is produced with an outward movement from the torso of the signer. In order to make the verb reciprocal, the movement is modified so it begins in a location away from the signer and moves toward the torso of the signer (Klima and Bellugi 1979:274). The same sign can be inflected for “distributional aspect” by sweeping it in an arc across the torso of the signer, stopping along the way at multiple loci (ibid.:276). The status of these locations, or “loci” as linguistic, non-linguistic, or some combination of the two has been a major source of debate in the field of sign language linguistics.
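Klima and Bellugi’s description suggests that these inflections operate on the verb’s movement path rather than on a discrete, listable affix. A minimal sketch (Python; the coordinates are invented, and the reduction of reciprocal marking to a reversal of the path is a simplification of mine, not a claim about VASL phonology):

    # A minimal sketch of "indexical inflection" (Klima and Bellugi
    # 1979:273-6): the root GIVE is modulated by reshaping its movement path
    # through loci, which are not themselves discrete, listable forms.

    SIGNER = (0, 0)  # torso of the signer, as an origin in signing space

    def give(loci, reciprocal=False):
        """Return the movement path of GIVE as a sequence of locations."""
        path = [SIGNER] + list(loci)       # uninflected: outward movement
        if reciprocal:
            path = path[::-1]              # begins away, moves toward torso
        return path

    # Uninflected GIVE toward one recipient's locus:
    print(give([(2, 1)]))                  # [(0, 0), (2, 1)]

    # "Distributional aspect": an arc sweeping across multiple loci:
    print(give([(2, 1), (1, 2), (-1, 2)]))

    # Reciprocal: movement modified to move toward the signer:
    print(give([(2, 1)], reciprocal=True))  # [(2, 1), (0, 0)]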
These problems are all rooted in a conflation of deictic and linguistic phenomena. In Klima and Bellugi’s work, it manifests as an ambiguity between “signing space” and the “indexic plane.” In signing space, syntactic relations are established between a verb and its arguments by moving the verb between discrete loci. On the indexic plane, deictic relations are established between a verb and its referents. So which is it? And how can a verb have referents? This is the problem. Rathmann and Mathur (2002) and Mathur and Rathmann (2012) identify three main approaches to this problem, which they apply to directional verbs. Each analysis presupposes an approach to deictic signs more generally, which can be productively compared to the notion of deictic integration that I put forth in this chapter.
The R-Locus Analysis
The first approach yields the “R-locus analysis,” which is short for “Referential Locus.” Mathur and Rathmann sum up this approach as follows:
In this analysis, each noun phrase is associated with an abstract referential index. The index is a variable in the linguistic system which receives its value from discourse and functions to keep the referent of the noun phrase distinct from referents of other noun phrases. The index is realized in the form of a locus, a point in signing space that is associated with the referent of the noun phrase. This locus is referred to as a ‘referential locus’ or R-locus for short (2012:140).
The location of the entity with which the verb “agrees” (the R-locus) is a formal manifestation of an abstract variable, which is associated with, but not identical to, a referent. It is not the actual location of the referent that is listed in the grammar, but the abstract, underlying category.
In the sentence “Jayne gave Bob (something),” the signer fingerspells j-a-y-n-e and then localizes jayne in space by pointing to “R-locus (1).” The signer then fingerspells b-o-b and localizes bob by pointing to R-locus (2). R-locus (1) is clearly distinct from R-locus (2) (See Figure 7.3a). In Figure 7.3b, the verb give moves from R-locus (1) to R-locus (2). The NPs jayne and bob are represented by loci, which are kept distinct from one another. As the discourse unfolds further, those loci can be referenced again, without explicitly identifying them with their associated NPs. Therefore, the R-locus is referential in the sense that it derives its value from the anaphoric deictic field, or what Mathur and Rathmann call “discourse.” However, insofar as those loci “represent” their associated NPs, they also establish syntactic relations between the verb and its arguments.
Figure 7.3: Referential Locus
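The machinery of the R-locus analysis is, in effect, that of discourse variables. The following minimal sketch (Python; the locations are invented, and the dictionary stands in for what Mathur and Rathmann call “discourse”) shows how loci keep referents distinct while also serving as the endpoints of the verb’s path:

    # A minimal sketch of the R-locus analysis (Mathur and Rathmann 2012:140):
    # each noun phrase is associated with an abstract referential index,
    # realized as a locus in signing space; the verb's path relates the loci.

    r_loci = {}  # discourse record: referential index -> (NP, locus)

    def localize(np, locus):
        index = len(r_loci) + 1        # an abstract referential index
        r_loci[index] = (np, locus)    # keeps this referent distinct
        return index

    def agreeing_verb(verb, source_index, goal_index):
        # The verb moves between the loci of its arguments; the loci function
        # at once as anaphoric proxies and as terms in a syntactic relation.
        src = r_loci[source_index][1]
        goal = r_loci[goal_index][1]
        return f"{verb}: {src} -> {goal}"

    jayne = localize("JAYNE", "left of signer")   # R-locus (1)
    bob = localize("BOB", "right of signer")      # R-locus (2)
    print(agreeing_verb("GIVE", jayne, bob))

Note that nothing in the sketch constrains where a locus may be placed; as the featural analysis discussed below emphasizes, this is precisely the listability problem.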
From a practice perspective, “R-loci” are anaphoric deictic elements, which have been caught up in and coordinated with the syntactic system of the language. In other words, they have undergone deictic integration. Since deictic integration is a bi-directional process, this also means that the grammar has grown more dependent on the anaphoric deictic field to express syntactic relations. This dependence is unavoidable from a linguistic perspective because there is no way of restricting possible coordinates for the loci, and therefore no way of listing them as discrete, repeatable elements.
This problem is solved in the R-locus analysis by positing an abstract linguistic variable, which is associated with formally non-specific loci. The signer can point anywhere, as far as the grammar is concerned, as long as the NPs can be identified and kept distinct, via their anaphoric proxies (Mathur and Rathmann 2012:140). In a practice approach, these pointing signs are constrained not by the grammar, but by modes of access and orientation, as well as the participant frameworks, participant roles, and bodily configurations that become conventional within those constraints. These constraints cohere in the deictic field, not in the language, and yet, in order to produce a coherent and comprehensive theory of VASL syntax, the deictic field of VASL must be taken into account. In this approach, abstraction is not necessary. Instead, a lateral process of integration accounts for the interdependence of the syntactic system and the anaphoric deictic field.
The second approach to directional verbs identified by Rathmann and Mathur (2002) and Mathur and Rathmann (2012) is the “featural analysis.” This approach, like a practice approach, posits rules for coordinating semiotically distinct elements in restricted ways. Unlike a practice approach, the analysis relies on “gestural space” which is conceived of as a mental space. In a practice approach, the relevant construct is the deictic field. The deictic field is an historically emergent configuration of participation structures, built up around shared modes of access and orientation. It is not defined negatively with respect to language, i.e. it does not contain everything that linguistic principles cannot account for. Rather, it is governed by its own, deictic principles of organization. Since deictic principles organize historically emergent fields of activity, and are constrained by physical capacities and modes of orientation, they are not reducible to universal cognitive principles. Therefore, while cognition is clearly involved, the deictic field is not reducible to a “mental space.” Nevertheless, in the following section, I argue that a synthesis of the featural analysis with a practice approach is a useful and promising endeavor.
The Featural Analysis
Rathmann and Mathur argue that any approach to spatial or agreeing verbs must address, more explicitly, the interface between gesture and language (2012:144).10 Gesture inheres in “gestural space11” which interfaces with grammar, but is not included in it. Gestural space and grammar are both mental constructs. The former is relatively unstructured and the latter is highly structured. With this as the starting place, the following problem is immediately encountered:
[T]he linguistic system cannot directly refer to areas within gestural space (Lillo-Martin/Klima 1990; Liddell 1995). Otherwise, one runs into the trouble of listing an infinite number of areas in gestural space in the lexicon, an issue which Liddell (2000) raises and which Rathmann and Mathur (2002) describe in greater detail and call the listability issue. For example, the claim that certain verbs ‘agree’ with areas in gestural space is problematic, because that would require the impossible task of listing each area in gestural space as a possible agreement morpheme in the lexicon (Liddell 2000) (cited in Mathur and Rathmann 2012).
Mathur and Rathmann (2012) argue instead that for a subset of directional verbs, which encode number and person (“agreeing verbs”), the NP is marked with a finite set of person and number features.12 The verb agrees not with all aspects of the conceptual representation of the referent, but only the finite set of features that are linguistically significant (i.e. person and number).
For a sign like give, the first person form is specified phonologically for a location near the torso of the signer. The non-first person forms are realized via a “zero morpheme,” which is then paired with a deictic gesture as it is realized. Via an interface between “spatio-temporal conceptual structure” and “the articulatory-phonetic system,”13 the form of the sign undergoes a phonological readjustment process called “alignment,” in which an abstract geometrical relation between elements is pre-given in the syntactic structure, a vocabulary item is inserted, and a phonological readjustment rule is applied to bring the abstract geometric coordinates in line with phonological and phonetic constraints in the language. This process generates the specific form of the verb, including directionality, but also orientation and other small variations in form that are attested in agreeing verbs (Mathur 2000:38-9).
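The division of labor in this analysis can be sketched as follows (Python; the clamping rule below is only a stand-in for the actual phonological readjustment rule, whose content is not modeled here, and all values are invented):

    # A minimal sketch of the featural analysis (Mathur and Rathmann 2012):
    # GIVE agrees only with a finite set of person features. The first-person
    # form is phonologically specified near the signer's torso; non-first
    # forms are a "zero morpheme" paired with a deictic gesture and brought
    # into line by a readjustment ("alignment") at realization.

    def align(target):
        # Stand-in for the readjustment rule: keep the gesture's coordinates
        # within articulable signing space before the form is realized.
        x, y = target
        return (max(-3.0, min(3.0, x)), max(0.0, min(3.0, y)))

    def realize_give(person, gesture_target=None):
        if person == "first":
            return {"end": "near torso"}   # lexically specified location
        # Zero morpheme: the grammar contributes no location of its own;
        # it is set to receive one from the paired deictic gesture.
        if gesture_target is None:
            raise ValueError("non-first GIVE requires a paired gesture")
        return {"end": align(gesture_target)}

    print(realize_give("first"))                  # {'end': 'near torso'}
    print(realize_give("non-first", (5.0, 1.0)))  # {'end': (3.0, 1.0)}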
The featural analysis is consistent with a practice approach in the sense that semiotically distinct phenomena are distinguished, establishing a firm boundary between grammatical and contextual phenomena. These elements are then coordinated, or “aligned,” as they are instantiated via a phonological readjustment rule. This is a rule-governed, grammatically determined version of “embedding.” Via embedding, linguistic elements also undergo reshaping, conversion, and transformation as values are retrieved from non-linguistic sources (Hanks 2005a:194). Over time, patterns of retrieval align the linguistic system with the fields it articulates to, so that the language is not “taken by surprise” (Bühler 2001 [1934]:197). Rather, the linguistic system grows receptors (cf. “zero-morphemes”), which have grown sensitive to these patterns and are therefore set to receive a more restricted set of field-values (e.g. highly schematic person and number values).
Agreeing verbs have undergone a process like this. This tightening of linguistic and deictic relations, into more restricted configurations, is what I am calling deictic integration. Another example of a verb that has been formed via deictic integration is look-at. look-at-you is produced with a directional movement toward the addressee, while look-at-me is produced with a directional movement toward the signer. At this point in its diachronic development, the verb look-at has a deictic receptor that requires the signer to retrieve one of a limited set of values in the deictic field. These values look more like grammatical person categories than those retrieved by polycomponential signs, since there is a restricted set of alternating values, one of which must be selected. However, this shift toward more language-like semiosis does not imply a “loss” of indexicality. Rather, it is a tightening and restriction of possible relations between the linguistic system and the deictic field.
In a practice framework, the emphasis is (not surprisingly) on the determinate effects of practice, rather than the determinate effects of grammar. Nevertheless, the processes that account for the alignment of language and context in the featural analysis and in a practice approach are not contradictory; they are complementary, and a synthesis of the two is promising.14 Such a synthesis would involve, first, replacing “gestural space” with the deictic field,15 the former a relatively unstructured mental construct governed by universally applicable cognitive principles, and the latter, an internally complex contextual construct, governed by deictic principles. Second, the “zero morpheme” would be replaced with a contextual receptor, primed to receive a restricted set of values from the deictic field. In other words, the NP would be marked by way of deictic integration.
The Indicating Analysis
In contrast to the featural analysis, Scott Liddell argues that the “locus” does not need to be treated as a linguistic element that is specified phonologically and stored in the lexicon as a distinct morpheme at all (Rathmann and Mathur 2002:375). Instead, he says, it should be treated as a conceptual representation of spatial relations in the world. In defense of this claim, Liddell points out that give-to-a-tall-person would be directed higher in the signing space, whereas give-to-a-child would be directed lower, relative to the body of the signer. These verbs, then, are best described as being directed to entities in “mental spaces” and not to linguistic loci, specified in the grammar. Therefore, Liddell calls this class of verbs “indicating” verbs rather than “inflecting” or “agreement” verbs. However, any sign can be modified as it is instantiated in the deictic field (Edwards 2012:52-60). The question is whether the verb is momentarily sensitive to a particular dimension of context, or if it requires retrieval of a particular value, which remains stable across contexts. In the former case, linguistic and deictic elements are merely coordinated. In the latter case, they are integrated.
Deictic integration makes something like “indexical inflection” possible, since deictic elements can become integrated with syntactic structures in highly restricted ways. This returns us to Klima and Bellugi’s initial analysis, but with a more principled way of accounting for the linguistic and non-linguistic dimensions of the process. From this perspective, the featural and indicating analyses are more consistent with one another than they would otherwise appear to be. However, the indicating analysis extends further into the language-external world, and in the process, reveals certain key distinctions between cognitive and practice approaches.
In a cognitive framework, pointing signs that function as pronouns and directional verbs are both directed at elements in what Liddell calls “real space” (1995, 2003:81-7). Real space is “a person’s current conceptualization of the immediate environment based on sensory input” (Liddell 2003:82). In real space, people treat objects as if they were real, so that a conceptual entity is “treated as a real physical entity, having all the physical properties of the physical entity, including being located at a particular place in the immediate environment”
(ibid.). Using a book as an example, Liddell emphasizes the distinction between real space and physical space:
The physical book is not part of real space since real space only contains conceptual entities. The real-space book is an internal representation of the book conceptualized as being external to me. Fortunately, the locations of physical entities and the corresponding conceptualized locations of real-space entities generally overlap. That is, I reach toward the book as conceptualized in real space. Years of experience give me confidence that I will encounter a physical object there (ibid.:83).
Under this analysis, directional verbs are constrained by cognitive capacities that enable us to make functionally adequate, mental replicas of our physical surround, and point at elements situated in those replicas. These capacities are universal, so real space is guaranteed to be reciprocal for speaker and addressee (ibid.:86). Therefore, the person speaking deictically is “in a position to be of assistance in terms of providing clues that will help identify the real space entities being discussed” (ibid.). In cases where cognitive and perceptual schemes align, this works out well. However, where cognitive and perceptual patterns diverge, as is the case for people whose sensory orientations shift, problems arise.
Among DeafBlind people, real space and physical space do not align. Under these conditions, each of the assumptions that undergird Liddell’s analysis of directional signs becomes a research question: How do objects and relations in the immediate environment get incorporated into conceptual representations? How are they linked with linguistic and deictic elements in the language? How can sensory orientations become stable across a group of language users, allowing for a reciprocity of perspectives? How is pointing guided by these shared orientation schemes and modes of access? The answers to these questions require attention to a broader range of phenomena, viewed through a broader range of analytics.
7.2 A Practice Approach to the Deictic Systems of Visual and Tactile ASL
In all three approaches given above, the analysis begins and ends in conceptual and/or linguistic representations, which maintain a non-problematic relation to the external world. For Liddell, there is no analytic advantage in separating cognitive representations from the things they represent, since “[i]n general, real space lines up well with physical things in the world” (Liddell 2003:84). Real space is, for all intents and purposes, a copy of physical space. Among DeafBlind people, links between cognitive and linguistic representations, on the one hand, and experience on the other, are disjointed. The project of realigning them is a practical one, constrained by socio-historical and interactional processes, which are not reducible to, or best understood as, cognition.
In a practice approach, these problematic relations must be approached at the outset by examining the historical development of orientation schemes, which are built up around a particular habitus in a particular place and time (Section 6.1 in Chapter 6). From there, structures of interaction, such as participant frameworks and the bodily configurations they incorporate, conventional turn-taking, attention-getting, and back-channeling mechanisms, must be brought into alignment with the socio-historically given habitus.16 That is to say, interaction is constrained by socio-historical dynamics. If touch is a highly restricted modality in the social field, for example, it will not be drawn on in the development of new interactional practices.
All of this shapes and constrains the deictic field of any particular language. Therefore, the deictic field is not reducible to a conceptual representation of the immediate environment, nor is it unstructured physical space. It is organized around and constrained by shared modes of access and orientation that emerge under particular social and historical circumstances. This does not contradict the fact that representations of physical space are constrained by the universal cognitive capacities of humans; it is a complementary fact, which can account for the alignment of “real space” and “physical space,” not as a given, but as an outcome of ethnographically discoverable processes.
Moving this way from the social field to the deictic field to the linguistic system, it becomes clear that there are mutual dependencies between linguistic, cognitive, and deictic principles in directional verbs and other deictic signs. The grammar does not simply retrieve values from the deictic field; it is shaped by it. And as grammatical and deictic elements are coordinated with one another in tighter and more restricted ways, semiosis becomes more language-like.
In the next section, I show how the deictic system of TASL was transformed as values were retrieved from a new, tactile deictic field. I identify three interactional mechanisms through which this transformation took place: signal transposition, sign calibration, and sign creation. Signal transposition involves a transposition of handshapes onto locations on the body of the addressee, yielding a tactually accessible ground. Sign calibration is a process through which participants intuitively adjust signs that have lost their referential capacity. As this process is honed, new rules for the formation of signs are generated and novel forms are created that would not be predicted given the grammar of VASL. I call this process “sign creation.”
7.2.1 Signal Transposition
Signal transposition is a type of deictic transposition, or a “displacement or alteration of the indexical ground of utterances” (Hanks 1990:197). For example, in quoted speech, the pronoun “I” can, and often does, refer to someone other than the speaker, as in the sentence, “You said, ‘I don’t want any’ ” (ibid.). In this example, the formal element “I” is projected onto a displaced plane by placing it after the phrase “You said.” This is an example of a deictic transposition. In signal transposition, the formal element, which is the handshape, is projected onto a displaced physical plane, which is the body of the addressee. As the deictic field was reorganized along tactile lines in the Seattle DeafBlind community, signal transposition emerged as part of a broader figure/ground shift in the immediate environment. It is an interactional process; however, it has linguistic consequences.
Prior to the pro-tactile movement, deictic signs were produced as they would be in VASL. That is to say, they were directed toward referents situated in the deictic field of VASL. Visual access to the immediate environment was assumed, as were visual memories and the capacity to imagine visual relations and dynamics.
From the perspective of a tactile person, attuned to the tactile dimensions of setting, a pointing sign like the one in Figure 7.4 is uninterpretable in two respects. First, the sign presupposes visual access to the immediate environment: it launches a trajectory against the visible backdrop of the signer's body and other visible dimensions of context. If the context is not visually accessible, the trajectory will be abstract. Second, the sign articulates to the deictic field of VASL, which requires visual access and visual modes of orientation. Without access to that field, reference will be more difficult to resolve.
Figure 7.4: Tactile Reception of VASL Pointing Sign
The solution to these problems was twofold. First, DeafBlind people established a deictic field that was accessible to anyone who cultivated tactile sensibilities and modes of orientation. This structured the space within which pointing signs could be directed. Second, the sign itself was transposed onto the body of the addressee. For example, in Figure 7.5, the signer has just established a correspondence between the palm of the addressee and the United States.17 She then points to a location on the palm of the addressee in order to locate a specific state in relation to the rest of the country. This is an example of pointing in an anaphoric deictic field organized along tactile lines. Just as VASL users establish locations in the space in front of the signer and then refer back to them as the discourse unfolds, TASL signers establish locations on the body of the addressee and refer back to them as the discourse unfolds. While this change is motivated by changes in the deictic field, it has implications for the internal organization of the deictic system of TASL.
Figure 7.5: A Transposed Pointing Sign
The deictic system of TASL is new. However, given the changes that have taken place in the deictic field, further developments are expectable. First, pointing signs in visual signed languages are distinguished from one another by differences in the orientation of the handshape, the extension of the arm, and eye-gaze patterns (Pfau 2011:148-151). All of these formal mechanisms for language-internal distinctions require visual access to the ground of sign production. The orientation of the handshape is only accessible if the visible backdrop of the body is accessible; the extension of the arm is only accessible if the addressee has access to the whole arm; and eye-gaze patterns require visual access as well. None of these mechanisms are likely candidates for marking linguistic oppositions, given a tactile habitus in a tactile deictic field. Instead, some dimension of the tactually (as opposed to visually) accessible ground should be recruited to distinguish pointing signs from one another. In the current state of development, these distinctions have not settled into formally stable, contrastive patterns. However, a key question for further research is whether or not tactile forces on the body of the addressee might be recruited for these purposes.
For example, will signers distinguish nominal and locative points by using different and distinguishable amounts of pressure on the body of the addressee? Will proximal and distal meanings be distinguished via differences in movement, for example, a tracing, linear movement versus a punctual movement? My experience using TASL in its early phases of development has led me to these intuitions, and in future research, after the system has developed further, I plan to pursue these questions. For the time being, it is clear that TASL signers are transposing deictic signs onto a tactually accessible ground. This is putting pressure on constraints at the phonetic and phonological levels, as new places of articulation are incorporated into “signing space.”
For example, in Figure 7.6, pointing signs are produced on the arm and chest of the addressee to mark relative spatial relations between locations. The locations had been associated with cities around the world in prior discourse. This process of establishing temporary correspondences is structured by the anaphoric deictic field. The anaphoric deictic field is not a free-floating, empty space, nor is it a product of a single interaction. It is constrained by modes of access and orientation, which outlast any one encounter. The only locations that can be admitted into the tactile anaphoric deictic field are those that can be identified and distinguished from each other against a mutually accessible ground. Practices for establishing an anaphoric deictic field had to be developed in the pro-tactile workshops. These practices involved deliberate tactile explorations of the objects at hand, through which participants gained reciprocal access.
An example of this is the napkin-folding exercise led by Adrijana, which involved learning how to do a “pocket fold.” The explicit aim, according to Adrijana, was to demonstrate that DeafBlind people are not slow learners as many of them had come to believe. Rather, sighted people are bad at explaining things from a tactile perspective. With each student she used specific examples to illustrate their speed and ability in learning a new task when the task is explained to them “in the tactile way.” In the terms being developed here, Adrijana was replacing the deictic field of VASL with a new field, organized along tactile lines. Deictic signs were transposed onto a tactile ground as part of this broader transformation, which increased coherence between the deictic system of the language and the field to which it articulates. Indeed, DeafBlind people were much faster learners when their language and the contexts of its use were aligned.
Figure 7.6: Transposed Pointing Signs
Linking the language to the deictic field was accomplished slowly over the course of many interactions like the following. In Figure 7.7, Adrijana guides Hank's hands to the napkin. From there, she puts her hands flat on top of the napkin (Figure 7.7a) and then she slips her hands out from under Hank's, so he has direct access to it (Figure 7.7b). In Figure 7.9, Adrijana re-folds the napkin, places it back on the table, and presses it down with both hands, making sure the edges are lined up. In Figure 7.9a, Hank follows Adrijana's hands and his fingers are in a position where the movements of her fingers are perceptible. In Figure 7.9b, Adrijana places the napkin back onto the table, and Hank's fingers slip off of hers to touch the napkin. In Figure 7.9c, Adrijana flattens her hands out and smoothes out the napkin, pausing at each corner to feel that the layers are stacked directly on top of one another. Hank's hands follow Adrijana's, so this sequence of actions draws his attention to the rectangular shape of the object. In Figure 7.9d, Adrijana once again slips her hands out from under Hank's so he can explore the object further on his own.
No linguistic signs are exchanged in this sequence. However, each move is important for establishing a structured, mutually accessible space within which deictic reference can be accomplished. Attention has been drawn to the edges of the napkin, the distances between corners, and therefore, the overall shape of the object. Attention has also been drawn to the multiple layers, folded over one another, the texture of the material, and whatever other qualities present themselves in the course of Hank's exploration. This kind of sequence, where reciprocal access to the referent was established and particular aspects were foregrounded, became an expected prerequisite to acts of referring.
Figure 7.7: Adrijana draws Hank's hands to the object
Figure 7.8: Adrijana picks up the napkin
Figure 7.9: Adrijana re-folds and flattens napkin so edges are lined up
Figure 7.10: Adrijana directs Collin’s attention to the pocket
Once access to the object is established, characterizing signs are used to individuate aspects of the object, linking those aspects to other objects and to categories in the language. For example, in the following sequence, Adrijana embeds the sign pocket in the deictic field, and in doing so, links it to two pockets in the immediate environment: the one on Collin’s shirt, and the one they have just created by folding the napkin. The interaction begins the same way that Hank and Adrijana’s interaction began--by establishing reciprocal access to the object. Then, Adrijana folds the napkin into a pocket, while Collin follows along tactually, his hands on top of hers. Then, in Figure 7.10, Adrijana draws Collin’s attention to the pocket she has just created by using a flat-handed pointing sign (Figure 7.10a), followed by the sign feel (Figure 7.10b), followed by the sign pocket.
Figure 7.11: Collin reaches into the pocket of the napkin
In Figures 7.11a-7.11b, Collin responds by reaching up toward the top part of the pocket in the napkin. In Figure 7.12, Adrijana and Collin link the pocket on the napkin to the pocket on Collin's shirt. In Figure 7.12a, Collin signs pocket. In Figure 7.12b, Adrijana signs pocket on Collin's shirt and finds an actual pocket there, at which point she slips her hand into his pocket while signing pocket. Collin smiles and tilts back his head. In Figure 7.12c, Adrijana grabs the edge of Collin's pocket, pulls it out, and lets it snap back against his body in Figure 7.12d. In Figure 7.12e, Collin emphatically signs understand.18
Figure 7.12: Pocket on napkin is linked to the sign pocket and to the pocket on Collin's shirt
In this example, you can see the migration of the language toward the coordinates of the deictic field. Not only are deictic signs directed at mutually accessible dimensions of the object, but the characterizing sign pocket is also transposed onto the body of the addressee. Everything is shifting to a tactile ground, including the sign itself. In other words, along with a shift in orientation to the immediate environment, the signal, generated by the grammar and subject to its constraints, is also affected. The movement and location parameters of the sign have changed so that all that remains from VASL, post-transposition, is the handshape. This example shows that signal transposition is just one part of a broader shift in the indexical ground of utterance, and yet, there are consequences for how signs are produced and received, which, as we will see in the following chapters, echo in the grammar in arbitrary ways. In the next section, signal transposition is taken a step further, so that aspects of the handshape are modified as well. These modifications help signers establish coherent relations between the linguistic system and the deictic field.
7.2.2 Sign Calibration
During the pro-tactile workshops, participants transposed signs onto the body of the addressee, but they also calibrated signs to multiple dimensions of the deictic field, leading to greater divergences between TASL and VASL. Sign calibration is an interactional process through which a linguistic element or process is transformed as deictic relations are incorporated. One activity that elicited sign calibration at greater rates than other activities was called "the object game." In this game, dyads were given a bag full of objects--things like old cell phones, toy snakes, and tea strainers--and they were asked to describe one in detail. When they were done, they handed the object to their partner, who explored it tactually and then evaluated the description in terms of how well it prepared them for the qualities of the object, or in the terms of the game, whether or not the description "matched" the thing. Lee, one of the instructors of the workshops, explained the game to two participants as follows:
The point of this game is not to guess what the object is based on its function. A function-based explanation would be like this: The first person says: 'It's something you pour hot water through to make tea or coffee,' and the second person says: 'Oh! I know! It's a filter!' Instead of that, what I want you to do is find a way to describe the tactile qualities of the specific object--textures, patterns, bumps, etc.--and then decide if the description matches or not.
Participants all started out using VASL polycomponential signs for this task. However, these forms often led to frustration, blank stares, confusion, and eventual requests for intervention on the part of the instructors. Lee intervened in these cases and introduced new constructions, which were calibrated to the relevant and accessible dimensions of the object from a tactile perspective. In contrast to the VASL constructions, these new TASL signs elicited memories, questions, and/or expressions of understanding (e.g. "Oh! I see!" or "I get it!" or laughter while signing "Yes").
Figure 7.13: The Measuring Tape
The following series was taken from an interaction between Nina and Allen, in which polycomponential signs from VASL failed to prepare the recipient for the relevant and accessible qualities of a measuring tape, like the one in Figure 7.13. Nina begins her description by combining a b-handshape with a bent-b-handshape, as in Figure 7.14, and repeats this sequence once. This characterizes the shape of the object as rectangular in a way that would not be surprising for users of VASL.
Figure 7.14: Nina specifies a rectangular shape
Then, in Figure 7.15, Nina describes what is typically done with an object like the one she is describing. First, in Figures 7.15a-7.15b, she pulls the imaginary tape out of its base on a plane that is horizontal relative to her torso (as if she is measuring a table). Her mouth is pursed here and she is blowing out air through partially closed lips to create a flapping movement. In VASL this kind of mouth movement has been analyzed as having morphemic status (e.g. Frishberg 1975, Liddell 1980). However, Allen does not have perceptual access to Nina's mouth. In Figures 7.15c-7.15d, Nina repeats the previous sequence, but this time she pulls the tape out on a vertical plane rather than a horizontal plane (as if she were measuring a wall instead of a table). In Figures 7.15e-7.15i, Nina signs one, two, three, four, five, from left to right along the path that had previously been associated with the measuring tape as it comes out of its base. Finally, in Figure 7.15j, Nina signs inch.19
After Nina's initial description, Allen tells her he doesn't understand and she responds by starting over. At this point, frustration is mounting. These kinds of tense interactions were common prior to the pro-tactile movement. One of the strategies that some of the most experienced and skillful interpreters used in cases like this was to draw on their extensive knowledge of the life history of the DeafBlind person they were communicating with. They would look for a past experience they could use as a jumping-off point for description and, in that way, fill in the ground of reference. This was a way of compensating for the absence of an accessible deictic field, including not only perceptible objects in the immediate environment, but also shared knowledge and "common sense" (Hanks 1990). If Nina knew that in high school Allen used to make birdhouses for fun (this is hypothetical), she might start out by saying, "Do you remember in high school when you used to make birdhouses? Explain to me how you did it." Then at some point, Allen would get to the part where he measures the wood, and Nina would ask him to describe the thing that he used to measure the wood.
Figure 7.15: Nina's First Description using VASL Polycomponential Signs
There are two problems with this approach. First, it became evident as the pro-tactile classes went on that, because interaction had been so heavily mediated by sighted people, DeafBlind people didn't actually know very much about each other, and definitely not the kind of detailed information that would allow them to trigger specific memories. Second, Allen has been blind for many years. Even if he does remember the birdhouses, he may not remember how he measured the wood, let alone the physical details of the instrument he used for measuring. There is only so far that visual memory can take you, and when it runs out, a piece of the indexical ground of reference erodes.
Faced with these challenges, Nina tries again. She starts this time by appealing to the more general category "tool" (Figure 7.16). She signs tool and then uses a combination of a b-handshape (in Figure 7.16a) and a bent-b-handshape (in Figure 7.16b) to describe the rectangular shape of the object. In Figure 7.17, she continues by describing the way one typically uses a measuring tape, pulling it out of its base (Figures 7.17a-7.17b). Then Nina signs table and in Figures 7.17c and 7.17d specifies the size and shape of the table using a b-handshape and a bent-b-handshape respectively. Then she repeats her representation of a person pulling the tape out of its base. Finally she signs inch, fingerspells i-n-c-h, and starts to repeat the sequence in Figures 7.15e-7.15j--"one, two, three, four, five"--but she is interrupted by Lee, who joins the interaction.
Figure 7.16: Nina's Second Attempt
Nina's attempt to describe the measuring tape involved a familiar procedure for users of VASL. First, she establishes a geometric shape (a small rectangle). Then she moves to how it is handled and for what purpose--you pull the tape out from the base and measure things like tables with it. The description assumes that the rest can be filled in. In the pro-tactile workshops, it became clear that polycomponential signs like this had to be produced with the expectation that the addressee could not fill in the rest. Therefore, signers began to include far more detail, and the details were more specific to the actual object of reference, as opposed to the general category to which it belonged. However, Nina was unable to do this in a way that Allen could understand. Tensions between Nina and Allen grew.
Figure 7.17: Nina's Second Attempt, Continued
Figure 7.18: TASL Representation of a Rectangular Shape ((c) Adr's hand; (d) signer's hand)
Figure 7.19: TASL representation of width (g-handshape on thumb; (d)-(e) Adr's thumb, signer's fingers)
Eventually, Lee intervenes in the interaction and asks what the problem is. Nina tells her that she has already tried to explain that the object is a rectangular tool used for measuring that has a tape that is wound up and measures by the inch as you pull out the tape. She essentially repeats what she had already said twice before to Allen. She says with frustration that Allen doesn’t understand. The problem here is not only that Allen is having difficulty perceiving the formal properties of the signs; signal transposition alone would remedy that. The problem also stems from asymmetrical access to visual memories and visually derived knowledge. Allen doesn’t know what a measuring tape is and Nina can’t imagine that this is the case.
In order to address both problems, Allen must have tactile access to the object, learn about its material properties, its physical functionality, and its typical uses. The signs used to draw attention to these aspects of the object must be perceptible and they must articulate to mutually relevant and accessible aspects of the object. When Lee intervenes, she calibrates her description to these parameters.20
Like Nina, she begins with the shape of the object. However, rather than using the b and bent-b hand configurations, she draws a rectangle on Allen's palm with her index finger (Figure 7.18a). She then repeats this on Nina's palm (Figure 7.18b). Schematic representations of this sign are given in Figures 7.18c and 7.18d. This sign establishes relative spatial locations on the tactually perceptible ground of the addressee's hand. In VASL it is possible to trace a rectangular shape in the space in front of the signer, using the non-dominant finger as an anchor for relative spatial relations. Signs like this have been analyzed as "size and shape specifiers" (Schick 1990; Engberg-Pedersen 1993; Aronoff et al. 2003:67; Schembri, Jones and Burnham 2005), which fall under the broader category of polycomponential signs. Generating the TASL sign in Figure 7.18 follows the same general pattern as generating size and shape specifiers in VASL; however, it embeds the conventional pointing handshape in a different deictic field. As we will see, this has further consequences.
In Figure 7.19, Lee describes the shape and size of the tape that pulls out from the base. In Figure 7.19a, she signs with, indicating that what she is about to describe is a part of the object as opposed to the entire object. She then traces the length of Allen's thumb with a g-handshape, moving her thumb and index finger up, down, and back up (Figure 7.19b). She repeats this motion on Nina's thumb in Figure 7.19c. These signs are represented schematically in Figures 7.19d and 7.19e. This is a way of establishing the width of the object without specifying the length or the overall shape. She uses it here to characterize the width of the tape that can be pulled out of the base of the measuring tape. Lee does this by repeatedly tracing the outer edges of the addressee's thumb, refraining from adding perpendicular lines of any kind.
A g-handshape, like the one used in this sign, was also used in the comparable VASL construction to describe the relatively narrow shape of the tape measure. That VASL polycomponential construction also included a sign representing the way the object is typically handled, which conveys some information about its shape. After that, the focus was on the numbers marked on the measuring tape, and the description ended with the sign measure.
In the TASL example, corresponding parts of the construction have been transposed onto the body of the addressee. This requires a modification of the movement and location parameters of the sign. In addition, in this example, handshapes have also been modified as they articulate to a mutually accessible, tactile ground. The b/bent-b handshapes that Nina used in the VASL example were replaced by a pointing sign, which was used to trace a shape on the addressee's palm; and instead of describing the measuring tape in the space in front of the torso, the signer traces the shape of the addressee's thumb (as in Figure 7.19), making several tracing movements, one after the other. This differs from the corresponding VASL sign, which incorporates a single movement that extends all the way across the space in front of the signer's torso, mapping the shape of the tape onto its trajectory when it is pulled out. In the TASL example, the shape and the trajectory are separated out and there is no spatial redundancy between the two path movements. Finally, the numbers on the measuring tape are not marked in Braille, so they are not relevant given tactile modes of orientation and access. Therefore, they are not incorporated into the TASL sign.
These changes are a result of a principled shift in the organization of the deictic field. This broader transformation led TASL signers to transpose signs onto the body of the addressee. However, this led to further changes in how polycomponential signs were constructed. Not only were the signs altered to make them more perceptible, they also incorporated different dimensions of the objects they represent. In other words, there are new rules emerging for generating polycomponential signs in TASL, which can be expected, over time, to have phonological and morphological implications. I am calling the interactional process contributing to this divergence "sign calibration." Signal transposition and sign calibration, which are both driven more broadly by deictic integration, are also having further effects on the internal organization of deictic signs in TASL. In order to capture these effects, I introduce a third and final term: "sign creation."
7.2.3 Sign Creation
Sign creation involves signal transposition and sign calibration, but goes further, allowing new kinds of signs to be created that would not be predicted or permitted by the grammar of VASL. Sign creation gives rise to forms that are far more predictable from the perspective of the deictic field of TASL than from the grammar of VASL. In the previous sections, changes in the production and reception of signs were linked to a broader reconfiguration of figure-ground relations in the immediate environment. In this section, I argue that as signs are calibrated to those relations, novel possibilities for the production, reception, and derivation of signs arise.
Figure 7.20: Snake Sequence (Lee describes the shape of the snake's body)
In Figure 7.20, Lee is describing the shape of a toy snake's body. First, she grabs Manuel's right arm, rotates it so his palm is facing down, and pulls it back and up near the top of her head. Then, she cups her hand around his arm (see Figure 7.20a) and traces a line from the wrist (Figure 7.20b) to the armpit (Figure 7.20c). Then, in Figure 7.21, she describes the way the snake's body moves. She does this by gripping Manuel's arm just below the armpit while keeping hold of his wrist. Then she moves each point of contact alternately to produce a snake-like motion in his arm. There is nothing in the grammar of VASL that would predict or allow a form like this.21 However, it is expectable from the perspective of the deictic field of TASL, and it has grammatical consequences. Manuel's arm is not just a surface on which signs are produced; he must use his arm to actively participate in producing signs. This requires a kind of motor coordination between signer and addressee that is never required of visual signed language users. In addition, if TASL signs can be derived by drawing on the addressee as a source of actively articulated, meaning-bearing forms, this presents the signer with new morphological possibilities.
Figure 7.21: Snake Sequence (Lee coaxes Manuel's arm into a snake-like motion)
These new ways of generating signs emerged out of the pro-tactile workshops as a way of linking the language to context. Participants did this by tacking back and forth between the objects they were describing and the signs used to describe them, tightening relations between the linguistic system and the deictic field as they went. This resulted in a divergence between the visual and tactile systems. For example, VASL signers do not recruit the body of the addressee in routine communicative contexts. The introduction of additional articulators brings new affordances and limitations for the production and reception of signs, as well as new derivational possibilities. These changes began with the emergence of a new, tactile habitus and the reconfiguration of the social and deictic fields. As deictic signs were instantiated in these new fields, they were calibrated to them. Calibration eventually took on a logic of its own, which permitted the creation of signs that would not be predicted by the grammar of VASL, and yet are expectable from the perspective of the deictic field of TASL.
7.3 Effects of Deictic Integration on the Deictic System of TASL
In this chapter I have argued that deictic integration is leading to a divergence in how deictic signs are produced, received, and distinguished from one another. While there are elements, such as handshapes, borrowed from VASL (as in Figure 7.18), those elements are increasingly caught up in and organized by the deictic field of TASL. This, in turn, is leading to a morphological divergence in how polycomponential signs are generated, making it possible for TASL signers to create new signs, which would not be predicted and are not allowed by the grammar of VASL. This is the first moment in the emergence of TASL as a distinct, linguistic system.
In this chapter, I have focused on two categories of deictic signs: pointing signs and polycomponential signs. However, since agreeing verbs also integrate deictic and linguistic elements, it is expectable that they will also be affected by these processes, leading to a divergence in the syntactic systems of TASL and VASL as well. In addition, I predict that as the morphology of TASL becomes more systematized, it will diverge further from the morphology of VASL. This prediction is, in part, based on the fact that polycomponential signs, like those analyzed in this chapter, are a source of new lexical signs in most signed languages (Aronoff et al. 2003, McDonald 1982, Engberg-Pedersen 1993, Klima and Bellugi 1979, Schembri 2000, Shepard-Kegl 1985, Zeshan 2003). If the rules for generating polycomponential signs are being reconfigured, this should affect morphological processes in TASL more broadly, as the language changes over time. TASL is new and the effects of deictic integration have only begun to manifest. However, given stable conditions in the social and deictic fields, a more comprehensive reconfiguration of the grammar appears inevitable. In the next chapter, I discuss the effects of deictic integration on the sublexical structure of TASL. This is the second moment in the emergence of TASL as a distinct, linguistic system.
Chapter 8
The Sublexical Structure of TASL
8.1 Introduction
In this chapter, I argue that a reconfiguration in the deictic field of Tactile American Sign Language is leading to changes in the sublexical structure of the language. Research on language use among DeafBlind people in the United States,1 conducted prior to the pro-tactile movement, describes differences in production and reception of signs as "accommodations" and "adjustments" (Collins and Petronio 1998; Collins 2004; Petronio and Dively 2006). Collins states that "Tactile ASL is a clear example of a dialect in a signed language" (2004:23), and Petronio and Dively concur, defining it as "a variety of ASL used in the DeafBlind community in the United States" (2006:57). I am arguing that the pro-tactile movement triggered a more radical divergence, resulting in two distinct linguistic systems: Tactile American Sign Language (TASL) and Visual American Sign Language (VASL). This chapter compares the sublexical structure of these two systems.
In section 8.2, I begin by distinguishing between tactile reception of VASL on the one hand and TASL on the other. I argue that tactile reception of VASL allows a visual language to be (partially) perceived tactually, without affecting the sublexical structure of Visual ASL, much as lip-reading allows a spoken language to be (partially) perceived visually, without affecting the sublexical structure of English. In contrast, TASL is an emergent language. Previous work on language use among DeafBlind people is not directly comparable to the phenomena examined here because the research was conducted prior to the pro-tactile movement, when DeafBlind people were engaging only in the tactile reception of VASL. This earlier work does, however, raise several problems that are relevant to the changes currently under way. These problems are addressed in section 8.2. In section 8.3, I introduce the sublexical structure of VASL as a baseline for comparison. In this section, I also introduce the notion of "phonology" as it has been applied to signed languages. Drawing on the analysis presented in Chapter 6, I argue that in order to know whether you are examining a phonological phenomenon or an interactional phenomenon, a "basic" set of participant frames must be established. Prior to the establishment of a basic frame, core lexical items cannot be distinguished from the instances of their use. In section 8.6, I show how changes in basic participant frames are affecting the production and reception of signs in a tactile field. Since these changes are occurring in basic participant frames, they constitute changes in the sublexical structure of the language, as opposed to momentary, pragmatic effects. In section 8.5, I review sublexical constraints that are relevant to the changes observed in TASL, and in section 8.7, I show how these constraints are being reconfigured. I conclude that changes in the deictic field of TASL are putting pressure on the grammar in ways that are leading to a divergence in the sublexical structure of TASL and VASL.
8.2 Tactile reception of VASL versus TASL
Modifications that DeafBlind people were making to VASL prior to the pro-tactile movement have been analyzed as variations on the standard. Variation at the sublexical level has been documented in the use of signing space, changes in orientation, location, and movement (Collins and Petronio 1998:21-7). Many of these changes are linked analytically to non-linguistic elements and relations in the immediate environment. For example, differences in the use of signing space are linked to shifts in bodily configurations among participants.
Figure 8.1: The “Signing Circle”
Collins and Petronio argue that comparable phenomena can be observed among sighted users of VASL. The “signing circle” they refer to in the following passage is a canonical representation of the space within which signs are produced. A version of the signing circle is reproduced in Figure 8.1.2 The circle is meant to mark the outer boundary of this space for VASL. Collins and Petronio note that
[u]nder certain conditions, the signing space (the circle) can shift in visual ASL. For instance, if a signer is standing in the street and signing to someone who is looking out a second-floor window, the circle shifts upward. When the signing space shifts, the location of signs shift in relation to the signer’s body. For example, the citation form of now is located about lower chest level. When the signing space shifts upward as the signer communicates with someone on the second floor, the location of now shifts upward to about chin level from the normal chest level. If two people want to have a private conversation and “whisper,” they will greatly reduce their signing space. If a person signs to someone very far away, the signing space will be noticeably increased.
Two significant problems are raised by these observations. First, momentary shifts in perspective within a given interaction must be distinguished from more lasting shifts in the sensory orientation of the language user. One of the most explicit aims of the pro-tactile workshops was to establish participant frameworks that would allow DeafBlind people to communicate directly with one another, rather than relying on sighted people to mediate. This required DeafBlind people to cultivate tactile sensibilities. Toward this end, they wore blindfolds to discourage reliance on remaining vision; they engaged in activities where the aim was to describe objects according to tactile, rather than visual qualities; and they played games such as “tactile pictionary” in order to develop tactile ways of observing the non-linguistic activity of others. These efforts, in addition to the fact of significant vision loss, led to lasting shifts in sensory orientation and new DeafBlind subjectivities and modes of interaction (see Chapters 5 and 6). This kind of shift in habitual modes of orienting to the immediate environment must be distinguished analytically from transient shifts in perspective.
A second related problem is the relationship between the signing circle and the space within which actual utterances unfold. The signing circle is a typified representation in the same way that the citation form of a word in a dictionary is a typified representation. When we look up a word in the dictionary, we do not assume that the form we see is specific to loud environments, bright lights, or situations where the person we are talking to can’t hear certain frequencies. The same is true for representations of “signing space.” This is because in both cases, there is a distinction operating between phenomena organized by the linguistic system and phenomena organized by the deictic field. A limit on where lexical signs can be produced within basic participant frameworks (see chapter 6), constitutes a linguistic constraint on the sublexical structure of the language. Variation in the way signs are produced in the course of interaction does not necessarily signal a change in those underlying constraints.
Collins and Petronio’s comparison between signing space in Tactile and Visual ASL is operating across linguistic and non-linguistic domains. In order to describe changes in the sublexical structure of TASL, momentary effects of language use must be distinguished from changes in the linguistic system. This is only possible given a clear analytic distinction between “participant frameworks” and “participant frames.”
Participant frameworks are the emergent configurations that communicative agents occupy in the unfolding of an interaction.3 A particular configuration found on one occasion, such as a Deaf sighted person on the first floor signing to a Deaf sighted person on the second-floor balcony, is an example of a participant framework. In contrast, participant frames are the repository of regularities that emerge in participant frameworks across encounters. Participant frameworks can be highly contingent on momentary dynamics in the physical or interactional environment; however, under the weight of repeated use and habituation, variation in certain frameworks settles out over time, yielding relatively stable and repeatable "participant frames."4 As was discussed in Chapter 6, participant frames in the deictic field of TASL have shifted. This means that the unmarked contexts for the production of lexical signs have also shifted, making changes in the sublexical structure of TASL distinguishable, analytically, from momentary effects of language use.
Collins and Petronio were observing communication between DeafBlind people in Seattle prior to the pro-tactile movement and therefore prior to the conventionalization of participant frameworks in a tactile field. This is why we see such a wide range of configurations in their analysis and no hierarchy among them. On this topic, they explain that
[t]he data contained many examples of tactile conversations with the signer and receiver in different positions. Varying positions included the following: both standing face-to-face; both sitting side-by-side; the signer sitting and the receiver standing, or vice versa; and in some cases the signer and receiver leaning across a table or another person as they communicated tactilely.
Just as underlying phonological units are realized differently in different contexts, underlying participant frames manifest in different ways in situated frameworks. However, a description of participant frames should not include any information that requires reference to the infinite array of possible contextual circumstances in which participant frames might or could be instantiated. They must assume typified spatial relations between speaker and addressee, typified acoustics, lighting, etc. The cases described by Collins and Petronio take into account many contingent dimensions of context. For example:
In one occurrence, two people were leaning across a table. Both had their arms almost completely outstretched; their hands touched over the table. The signer signed neat, a sign located on the lower cheek. As neat was signed, the signer leaned forward and shortened the distance the hand had to move to contact the lower cheek. Because of the shortened distance, the receiver was able to remain connected with the signer's hand.
They note that this type of adaptation occurred most frequently when signers were at different heights, or were not able to move closer to one another for some reason (ibid.:24). Retrospectively, it is clear that the variation they witnessed was due to the absence of participant frames in a deictic field organized around tactile modes of access and orientation. As a result of the pro-tactile movement, a tactile deictic field was established and a repository of participant frames emerged (chapter 6). In what follows, analyses of sublexical constraints in TASL rely on this baseline of relatively stable participant frames, thereby excluding momentary effects of language use from the analysis.
8.2.1 Tactile Reception in a Visual Field
Prior to the pro-tactile movement, tactile reception of VASL was a compensatory strategy used to perceive a visual language. As such, tactile access to VASL signs was partial and, like lip-reading, required various forms of reconstruction and inference. In a study of the tactile reception of sign language, Reed et al. (1995) found that DeafBlind people received VASL signs with 60-85% accuracy.5 Four categories of error were identified: (1) "semantic/syntactic, in which the substituted sign was dissimilar phonologically to the stimulus sign but had a semantic or grammatical relation to the target"; (2) "phonological, in which the formational properties, but not meaning were similar between stimulus and response"; (3) "semantic/phonological, in which the target and response were similar phonologically and semantically (often morphologically related)"; and (4) "random, which included errors that could not be classified into any of the preceding categories." The study showed that the largest source of errors was inaccuracy in the reception of the phonological parameters of VASL. This finding is explained as follows:
Given that ASL has evolved for reception through the visual sense, it is not surprising that some of its phonological properties are not easily perceived tactually. Perhaps further accommodations and adaptations of ASL for reception through the tactual sense would contribute to increased efficiency of communication with this method (Reed et al. 1995:15).
Patterns in communication among DeafBlind people in Seattle support the finding that tactile reception of VASL disrupts phonological processing. In the past, attempts to circumvent this problem have included further accommodations, as Reed et al. suggest. For example, the distinction between the VASL sign man and the VASL sign woman is inaccessible in a tactile field of engagement because the two signs constitute a minimal pair, differing only in the initial place of articulation. man makes contact with the forehead of the signer and then the chest, while woman makes contact with the chin of the signer and then the chest (see Figure 8.2). Since the landmarks of the face are not visible, they cannot be used as a backdrop to differentiate between the locations of the two signs. This problem recurs whenever two signs require a visible ground to be distinguished from one another. To accommodate, it has become common among some interpreters and DeafBlind people to use an older, less common sign for man, which differs from woman in both location and handshape rather than location alone, when their addressee lacks the visual capacity to distinguish between the more common signs.
Figure 8.2: man and woman in VASL
Substituting semantically equivalent signs in cases like these can patch up the problem, and one can imagine a scenario in which this type of patching becomes the main mechanism for adapting a visual language to a tactile mode of reception. All you would need is a rule or set of rules that could be applied consistently. For example: for all minimal pairs in VASL that differ only in location, substitute one sign in the pair for a different, semantically equivalent sign. If this rule were adopted by everyone, then the replacement sign would become the standard sign and any ambiguity in distinguishing man from woman would be resolved. The result would be Visual American Sign Language plus a set of rules for sign-substitution based on phonological and semantic criteria. There are many other ways in which the visual system could have been, and has been, adapted on a case-by-case basis as needed (for example, see Chapter 5, Collins 1994, Collins and Petronio 1998, Petronio and Dively 2006, Quinto-Pozos 2002, Reed et al. 1990, Reed et al. 1995). However, with the inception of the pro-tactile movement, this approach was abandoned and reciprocal, tactile access to the sign-vehicle was established instead. This led to a more radical reorganization of the language, which, I am arguing, included a divergence in the sublexical structure of TASL and VASL.
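The logic of such a substitution rule can be sketched in a few lines of code. The following Python fragment is purely illustrative: the lexicon entries, the parameter triples, and the helper names are hypothetical stand-ins invented for this example, not an actual phonological analysis of VASL.

# A toy lexicon: each sign is a (handshape, location, movement) triple.
# All entries are hypothetical stand-ins, not real VASL analyses.
LEXICON = {
    "man": ("flat-b", "forehead", "contact-then-chest"),
    "woman": ("flat-b", "chin", "contact-then-chest"),
    "man-old-variant": ("c-hand", "side-of-head", "grasp"),
}

def differ_only_in_location(a, b):
    # True if two signs form a minimal pair distinguished by location alone.
    return a[0] == b[0] and a[1] != b[1] and a[2] == b[2]

def apply_substitution_rule(lexicon, pair, replacement):
    # For a minimal pair differing only in location, swap one member
    # for a semantically equivalent sign that differs in more parameters.
    gloss_a, gloss_b = pair
    if differ_only_in_location(lexicon[gloss_a], lexicon[gloss_b]):
        lexicon = dict(lexicon)
        lexicon[gloss_a] = lexicon[replacement]
    return lexicon

adapted = apply_substitution_rule(LEXICON, ("man", "woman"), "man-old-variant")
print(adapted["man"])  # ('c-hand', 'side-of-head', 'grasp')

Applied consistently across the lexicon, a rule like this would yield exactly what is described above: VASL plus a set of substitution rules, rather than a reorganized linguistic system.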
The practices that allowed for tactile reception of VASL are similar to what Sapir calls a "substitutive" system in several respects (1995 [1927]). According to Sapir, language (as opposed to other semiotic systems) is defined in part by its ability to directly communicate feelings and thoughts via a system of "phonetic symbols." If the thoughts and feelings of a communicator have to pass through another system first, then you know you are dealing with a substitutive semiotic system like writing, or a supplementary semiotic system like gesture.6
In signed languages we see this in the distinction between fingerspelling and signing. Fingerspelling is not a language, but a substitutive system for representing language. In order to understand a fingerspelled word, knowledge of English must be drawn on.7 In contrast, VASL signs are understood without passing through English. The only knowledge that is required is knowledge of VASL and of the world within which VASL is used. Systems like fingerspelling are useful because they allow for transfers in modality. A written English sentence represents a spoken English sentence visually. Likewise, fingerspelling represents English words in a rapid-fading visual channel, which is easily integrated into a signed utterance. The practices that DeafBlind people had developed for receiving visual signs tactually are like a substitutive system in the sense that they allow for a transfer of modality--visual to tactile--while preserving certain formal characteristics of the represented word or phrase. However, only part of the message is transferred, which causes the primary semiotic channel to be de-linked from the supplementary semiosis it would otherwise be embedded in, hence the necessity of reconstruction and inference.
Goffman argued for the central importance of paralinguistic (i.e. supplementary) cues, such as gaze, shifts in posture, and touch, for such things as managing turns, assessing reception via back-channeling, linking speech to the situated present, and showing evidence of attention. According to Goffman, these things are so important that, "for the effective conduct of talk, speaker and hearer had best be in a position to watch each other" (1981:129). The fact that people understand each other on the phone is not evidence of the singular importance of words, but rather of the power and efficacy of "reconstruction" and "transformation" (ibid.:129-30). It follows that if users of English only ever talked on the phone, the structure of interaction surrounding the English language would change, and audible conventions for marking intended addressee(s), providing back-channeling cues, showing evidence of attention, etc., would become required.
VASL, when received tactually, is detached from the supplementary semiosis which, from a sighted perspective, surrounds it. In VASL, primary and supplementary systems, which are produced by many parts of the body, are received visually. Prior to the pro-tactile movement, access for DeafBlind people was restricted mostly, if not entirely, to the hands of the signer. All other aspects of signs and the bodily cues that surround them had to be reconstructed via memory and inference. The same was true for non-linguistic facial expressions, bodily postures, back-channeling cues, and other supplementary semiotic signals. These sources of ambiguity were added to already-strained reception of manual, lexical signs.
Therefore, the reconstruction and transformation that was necessary is comparable to the kinds of reconstruction and transformation executed by the hearer in a patchy cell phone conversation. However, reconstruction and transformation are only effective for DeafBlind people insofar as visual memories and visual sensibilities are still intact. As orientations to, and memories of, the visual world fade, tactile reception of VASL grows increasingly ineffective. Leaders of the pro-tactile movement recognized this problem intuitively and sought to re-unite lexical signs with the situated present. Before continuing on to the effects of this process on the sublexical structure of TASL, the sublexical structure of VASL is introduced as a baseline for comparison. The following section also serves as an introduction to the notion of "phonology" as it has been applied to signed languages.
8.3 The Sublexical Structure of VASL
There is far more work on the sublexical structure of VASL than can or should be reviewed here. What is important for our purposes is twofold: (1) to grasp, in the most schematic sense, how morphemes in VASL are broken down into meaningless elements, and (2) to review some relevant constraints on how those elements combine with one another. Apart from a general introduction, the sublexical structure of VASL is considered only insofar as it contrasts with emergent regularities in TASL. These points of contrast, for the most part, involve categories of analysis that are so basic to the description of VASL that in more recent work, they are folded into any argument as part of the common sense of the field. For this reason, I focus on some of the earliest work on VASL, where basic structural facts are made maximally explicit (e.g. Stokoe 1960, Stokoe et al. 1965, Battison 1978, Friedman 1977, Mandel 1981, Supalla 1982).
8.3.1 Cherology and the Aspects of the Sign
William Stokoe and his colleagues (1960, 1965) produced the first grammatical description of VASL. In this early work, they made the case that American Sign Language has sublexical structure. They called the enterprise (and the level of linguistic organization) "cherology" (from the Greek χείρ (cheir), meaning "hand"). Stokoe's most basic categories correspond to location, hand configuration, and movement, which he calls the "aspects" of the sign.8 In order to avoid potential confusions, Stokoe proposes a set of technical terms for the formational parameters of any manual sign: tabula, designator, and signation, which he abbreviates as tab, dez, and sig. The tab is the surface on which a sign is produced. The dez is the configuration of the active hand(s). The sig is the movement--either the external movement of a hand configuration from one tab to another, or an internal movement in the hand configuration, which may or may not result in a different hand configuration.
8.3.2 Tabula
At first glance, Stokoe says, the tabula of a sign appears to be determined by its proximity to readily distinguishable parts of the body, such as the forehead, the temple, the cheek, the ear, and so on (2005 [1960]:21). However, according to Stokoe, these areas of the body are not distinguished as such by the language. His example is the sign see. The tab for this sign is the eyes; however, in its phonetic production
the forefinger of the dez hand can easily brush the tip of the nose in passing across the front of the face, but when the sig is motion outward from the same region, particularly when the dez is such that the sign is interpreted as “see,” the signer and viewer tend to think of the marker as the eyes. Since no significance attaches to a contrast solely between nose and eyes as tab, these are analyzed as allochers of the tab “mid-face” (ibid.:21).
In other words, tabs are not specific places on the body, but regions with spatial thresholds. The phonetic production of the sign can vary within those thresholds, but once they are crossed, the meaning of the sign will change. The mid-face tab, for example, includes several areas of the face that in a nonlinguistic frame would be distinct, such as the eyes, the upper part of the cheek, and the bridge of the nose. It also excludes parts of the face that would be part of a coherent area, such as the lower and inner parts of the nose.
Initially, Stokoe identifies 10 tabs that are distinctive in ASL: the whole face or head, the upper face or brow, mid-face, lower face, cheek or side face, the neck, the trunk, the upper arm, the lower arm (below the elbow), and the hand (Stokoe 2005 [1960]:21). The trunk of the signer, he points out, is much larger than the face and is not divided into smaller contrastive regions the way the face is. He also adds the non-dominant arm and the non-dominant hand as potential tabs for the dominant hand, in addition to other roles they may play (ibid.:21). All of the tabs described thus far are what Stokoe calls "body tabs." There are also signs in which the tab is zero, meaning that the sign is articulated in the "neutral" space in front of the signer (ibid.:25). On this topic, he says: "The zero tab is less precisely located than the others but it is still a place, that space in front of the signer's body, where the hand can freely and comfortably move" (ibid.).
8.3.3 Designator
In order to describe the handshapes of the active hand, Stokoe appropriates the names of the fingerspelled letters of the English alphabet. However, he does not mean to say that these two categories of handshapes are equivalent. He compares the relationship between them to the relationship between phoneme and grapheme in spoken languages. Fingerspelling is a digital representation of a graphemic representation of sound units in English. Therefore, it is an "evanescent graphemic system," or a graphic system of representation that is rapid-fading, like speech.
The finger-spelled word is a series of digital symbols which stand in a one to one relationship with the letters of the English alphabet, but the word itself is a morpheme or combination of morphemes constructed from English language sounds on principles systematically described by the phonemics and morphophonemics of English (Stokoe 2005 [1960]:25).
Fingerspelled words are representations of units--either phonemes or morphemes that are organized and shaped by the principles of spoken English, and not the principles of the sign language. For example, Stokoe argued that from the perspective of cherology, the hand configurations a, s, and t, which are distinct letters in the manual alphabet, are non-contrastive in the sign language, and therefore are allochers of a single chereme. In part, he attributes the grouping of these three configurations to phonetic constraints. a, s, and t are all formed with a closed fist, but the position of the thumb relative to the fist is slightly different in each. With such minimal perceptual differences, “conditions of visibility must be good for these differences of configuration to be distinguished” (Stokoe 2005 [1960]:22).
For a distinction to be contrastive in the language, Stokoe argues, the phonetic differences must be more perceptually salient: "The sign language [ . . . ] never makes a significant contrast solely on these differences. Instead the contrast is between any fist-like hand and all other (non-fist-like) configurations" (ibid.:22). Stokoe labels this chereme a/s. Another example is the b/5 chereme, which includes several flat-hand configurations. Its allochers look like the b-hand of the alphabet, the 4-hand, the 5-hand, and a b-hand with the thumb extended. The flat hand is the common element. The fingers are either spread or closed, and the thumb is either extended or not (ibid.:22). In total, Stokoe identifies 16 contrastive hand configurations, most of which include several allochers.
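The relationship between allochers and cheremes can be pictured as a many-to-one mapping. The following Python fragment is a minimal sketch covering only the two cheremes just discussed; the labels are shorthand for Stokoe's categories, not a complete inventory.

# Map phonetic hand configurations (allochers) onto contrastive units
# (cheremes), following Stokoe's grouping of fist-like and flat hands.
ALLOCHER_TO_CHEREME = {
    "a": "a/s", "s": "a/s", "t": "a/s",   # fist-like configurations
    "b": "b/5", "4": "b/5", "5": "b/5",   # flat-hand configurations
    "b-thumb-extended": "b/5",
}

def contrastive(config1, config2):
    # Two configurations can distinguish signs only if they belong to
    # different cheremes; allochers of one chereme never contrast.
    return ALLOCHER_TO_CHEREME[config1] != ALLOCHER_TO_CHEREME[config2]

print(contrastive("a", "t"))  # False: both are allochers of a/s
print(contrastive("s", "5"))  # True: fist-like versus flat hand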
8.3.4 Signation
Stokoe breaks movement down into distinguishable types, including “gross movements,” which are made with the elbow or shoulder joints and smaller movements using the wrist and/or fingers (Stokoe 2005 [1960]:25). There are also movements that can be described according to the relation between dez and tab. These include descriptors for relative directions and qualities of movement such as “approach, touching, crossing, entrance, joining, and grazing, [ . . . ] separation and interchange.”
Lastly, there are major planes and directional lines in the space in front of the signer that can distinguish one sign from another (ibid.:24). It is not the actual movement that matters, but the ways in which differences in motion result in differences in meaning. Stokoe writes: “The exactitude with which these approximate directions coincide with the coordinates of three dimensional space is immaterial. Polarity is important, and in some signs the opposite direction of sig motion is used to make a pair of antonyms: ‘borrow’ and ‘lend’ differ in sig only, the motion being respectively toward the signer and away. But both directions may combine in the sig of other signs, as in “explain” where the dez moves to and fro” (ibid.).
8.3.5 Morphocheremics
Stokoe argued that there are meaningless elements that combine to produce morphemes in the sign language, and that those processes of sign formation are patterned. He writes:
If every sign in this sign language were simply composed of a tab, a dez, and a sig, the morpheme list of the language could simply be determined by the formula:
no. of tabs × no. of dez × no. of sigs = no. of morphemes
But there are several different patterns of sign formation, not to mention compound signs and contractions: and the language in true linguistic fashion allows certain combinations of elements and not others (Stokoe 2005 [1960]:25).
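The force of this formula is easier to see when made concrete. The following sketch computes the upper bound Stokoe describes; the dez count of 16 is the figure cited above, while the tab and sig counts are hypothetical placeholders rather than Stokoe’s actual inventory:

```python
def max_morphemes(num_tabs: int, num_dez: int, num_sigs: int) -> int:
    """Upper bound on morphemes if every tab-dez-sig combination were permitted."""
    return num_tabs * num_dez * num_sigs

# 16 is the dez count cited above; 12 tabs and 24 sigs are hypothetical.
print(max_morphemes(num_tabs=12, num_dez=16, num_sigs=24))  # 4608
```

Stokoe’s point is that the actual morpheme list falls far short of any such product, precisely because the language permits certain combinations of elements and prohibits others.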
Stokoe did not posit any systematic phonological constraints on VASL, but he did make some preliminary observations, which were pursued by those who wrote after him. For example, the zero tab, he notes, is limited to “the space in front of the signer’s body, where the hand can freely and comfortably move” (Stokoe 2005 [1960]:25). He also suggests that with frequent use, signs shift from a body tab to a zero tab if the resulting sign is “sufficiently distinct in dez and sig from other signs” (ibid.:25). Likewise, he notes the tendency for frequently used two-handed signs to become one-handed (ibid.:27).
Later research addressed many of the topics raised by Stokoe in more depth. One thing that was not carried on beyond him, however, was his terminology. Stokoe established his terms in order to bring out similarities between spoken and signed languages, but at the same time, he was unsure how strong the comparisons were, and therefore felt that distinct but similar terms were necessary. In 1976, when the new edition of The Dictionary of American Sign Language was published, more evidence had accumulated. Some of this evidence suggested that in addition to the analyzability of signs into meaningless elements, the ways in which those elements combine are systematically constrained. If the meaningless elements of spoken and signed languages are constrained in similar ways, Stokoe writes, “the 1960 coinages chereology, chereme, and allocher are no longer needed” (1965:iv). Even for Stokoe, then, these terms fell out of use, and the standard terms used for spoken languages replaced them.
After Stokoe, researchers began to discover constraints on the way meaningless elements were combined in VASL, at which point the phonological system began to look like a series of reductions. For example, Battison (1978) begins with the unrestricted human vocal apparatus. The human body, he says, can make a wide range of sounds of which only a small portion can be recruited for speech (ibid.:20). Phonological constraints act on this limited range of sound to produce a finite set of units. These units are combined in rule-governed ways to yield the allowable morphemes of a specific language, including their alternations when they occur in utterances (ibid.). By analogic extension, the human body can make a wide range of gestures. Phonological constraints in signed languages act on some sub-set of physically possible gestures to produce a finite set of units, which when combined in rule-governed ways, produce the allowable morphemes in a language (as well as their alternations when combined with one another in utterances) (ibid.). These units include handshape, location, and movement, and combine to form signs that are systematically distinguishable from other signs in the language (ibid.:21-3).
In the case of both spoken and signed languages there is a series of reductions enacted in theory as increasingly demanding constraints are imposed on the capacities of the human body. At the outer phonetic limits, capacity is primary. That is to say--there will be no gestural or sonic material admitted into the language that cannot be produced or perceived by the human body. However, the changes that triggered a reconfiguration of the sublexical structure of TASL can only be partially explained by limits on sensory capacity. At least as significant were changes in sensory orientation and embodied sensibilities. These are not matters of capacity, but matters of convention and habituation. Given this, the relevant question is not whether or not DeafBlind people can see or feel the sign-vehicle. The relevant question is whether or not they have access to it, given habitual modes of attention in conventional participant frames and bodily configurations. One of the things that structured access to the sign vehicle among DeafBlind people in Seattle was the emergence of two competing participant frames.
8.4 Participant Frames in the Deictic Field
During the pro-tactile workshops in 2010 and 2011, two competing participant frameworks and their attendant bodily configurations emerged as “basic” (Hanks 1990:148-152): (speaker-addressee) and (speaker-addressees). The first is realized via conventionalized two-person bodily configurations, as in Figure 8.3, and the second is realized via conventionalized three-person bodily configurations, as in Figure 8.4. Each framework exerted different pressures on the production and reception of signs.
Figure 8.3: Two-person Configuration
In Figure 8.4, Adrijana, who is in the middle, is signing no to two interlocutors. In a three-person configuration like this, all signs must be duplicated, so there is one copy for each addressee (See Figure 8.5). In the case of no, duplication is straightforward, because in VASL, this is a one-handed sign (See Figure 8.6).9
Figure 8.4: Three-Person Configuration
Figure 8.5: Duplicated One-Handed Sign
Figure 8.6: no in VASL
However, in the case of two-handed signs, production is more complicated. There are three types of two-handed signs in VASL and two types of one-handed signs. Each sign type is defined as follows (Battison 1978:28-9):
Type 0: One handed signs articulated in free space without contact (e.g. preach as in Figure 8.7).
Type X: One handed signs which contact the body in any place except the opposite hand (e.g. apple as in Figure 8.8).
Type 1: Two handed signs in which both hands are active and perform identical motor acts; the hands may or may not contact each other, they may or may not contact the body, and they may be in either a synchronous or alternative pattern of movement (e.g. which as in Figure 8.9).
Type 2: Two-handed signs in which one hand is active and one hand is passive, but both hands are specified for the same handshape (e.g. name as in Figure 8.11).
Type 3: Two-handed signs in which one hand is active and one hand is passive, and the two hands have different handshapes (e.g. discuss as in Figure 8.10).
Type C: Compounds which combine two or more of the above types.
Figure 8.7: Type 0 Sign preach in VASL
Figure 8.8: Type X Sign apple in VASL
Figure 8.9: Type 1 Sign which in VASL (movement is alternating)
Figure 8.10: Type 3 Sign discuss in VASL
Figure 8.11: Type 2 Sign name in VASL
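Leaving compounds (Type C) aside, this taxonomy amounts to a simple decision procedure. The sketch below is my own schematic rendering of Battison’s categories, not his notation; the feature names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class SignForm:
    two_handed: bool
    both_hands_active: bool = False  # only meaningful for two-handed signs
    same_handshape: bool = False     # only meaningful for two-handed signs
    contacts_body: bool = False      # used here only for one-handed signs

def battison_type(s: SignForm) -> str:
    """Assign one of Battison's non-compound sign types (0, X, 1, 2, 3)."""
    if not s.two_handed:
        return "Type X" if s.contacts_body else "Type 0"
    if s.both_hands_active:
        return "Type 1"  # both hands perform identical motor acts
    return "Type 2" if s.same_handshape else "Type 3"

print(battison_type(SignForm(two_handed=False)))                      # Type 0, e.g. preach
print(battison_type(SignForm(two_handed=True, same_handshape=True)))  # Type 2, e.g. name
```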
The interaction of the two manual articulators in all VASL signs is constrained at the sub-lexical level (e.g. van der Hulst 1996, Sandler 1993, Eccarius and Brentari 2007, Morgan and Mayberry 2012, Stokoe 1960, Battison 1978, Channon 2004, Napoli and Wu 2003). New, and importantly conventional, participant frameworks among DeafBlind people are exerting pressure on the way the manual articulators interact, and therefore on this level of grammatical organization.
In particular, the role of the non-dominant hand is changing in three-person configurations. While in VASL, the hands work in tandem to produce two-handed signs, in TASL, each hand must produce an independently meaningful sign: one for each addressee. Therefore, the reconfiguration of basic participant frameworks is leading to language-internal changes.
8.5 Sublexical Constraints on Two-Handed Signs in VASL
In comparing the sublexical structure of spoken and signed languages, Battison points out that the bilateral symmetry of the body (two arms, two hands, two sets of fingers, and so on) is imperfect from the perspective of the signer (Battison 1978:26). One side of the body is always more dominant than the other. Battison writes that “this opposition between potential visual symmetry and the actual manual asymmetry of the body creates a dynamic tension of great importance for the formational organization of signs” (ibid.:26). In order to capture some of the formal consequences of this fact, Battison provides several terms.
Like Stokoe, he rejects the terms “left” and “right” because the left or right handed production of a sign is non-distinctive in ASL. The first set of terms used in place of “left” and “right” are “dominant” (the hand preferred for most motor tasks) and “non-dominant” (the other hand) (ibid.:27). The second set of terms is “active” and “passive,” which together describe the roles taken by either the dominant or non-dominant hand in the production of a given sign. The active hand is the hand in motion, while the passive hand is the hand that does not move, or moves very little relative to the active hand. In other words, “The active hand has a much larger role and executes a more complex motor program than its passive partner, which can be absolutely stationary” (ibid.). Despite noted exceptions (Battison 1974; Klima and Bellugi 1975; Frishberg 1976b [cited in Battison 1978:27]), Battison argues that the dominant hand tends to assume the active role, while the non-dominant hand tends to assume the passive role (ibid.).
In describing the orientation and location of the hands relative to the body, the same issue of left/right arises, and another pair of terms is proposed. For signs that make contact with the same side of the body with respect to the active hand, the term “ipsilateral” is used. For signs that make contact with the opposite side of the body with respect to the active hand, the term “contralateral” is used. Battison’s examples are the pledge of allegiance and a military salute. In the first, the dominant hand contacts the contralateral breast (ibid.:28). In the second, the dominant hand contacts the ipsilateral forehead.
8.5.1 Symmetry and Dominance Conditions
For the sub-set of signs that are produced using two hands, Battison proposes two phonological constraints, which are interlocking--the Symmetry Condition and the Dominance Condition.
The Symmetry Condition states that (a) if both hands of a sign move independently during its articulation, then (b) both hands must be specified for the same location, the same handshape, the same movement (whether performed simultaneously or in alternation), and the specifications for orientation must be either symmetrical or identical (Battison 1978:34).
The Dominance Condition states that (a) if the hands of a two-handed sign do not share the same specification for handshape (i.e. they are different), then (b) one hand must be passive while the active hand articulates the movement, and (c) the specification of the passive handshape is restricted to be one of a small set: a, s, b, 5, g, c, o. [ . . . ] Type 3 signs obey this constraint with very few exceptions (Battison 1978:35).
These handshapes that occur on the passive side of two-handed signs are unmarked in two respects. In terms of both articulation and perception, they are maximally distinct and geometrically basic:
a and s are closed and maximally compact solids; b is a simple planar surface; 5 is the maximal extension and spreading of all projections; g is a single projection from a solid, the most linear; c is an arc; o is a full circle (Battison 1978:36).
Battison argues that these handshapes are unmarked phonologically as well, since they appear very frequently and in many contexts in VASL, they were present in all signed languages that had been described when Battison was writing, and they are the first handshapes mastered by deaf children learning VASL (Battison 1978:37).10
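Stated procedurally, the two conditions amount to a pair of well-formedness checks on the specification of a two-handed sign. The following sketch is an illustrative rendering, not Battison’s formalism; the feature encoding is mine:

```python
UNMARKED = {"a", "s", "b", "5", "g", "c", "o"}  # permitted passive handshapes

def symmetry_ok(both_hands_move: bool, same_location: bool,
                same_handshape: bool, same_movement: bool) -> bool:
    """Symmetry Condition: if both hands move, their specifications must match."""
    if not both_hands_move:
        return True  # the condition only constrains signs in which both hands move
    return same_location and same_handshape and same_movement

def dominance_ok(same_handshape: bool, passive_hand_static: bool,
                 passive_handshape: str) -> bool:
    """Dominance Condition: differing handshapes require a passive, unmarked hand."""
    if same_handshape:
        return True  # the condition only constrains signs with differing handshapes
    return passive_hand_static and passive_handshape in UNMARKED

# A Type 3 sign like discuss: the active hand moves against a passive b-hand.
print(dominance_ok(same_handshape=False, passive_hand_static=True,
                   passive_handshape="b"))  # True
```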
8.5.2 Weak Drop in VASL
Two-handed signs in VASL can undergo a phonological process called “weak drop” (Padden and Perlmutter 1987), in which the non-dominant, or “weak,” hand drops out and a one-handed variant is expressed. However, this process is constrained. First, in VASL, alternating signs do not undergo weak drop (Padden and Perlmutter 1987:350). Second, once a sign has undergone weak drop, it cannot undergo certain morphological processes (such as compounding) (Sandler 1993:347-353) and certain forms of inflection (Padden and Perlmutter 1987:367-8). Third, two-handed variants are basic, while one-handed variants are not (Padden and Perlmutter 1987:351). If the two-handed variant were to disappear in the underlying representation, or be replaced by the one-handed variant, distinctions between minimal pairs in VASL would be obscured (ibid.).
8.6 Changes in Sign Production
In order to understand how new participant frames are affecting the sublexical structure of TASL, I located signs that, in VASL, would fit each of Battison’s two-handed categories (Type I, Type II, and Type III). I then documented how their production and reception changed when instantiated in a tactile field. For each of these sign types, three sets of data were collected. Set 1 includes signs produced by people who had had minimal exposure to pro-tactile practices. This set was taken from the first few weeks of the pro-tactile workshops. Set 2 includes signs produced by people who had attended 2 1/2 weeks or more of the pro-tactile workshops. Set 3 includes signs produced by the instructors of the workshops, who had been engaged in developing pro-tactile practices for about four years at the time of the workshops.
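The coding can be thought of as a small relational schema. The sketch below reconstructs it; the field names are mine, and the code labels anticipate the categories introduced in the sections that follow:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Token:
    data_set: int  # 1 = minimal exposure; 2 = 2.5+ weeks; 3 = instructors
    signer: str
    sign: str      # gloss of the sign as it would appear in VASL
    code: str      # e.g. "no change", "ipsi", "sync", "alt", "drop"

def distribution(tokens: list[Token], data_set: int) -> dict[str, float]:
    """Percentage of tokens carrying each code within one data set."""
    codes = [t.code for t in tokens if t.data_set == data_set]
    return {code: 100 * n / len(codes) for code, n in Counter(codes).items()}
```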
In this section, I argue that constraints on symmetry in two-handed signs are growing more demanding as a result of deictic integration. This is leading to a reduction in formational complexity, when compared to VASL lexical signs. In the next chapter, I show how this reduction in complexity is complemented by an increase in formational complexity in poly-componential signs. This redistribution of complexity across grammatical subsystems is evidence that the tactile and visual systems are undergoing a grammatical divergence.
8.6.1 Type I Signs
Type I VASL signs are defined by Battison (1978:28-9) as follows:
Two handed signs in which both hands are active and perform identical motor acts; the hands may or may not contact each other, they may or may not contact the body, and they may be in either a synchronous or alternative pattern of movement (which, car, restrain-feelings).
In a tactile field, the aim of the signer in a three-person configuration is to produce a perfectly duplicated message so there is one copy for each addressee. Given this aim, Type I signs should change the least, since the motor activity of each hand is, by definition, already identical in VASL. However, there are two features of this sign type that consistently changed over the course of the workshops. First, in VASL, the movement of the two articulators can be alternating rather than synchronous (as in which). As the workshops progressed, there were more and more instances where alternating movement would be expected in VASL, but synchronous movement was produced instead. This was coded as “sync.”
Second, this type of sign can contact the ipsilateral, contralateral, or mid-line body. It can also be produced in neutral signing space but in alignment with the ipsilateral or contralateral body. In the workshops, there was a trend toward ipsilateral contact or alignment where contralateral or central contact or alignment would be expected. Also, the orientation of the hands tended to shift so that, instead of the hand extending from contralateral alignment to ipsilateral alignment (as in the VASL sign now), the hand rotated so that it extended forward, away from the ipsilateral body, on both sides.
Lastly, in some signs, such as the two-handed version of inform-you, one hand may contact the body, while the other hand does not, despite the fact that the motor activity of the articulators is the same. These signs tended to change, so that both hands made contact with the ipsilateral body. All of these cases were coded as “ipsi.” The signs that did not change were coded as “no change.” In Figure 8.12, the percentage of signs in each data set that diverged from what would be expected in VASL is represented.
Figure 8.12: Changes in Type I Signs
For signers with little exposure to pro-tactile practices, almost 100% of signs were produced as one would expect in VASL. As exposure increased, greater percentages of signs diverged from VASL. This trend is represented by the line labeled “NO CHANGE” in Figure 8.12. The line labeled “IPSI” represents Type I signs where contralateral or central contact or alignment would be expected in VASL, but ipsilateral contact or alignment occurred instead. As is shown, ipsilateral contact or alignment became increasingly common as exposure increased. Lastly, the line labeled “SYNC” represents signs where alternating motion would be expected in VASL and synchronous motion occurred instead. Again, the percentage of signs in the data set where this change occurred increased steadily with exposure.
Type I Signs (Set 1)
In the first set,11 as shown in Figure 8.12, there was very little divergence from VASL. This sign type is maximally symmetric compared to the other two-handed sign types, so there are few asymmetries in access for the two addressees. However, there were some issues that arose. There are near-minimal pairs in VASL that become minimal pairs in a tactile field. For example, the signs culture and class differ in two respects, but in a three-person configuration, only one of these is perceptible. culture is produced with the active hand in a c-handshape. The passive hand is in a g-handshape, which functions as a place of articulation (as opposed to an active articulator). In a three-person configuration, the passive hand tends to duplicate the handshape of the active hand (See section 8.6.4). If this occurs, the resulting sign culture is indistinguishable from class.12 The same ambiguity arises in the two-person configuration if the addressee is using one-handed reception. In both cases, the distinction between the two meanings is either not signaled formally in the language or not accessible, so an inferential process is required.
Another source of ambiguity is alternating vs. synchronous movement of the two hands. This distinction is no longer perceptible with access to only one of the signer’s articulators. For example, at the beginning of the pro-tactile workshops, the participants used a modified version of the VASL sign sign to describe the duplicate signing they were doing in three-person configurations. In VASL, sign would be produced with both hands in a g-configuration and the movement of each hand would be alternating. In order to describe duplicate signing, the movement was made synchronous. The resulting sign reflected the meta-linguistic observation that in duplicate signing, symmetry is maximized. Ironically, the difference in meaning signaled by alternating vs. synchronous movement was not perceptible in the configurations it was meant to describe. Within a couple of weeks, the sign changed to sign same-time, where sign was once again alternating as in VASL. Although the participants of the workshops did not orient to these problems in any observable way, these issues foreshadowed changes that manifested in Set 2 and Set 3.
Type I Signs (Set 2)
In this set,13 there was an overall shift toward greater synchronization and symmetry between the two articulators. In Figure 8.12, an increase in signs produced with ipsilateral contact and an increase in signs produced with synchronous movement are represented. There were also instances where the signer started with an alternating sign and, mid-sign, altered it so that it was or could be synchronous. In one case, the signer started to articulate dialogue, produced with two g-configurations alternating at the chin. Before he completed the sign, however, he switched to the VASL Type 0 sign talk and duplicated it.
This kind of repair happened not only with the replacement of one sign type with another, but also with the production of particular signs. In these cases, phonological features were replaced and the sign itself was changed. For example, in VASL, eat is a one-handed sign. Inflected for progressive aspect, it becomes a two-handed, alternating sign. When this sign occurred in a three-person configuration, the signer started out alternating, and then, mid-sign, her hands fell into alignment and the movements became synchronous. Signs that occurred more frequently, like people, began to be predictably produced with synchronous rather than alternating movement.
In signs that make contact with the signer’s body, two patterns were observed. First, a preference for ipsilateral contact over contralateral contact emerged, as did a preference for horizontal symmetry over vertical symmetry. These tendencies led to changes in where signs were produced. For example, the VASL sign enjoy is produced with both hands in a 5/b-configuration, stacked vertically on the mid-line of the signer’s chest. In a three-person configuration, the place of articulation shifted, so the hands were horizontally aligned and both made contact with the ipsilateral chest. The same shift from vertically aligned mid-line contact to horizontally aligned ipsilateral contact occurred with the sign happy. Another example is ask as in “request,” which is produced with two hands in a 5/b-configuration. The hands make contact with one another at the mid-line. This sign occurred three times in this data set. In one of these cases, there was no contact between the hands, and rather than being aligned with the vertical mid-line of the signer’s body, both hands moved toward ipsilateral alignment. In VASL, information is symmetrical, except that the dominant hand contacts the forehead and the non-dominant hand does not. In this data set, information occurred twice. Once, it was produced in the same way one would expect in VASL. The second time, both hands contacted the ipsilateral forehead, increasing symmetry.
Type I Signs (Set 3)
Among the instructors of the workshops, the same patterns held. For example, the VASL sign body is produced with two hands--one stacked vertically above the other on the mid-line of the signer’s chest. In this data set, it was produced with the hands aligned horizontally, each one making contact with the ipsilateral chest, rather than the mid-line. Likewise, the sign interesting is produced in VASL with the hands in vertical alignment with one another on the mid-line of the signer’s chest. In this data set it is produced with horizontal alignment, both hands contacting the ipsilateral chest. explain is sometimes signed with alternating movement (as in VASL) and sometimes with synchronous movement. The VASL sign enjoy, like body, involves two hands, vertically stacked on the mid-line of the signer’s chest in VASL. In this data set, it is produced with horizontal alignment, both hands contacting the ipsilateral chest. people is signed in this data set with synchronous movement, where in VASL, it would be signed with alternating movement. communicate is produced with alternating movement in VASL, but in this set it is produced with synchronous movement.
One additional issue that was raised in this data set was the degradation of iconic relations that can sometimes result from changes in production. Consider, for example, the sign replace. In VASL, this sign represents the idea of replacement with two f-handshapes. Via alternating movement, one f-handshape “replaces” the other. As with the other Type I signs, this sign moves from alternating to synchronous movement, and both hands move further toward ipsilateral alignment with the signer’s chest. In the resulting sign, iconic links to the activity of replacement are severed.
8.6.2 Type II Signs
Type II Signs are defined by Battison as follows:
Two-handed signs in which one hand is active and one hand is passive, but both hands are specified for the same handshape (name, short/brief, sit/chair).
Type II signs present more of a challenge than any other sign type to the signer in a three-person configuration. The aim is to duplicate the message so there is one copy for each addressee. Type II signs are symmetrical in terms of hand configuration, but potentially asymmetrical in all other respects. The passive hand often acts as a place of articulation for the active hand (as in sit). In Type 0 signs, the two hands are maximally asymmetrical, since one hand is not used at all. These signs were easily duplicated by signers in a three-person configuration. On the other end of the spectrum, Type I signs are almost symmetrical, and duplicating them required minimal adjustment. Type II signs are a mixture of symmetrical and asymmetrical. When sublexical constraints on the formation of this sign type were integrated with deictic constraints on three-person communication, new regularities in sign formation emerged.
8.6.3 Changes in Type II Signs
Figure 8.13: Changes in Type II Signs
Type II Signs (Set 1)
Participants in the early weeks of the workshops often failed to duplicate this entire category of signs. Out of 74 tokens, 58% were not duplicated. This meant that one of the addressees did not have access to these signs, except via the non-dominant hand. If the addressee on the non-dominant side noticed, they intervened. It was not always clear to them what was happening, though, and participants were not especially reflective about mistakes until later in the workshops. After being reminded many times, signers started pausing awkwardly when they encountered Type II signs, but usually moved on without executing any kind of repair.
Where duplication was attempted, there were two possibilities for how the signs changed. The first possibility was that the signer would duplicate the sign sequentially, the dominant hand playing the active role first and then the non-dominant hand (or in some cases vice versa). This was coded as “sequential alternation,” shortened to “alternate” or “alt.” There were 16 instances in this set (about 22% of tokens were alternated). The second possibility was that the non-dominant hand would be dropped altogether. This was coded as “non-dominant dropped,” shortened to “drop.” There were 14 instances of dropping in this set (about 19% of tokens were dropped).
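These proportions follow directly from the raw counts; the one-point divergence from the reported 58% is presumably a matter of rounding:

```python
total, alternated, dropped = 74, 16, 14        # counts reported above
not_duplicated = total - alternated - dropped  # 44 tokens

print(round(100 * alternated / total))      # 22 ("about 22%")
print(round(100 * dropped / total))         # 19 ("about 19%")
print(round(100 * not_duplicated / total))  # 59 (reported as 58%)
```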
Type II Signs (Set 2)
In the second set, there were far fewer cases where the signs were simply not duplicated.14 Earlier on in this set, most Type II signs were duplicated sequentially. In the first production, the dominant hand played the active role and the non-dominant hand took on the passive role (or vice versa) and in the second production, the roles were reversed. As the workshops went on, there was an increasing tendency to drop the non-dominant hand altogether, duplicating the active hand’s role with the dominant and non-dominant hand simultaneously. Of the first 24 tokens in this set, only two dropped. Of the last 27 tokens of this set, 12 dropped. The tendency toward more dropping continued to increase.
Type II Signs (Set 3)
Among the instructors, dropping was still more common. Out of 66 tokens produced by the instructors, 39% were alternated and 42% were dropped. The remaining tokens were not duplicated.15 There was one sign in this set that was changed further. The VASL sign interrupt is signed with a b/5 passive hand and an active b/5 hand contacting the passive hand at the web between the thumb and the index finger. In the instantiation of interrupt in this data set, the passive hand was dropped and the active hand was duplicated.
8.6.4 Type III Signs
Type III signs are defined as follows by Battison:
Two-handed signs in which one hand is active and one hand is passive, and the two hands have different handshapes. Note that signs which were excluded specifically in type X fit into types 2 and 3--one hand contacts the other (discuss, contact (a person)).
Type III signs are very similar to Type 0 (one-handed) signs, with two exceptions. First, the place of articulation is the non-dominant hand rather than the body of the signer or neutral space. Second, this type of sign almost always obeys the dominance constraint, so the configuration of the non-dominant hand is restricted to one of the following unmarked handshapes: a, s, b, 5, g, c, o.
8.6.5 Changes in Type III Signs
Figure 8.14: Changes in Type III Signs
When Type III signs were embedded in a tactile field, they were reconfigured in much the same way that Type II signs were reconfigured (See Figure 8.14.). With less exposure to pro-tactile practices (Set 1), signers tended to produce these signs as they would be produced in VASL. Set 1 included 61 tokens produced by 10 signers. 46% were produced as one would expect in VASL, 23% were alternated, and 30% were dropped. As exposure to pro-tactile practices increased (Set 2), signers tended to alternate the dominant/non-dominant configuration of the sign. Set 2 included 51 tokens produced by 6 signers. 25% were produced as one would expect in VASL, 51% were alternated, and 24% were dropped. Among the instructors, who had the most exposure to pro-tactile practices (Set 3), Type III signs were produced most often by dropping the non-dominant hand altogether, which was coded as “drop.” Set 3 included 39 tokens produced by 2 signers (the instructors). 0% were produced as one would expect in VASL. 47% were alternated, and 51% were dropped.
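Restated as data, the trend across the three sets is easy to see. The percentages are those reported above; the sums show that deviations from 100% are presumably artifacts of rounding:

```python
type_iii = {  # set: (tokens, signers, % as in VASL, % alternated, % dropped)
    1: (61, 10, 46, 23, 30),
    2: (51, 6, 25, 51, 24),
    3: (39, 2, 0, 47, 51),
}
for s, (tokens, signers, vasl, alt, drop) in type_iii.items():
    print(s, vasl + alt + drop)  # 99, 100, 98
```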
This tendency toward dropping the non-dominant hand was also visible in patterns of self-repair. There are two instances in the data where a signer starts out alternating and part way through drops the non-dominant hand instead, or alternates the sign and then immediately repeats the sign, dropping the non-dominant hand instead. There are no instances where the signer starts out dropping the non-dominant hand and then switches to alternation. This is further evidence that the system is losing an articulator for purposes of lexical production in Type II and Type III signs. These changes have implications for sublexical constraints on two-handed signs in VASL, including constraints on symmetry across the two manual articulators and constraints on “weak drop.”
8.7 Implications for Sublexical Constraints in TASL
Since the pro-tactile movement took root in Seattle in 2006, basic participant frameworks have shifted, and as a result, the production and perception of two-handed signs have changed. In this section, I show how these changes are causing a reconfiguration in sign types as well as changes in constraints on symmetry and on weak drop.
8.7.1 Symmetry
In a three-person configuration, from the perspective of the signer, Type 0 signs, which are “articulated in free space without contact” (Battison 1978:28), become Type 1 signs, which are “two-handed signs in which both hands are active and perform identical motor acts; the hands may or may not contact each other, they may or may not contact the body,” except that the following portion of the definition of that sign type does not hold: “[the hands] may be in either a synchronous or alternative pattern of movement” (ibid.).
In a three-person configuration, signs tend toward synchronous movement and away from alternating movement. Type X signs in VASL, or “one-handed signs which contact the body any place except the opposite hand” (Battison 1978:28), become Type 1 signs, which are “two-handed signs in which both hands are active and perform identical motor acts.” However, as with all other Type 1 signs in TASL, they tend to be produced with ipsilateral contact or alignment with the body of the signer, where contralateral or mid-line contact or alignment would be expected in VASL. In addition, synchronous movement is preferred to alternating movement. This means that in TASL, Type 0, Type X, and Type 1 signs are collapsed into a single category, all of which are under more demanding symmetry constraints than their corresponding category (Type 1 signs) in VASL.
For example, in Figure 8.15, the VASL sign fine (Figure 8.15a) is duplicated (Figure 8.15b). Contact with the signer’s body moves from the mid-line to ipsilateral contact on both sides. The long line in the middle is an approximation of the mid-line on the signer’s body, and the two shorter lines on either side show the approximate point where the signer’s thumbs contact his chest. In both cases--where synchronous movement is replacing alternating movement, and where ipsilateral contact is replacing mid-line contact--constraints on symmetry are becoming more demanding. The two articulators must be perfectly identical or motorically symmetrical in every respect. Type II and Type III signs are also collapsed into this category when duplicated, since they too must be perfectly symmetrical. Perfect symmetry is achieved by dropping the non-dominant hand and transforming it into a second active hand.
Figure 8.15: fine duplicated with ipsilateral contact ((a) VASL fine; (b) fine duplicated)
8.7.2 Complexity
This collapse of all sign-types into one allows TASL signers to produce two-handed signs that are maximally redundant, thereby enabling them to address two people at the same time. Given this communicative aim, the two manual articulators no longer work in tandem as they do in VASL. Rather, they produce identical copies of a single sign (symmetry is maximized). In Battison’s terms, this maximization of symmetry constitutes a minimization of formational “complexity,” which Morgan and Mayberry succinctly capture: “A two-handed sign that shares all phonological aspects is the most redundant and therefore least complex [ . . . . ] Increasing mismatches (departures from symmetry between the two hands) in each of these aspects create more complexity” (Morgan and Mayberry 2012:148).
8.7.3 Place of Articulation Features in TASL
From the analyst’s perspective, there appears to be a shift from the mid-line toward ipsilateral contact. However, from the perspective of TASL signers, the signing space itself may have been halved. Under this analysis, the two shorter “ipsilateral” lines marked in Figure 8.15b represent duplicated mid-lines and the larger line in the same figure would be the boundary between the first and second signing space. This suggests a reconfiguration of constraints on signing space (and therefore the distribution of places of articulation) for the production of lexical signs16 in three-person configurations.
Insofar as phonological distinctions within the reduced signing space dissolve, and perceptual ambiguity increases, distinctive locations can be expected to be redistributed as the system develops further. Indeed, this is already occurring. As we will see in Chapter 9, signing space is extended in the production of polycomponential signs to incorporate places of articulation on the body of the addressee.
8.7.4 Weak Drop
In addition, the constraints on “weak drop” (Padden and Perlmutter 1987), where the non-dominant, or “weak,” hand in two-handed signs drops out and a one-handed variant is expressed, are changing. Weak drop in TASL violates constraints imposed by the grammar of VASL. For example, alternating signs in VASL do not undergo weak drop (Padden and Perlmutter 1987:350). In TASL, they do. In addition, minimal pairs become indistinguishable (as was discussed previously) if the distinction between one-handed and two-handed signs is collapsed. Padden and Perlmutter use the example of interesting and like; the former is a two-handed sign while the latter is a one-handed sign. In all other respects the two signs are identical (1987:351). Finally, morphological processes in VASL require both manual articulators (see Sandler 1993:347-353). Given a one-handed system, these processes must be accomplished some other way. Therefore, while it is true that the non-dominant hand is, in many cases, optional, this is not the case for all classes of two-handed signs, and it is not true when morphological processes like compounding are in play.
In TASL, communication pressures are leading to decreased formational complexity in two-handed signs, and constraints on weak drop are being relaxed. This is leading to ambiguities, which DeafBlind people are resolving in novel ways. These strategies and their implications for the ongoing grammatical divergence between TASL and VASL are discussed in the following chapter.
8.7.5 Which Participant Framework is Basic?
The analysis presented thus far relies on the assumption that the three-person configuration is, in fact, a basic-level participant framework. In order to determine whether this is the case, I examined the production of one-handed signs in two-person configurations. According to strictly pragmatic constraints, one-handed signs would only need to be duplicated in three-person configurations. If they are duplicated in two-person configurations, this would suggest that the motoric patterns shaped by the habituation of signers to three-person configurations are spreading to the linguistic system proper. It would also suggest that the changes in constraints discussed in the previous section will continue and the visual and tactile systems will continue to diverge.
In a two-person configuration, reception tends to be one-handed, and in three-person configurations, reception is necessarily one-handed. In Figure 8.16, the woman in the middle is signing the number three to two addressees. I have outlined the addressee on the right in the image. Her right hand is receiving the sign tactually, while her left hand is in contact with the second addressee. The duplicated three is being received by the other addressee’s left hand. Backchanneling cues produced by the addressees are duplicated so that both the signer and the second addressee have access to them. This configuration also works to maintain co-presence between all three participants.
For one-handed signs in VASL, this three-person configuration requires the signer to duplicate the sign so there is one copy for each addressee. In a two-person configuration, there are two possibilities for the production of one-handed signs. They can be produced as they would be in VASL, or they can be duplicated, as they would be in a three-person configuration such as that pictured in Figure 8.16. If the second articulator is being used as it would be in three-person configurations, this is evidence that signers are becoming habituated to a different configuration of articulators, initiated by changes in the deictic field, but consequential for the sublexical structure of TASL.
Figure 8.16: One-handed Reception in 3-person configuration
Although it is quite early in the emergence of this new system, something like this would surely be necessary, since languages generally do not vary the complexity of the articulatory apparatus as the number of addressees changes. The lower-level cognition required to produce signs within the phonological parameters of a particular language should recede into the liminal zones of a speaker’s consciousness so that cognitive and motoric resources can be freed up for other communicative tasks. If DeafBlind people duplicate one-handed signs regardless of whether they are in a two- or three-person configuration, the formal composition of signs remains constant, and sign production does not require the coordination of higher- and lower-level cognitive resources.
In order to find out whether or not signers were duplicating one-handed signs in two-person configurations, I selected stretches of interaction where two-person configurations were in use and coded all of the one-handed signs used therein for ± duplication, the name of the signer, and the sign being used.
I collected three sets of data. Set 1 was produced by signers who were in their first couple of weeks in the pro-tactile workshops, and therefore, had had very little exposure to pro-tactile practices. Set 2 was produced by signers who were in their last few weeks of the workshops, and therefore had had more exposure to pro-tactile practices. Set 3 was produced by the instructors, who had been developing pro-tactile practices for several years before the workshops.
For Type 0 signs, signers who had had very little exposure to pro-tactile practices (Set 1) did not duplicate one-handed VASL signs in two-person configurations. Out of 40 tokens produced by 3 signers, 0% were duplicated. After a few weeks of exposure, duplication of one-handed signs increased dramatically. Out of 49 tokens produced by 5 signers, 35% were duplicated. Among the instructors, who had been developing pro-tactile practices for years, the rates for duplication were significantly higher than the rates for Set 1, but they fell below those recorded for Set 2. Out of 43 tokens produced by 2 signers, 12% were duplicated (See Figure 8.17).
Figure 8.17: Type 0 Signs in Two-Person Configurations
Figure 8.18: Type X Signs in Two-Person Configurations
After finding this pattern in Type 0 signs, I expected to find a similar pattern in Type X signs. However, rates for duplication increased among the instructors for Type X signs relative to Set 2 (See Figure 8.18). In Set 1, out of 47 tokens produced by 4 signers, 2% were duplicated. In Set 2, out of 53 tokens produced by 7 signers, 11% were duplicated. Among the instructors, 42 tokens were produced and 24% were duplicated. On the one hand, these results indicate a clear increase in duplication of one-handed signs in two-person configurations. On the other hand, the results for Set 3 in each sign type suggest conflicting projections for the development of TASL. In Set 1 for both sign types, signers are new to the workshops and are therefore very likely to be communicating in ways they would have communicated outside of the workshops. Given the data for Type 0 signs only, it seems possible that duplication increases in the learning phase, when signing in three-person configurations is still far from automatic. As the interactional patterns become naturalized, signers can switch more fluently between duplication and non-duplication in three- and two-person configurations respectively. However, the data for Type X signs suggest instead that signers will continue to duplicate one-handed signs in two-person configurations.
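The conflicting projections are visible when the two trajectories are set side by side. The figures below are the percentages reported above, restated as data:

```python
percent_duplicated = {  # per data set, from Figures 8.17 and 8.18
    "Type 0": {1: 0, 2: 35, 3: 12},
    "Type X": {1: 2, 2: 11, 3: 24},
}
for sign_type, by_set in percent_duplicated.items():
    rising = by_set[1] <= by_set[2] <= by_set[3]
    print(sign_type, "monotonic rise" if rising else "peak in Set 2")
```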
More research will be necessary to resolve this discrepancy, as I was unable to find any patterns external to the data set that could explain these findings. I considered the difference in sign type--Type 0 signs do not make contact with the body of the signer, while Type X signs do. However, there is no reason why this difference should be so significant for duplication. Second, I looked into the semantics of the signs that were used, but could find no relevant pattern. The only significant external factor I found was that one signer duplicated one-handed signs more than all other signers. This signer had less experience using tactile reception prior to the workshops than others. She also had a physical problem affecting her tendons and joints during the workshops, so her mobility was slightly restricted. This suggests that increased cognitive and motoric demands lead to less variation in the production of signs across participant frameworks.
During the pro-tactile workshops, many new practices were introduced and signers had to make previously automatized processes of production and reception the focus of attention. This put more strain on cognitive and motoric coordination. It is possible that as cognitive and motor demands increase, signers will intuitively reduce variation in sign production. Since three-person configurations require a system that operates on a single manual articulator (which can be duplicated or not), and two-person configurations are more flexible, the former is more likely to become the default. Therefore, it is possible that TASL, as it develops, will provide phonological specifications for only one manual articulator, and the second articulator will optionally produce an identical copy. If this occurs, then the phonological system is losing the non-dominant hand as a place of articulation and a resource for marking phonological distinctions. This prediction is consistent with changes already taking place in the formation of polycomponential signs. Instead of using the non-dominant hand of the signer as a place of articulation, the hand and other areas of the body of the addressee are being recruited as places of articulation.
8.8 Effects of Deictic Integration on Sublexical Structure
The reconfiguration of the deictic field of TASL has led to the emergence of two competing basic participant frameworks among DeafBlind people in Seattle. One framework incorporates three participants, while the other incorporates two participants. In three-person configurations, signs must be duplicated so that one copy is produced for each addressee. The integration of the deictic field, which contains these structures, with the language is putting pressure on the sublexical structure of TASL. From the addressee’s perspective, the language is moving from a two-handed to a one-handed system. From the signer’s perspective, more demanding constraints on symmetry are imposed on two-handed signs. Deictic integration is also pushing the phonological process of “weak drop” beyond what the grammar of VASL allows. As a result, ambiguities arise often, which are difficult for DeafBlind people to resolve in interaction.
These changes mark the second moment in the divergence of TASL and VASL. In the next chapter, I show how DeafBlind participants are resolving ambiguities that arise from the loss of complexity in lexical signs by recruiting the hands and arms of the addressee as places of articulation and as articulators. I discuss the implications of these changes for further grammatical divergence between TASL and VASL.
Chapter 9
Formational Constraints on Complex Signs in TASL
9.1 Introduction
This chapter analyzes changes in formational constraints on signs known as “classifier constructions.” These constructions can be distinguished from lexical signs in at least two respects. First, they tend to encode meanings that are more complex than the meanings associated with lexical signs, and second, they tend to incorporate both linguistic and non-linguistic elements (Edwards 2012, Liddell 2003, Schembri 2003, Morgan and Woll 2007, among others).
In TASL, these signs are not produced on the body of the signer or in the space in front of the signer as they are in VASL, but rather, on the body of the addressee. This change is rooted in a broader shift in how DeafBlind participants orient to and access their environment. Prior to the pro-tactile movement, visual access was assumed. Individuals who could no longer communicate in ways that were normative for sighted people were expected to compensate in whatever way would be most effective for them, such as making adjustments in how signs were received and relying on sighted interpreters to relay information. Since the inception of the pro-tactile movement, reciprocal, tactile communication is becoming the norm instead. Everyone, whether they are sighted, partially sighted, or blind, is now expected to produce and receive signs in a reciprocal tactile channel.
This shift has led to a reconfiguration of figure/ground relations in the immediate environment, so that a tactually accessible ground is required for individuating objects, whether talk about those objects is involved or not. Linguistic signs are increasingly caught up in this pattern, since they, too, have to be individuated, or rather differentiated, against an accessible ground. Therefore, rather than being produced on and around the body of the signer, new TASL signs are often produced on the body of the addressee, where relative spatial locations can be easily perceived.
This process is not linguistic. However, as signs are transposed onto the body of the addressee, signers encounter new motor-perceptual affordances and limitations for producing and receiving signs, and a divergence in the visual and tactile systems appears. For example, the amount of surface area in a given region of the addressee’s body will limit the number of distinct locational targets allowed in that region. While several locations on the palm of the addressee can easily be kept distinct, only one location can be marked on the tip of the addressee’s finger. I argue that differences like this will, over time, give rise to a new set of constraints on the production and reception of TASL signs.
Classifier constructions are deictics in the sense that they integrate characterizing elements that are retrieved from the linguistic system, with deictic elements that are retrieved from the deictic field (see chapter 7). Over time, patterns in retrieval are coordinated in tighter and more restricted ways and language-internal relations adjust to accommodate these restrictions. This is what I am calling deictic integration. The focus of this chapter is the effect of deictic integration on formational constraints in polycomponential signs.
Interactional mechanisms that are driving this process include signal transposition, sign calibration, and sign creation.1 Signal transposition involves the transposition of handshapes onto the body of the addressee, yielding a tactually accessible ground. This process has implications for formational constraints, but is driven by the coordination of the linguistic system and the deictic field. Sign calibration is an interactional process through which participants clarify and adjust signs which have lost their capacity to refer to objects in the immediate environment. This process, in turn, led to the creation of novel forms that would not be predicted given the grammar of VASL. I call this process sign creation. In this chapter, I show how these processes are leading to divergent constraints on the formation of “classifier constructions.”
In section 9.2, I provide a brief introduction to classifier constructions in VASL, which, I argue, can be analyzed as composites composed of “characterizing” and “indexical” elements (Morris 1971 [1938]). Iconicity and gesture fall out from these relations and therefore are not essential, definitional components. This approach to sign language classifiers (like many other approaches) departs from canonical understandings of classifiers in spoken languages. Therefore, I follow Slobin et al. (2003) in adopting the term “polycomponential signs.” This term allows for the combination of semiotically distinct elements, without specifying the nature of those elements (e.g. gestural, linguistic, indexical, iconic).
In section 9.3, I show how DeafBlind people created new polycomponential signs in the pro-tactile workshops and I argue that this process is a result of deictic integration. In section 9.4, I compare constraints on location in VASL and TASL. In order to isolate these constraints, I make a clear analytic distinction between social, deictic, and linguistic phenomena, all of which influence the production of signs. For example, social constraints limit possible places of articulation on the body of the addressee by applying social frames of value to communicative acts. In TASL, there are no places of articulation on the groin of the addressee--not because it is difficult to reach, but because it is considered inappropriate to touch the groins of others. Deictic constraints, on the other hand, have to do with the modes of access participants have to the immediate environment via an established set of participant frameworks in a given field.
Distinguishing between social, deictic, and linguistic constraints prevents intrusions of nonlinguistic phenomena on the linguistic analysis. It also provides a principled way of accounting for the role of nonlinguistic processes in the structuring of TASL. Finally, in section 9.4.1, I track the transformation of particular components in polycomponential signs as values are retrieved from a tactile, rather than a visual, deictic field. I show how the affordances and limitations of the tactile modality subsequently force changes in production, and how these constraints are applied to new TASL signs. I conclude with some thoughts about potential trajectories for the continued development of TASL.
9.2 Classifier Constructions in VASL
Classifier constructions in signed languages were initially named for their similarity to a subcategory of spoken language classifiers called “verbal classifiers.” Spoken language verbal classifiers consist of a morphological element, affixed to the verb, which classifies one of the verb’s nominal arguments according to semantic criteria. Consider, for example, the forms represented below, which are found in Diegueño, a Native American language spoken in California (Langdon 1970:78, cited in Grinevald 2000:67):
a’mi ... ‘to hang (a long object)’
p’mi ... ‘to carry (like a bucket)’
tumi ... ‘to hang (a small, round object)’
In visual signed languages there are similar constructions. For example, in VASL, a morphological element that looks like the b-handshape (Figure 9.1) can be incorporated into a verbal sign to classify one of its nominal arguments as a flat, rectangular thing. When the b-handshape is embedded in a representation of an action involving an object, it systematically draws attention to the flat and rectangular qualities of that object. Therefore, its form is tied to a stable semantic function. However, the movement and location parameters of the verbal element are not stable in the same way. Rather, their formal properties and meanings vary according to dynamics and relations outside of the language.
Figure 9.1: A morpheme used to classify objects as flat and rectangular
For example, if the remembered, imagined, or actual location of the table is to the signer’s left, then the activity of “laying” is conveyed by moving the semantic element to the left, toward the remembered, imagined or actual table. This part of the sign often incorporates gestural material. However, the gestural material, upon incorporation, is subject to formational constraints, which are linguistic.
These more context-sensitive dimensions of classifier constructions have often been associated with “iconicity.” For example, following Supalla and Newport (1978), Mandel defines VASL classifiers as “a rule-governed system of iconically-derived morphology that allows signers to generate novel verbs of motion and location with complex meanings” (1981:204).
However, iconicity must be “limited to allow signers to chunk and process material as phonology at the high speeds of linguistic interaction which require choosing between discrete alternatives, with the room for imprecision that that implies” (ibid.:206). Distinctions of direction, distance, and speed are far more limited than what the visual body is physically capable of perceiving and what the musculature can produce in non-linguistic processing. For example, the difference between a 90 degree left turn and a 105 degree left turn cannot be coded in the ASL classifier system because direction is “digitized” in quanta greater than 15 degrees (ibid.:208).
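The effect of such “digitization” can be sketched as a simple quantization function. The 45-degree quantum below is an illustrative choice; the text specifies only that quanta exceed 15 degrees:

```python
def quantize_direction(angle_deg: float, quantum_deg: float = 45.0) -> float:
    """Snap a direction to the nearest multiple of the quantum."""
    return quantum_deg * round(angle_deg / quantum_deg)

print(quantize_direction(90.0))   # 90.0
print(quantize_direction(105.0))  # 90.0: the 15-degree difference is lost
```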
Therefore, under this view, classifier constructions are composed of (1) a semantic element, or stable form-meaning correspondence; (2) an “iconic,” gestural component that is coordinated with the semantic element; and (3) analysis of the composite sign to the formational parameters of the language, which allow the addressee to process the sign at linguistic speeds.
In what follows, I argue that in TASL, constructions like these are formed through a coordination of linguistic and indexical elements. Iconicity is understood as an effect of coordination, and is therefore attributed very limited significance in sign creation. Indexicality can be understood in many ways. In this chapter, I am drawing on a specific definition of the term, which I take from the semiotician Charles Morris (1971 [1938]).
In order to account for the relationship of the sign to context, Morris posits a three-way distinction between indexical, characterizing, and universal signs. Indexical signs denote an object and are exemplified by pointing. Characterizing signs denote objects and analyze them in some way, highlighting certain aspects (1971 [1938]:17). In order for an object to be responded to, it must be located in terms of its relevant characteristics, which requires the combination of a characterizing sign and an indexical sign. The characterizing sign provides the determinateness of expectation (if I say “round,” you expect something round), and the indexical sign provides the directivity of reference (you know where to direct your attention). Lastly, there must be signs that indicate the relation of these signs to one another and their relation to the class they are members of. These are “universal signs” (ibid.:17).
In Morris’s terms, classifier constructions incorporate characterizing and indexical elements. Characterizing elements are coded in conventional handshapes, movements, and locations in the language. Indexical elements allow signers to place these characterizations in spatial configurations, which direct the addressee’s attention to referents in particular ways. Relations of resemblance between the characterizing element and the referent only appear after shared modes of access to the referent have been established, and are therefore relatively unimportant for processes of language emergence.2 It is the composite form, which combines indexical and characterizing elements, that is central in the creation of new signs. These composites, which derive, in part, from nonlinguistic phenomena, become signs as they are analyzed to the formational parameters of the language.
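As a purely schematic illustration of this analysis, the sketch below models a composite sign as a combination of a characterizing element and an indexical element, admitted only if it can be analyzed to a toy set of formational parameters. The class names and the inventory of licensed locations are my assumptions for illustration, not the author’s formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Characterizing:
    """A stable form-meaning correspondence, e.g. a conventional handshape."""
    form: str      # e.g. "b-handshape"
    meaning: str   # e.g. "flat, rectangular thing"

@dataclass(frozen=True)
class Indexical:
    """A context-dependent element that directs the addressee's attention."""
    ground: str    # e.g. "addressee's forearm"
    target: str    # e.g. "body of the car charger"

# Hypothetical stand-in for the formational parameters of the language:
# only composites produced at licensed locations count as signs.
LICENSED_LOCATIONS = {"addressee's forearm", "addressee's palm", "addressee's back"}

def compose(char: Characterizing, index: Indexical) -> dict:
    """Combine characterizing and indexical elements into a composite sign,
    admitting it only if it is analyzable to the (toy) formational system."""
    if index.ground not in LICENSED_LOCATIONS:
        raise ValueError(f"{index.ground!r} is not a licensed location")
    return {"form": char.form, "meaning": char.meaning,
            "directed_at": index.target, "produced_on": index.ground}

# For example, a handshape characterization of the charger body,
# produced on the addressee's forearm:
sign = compose(Characterizing("f-handshape", "small, round thing"),
               Indexical("addressee's forearm", "body of the car charger"))
```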
The combination of semiotically distinct elements in sign language classifiers departs from canonical understandings of spoken language classifiers (See also Edwards (2012:43-9)). In response to this and other discrepancies, alternate terms have been proposed, including “polycomponential signs,” which has been gaining ground in recent years (e.g. Slobin et al. 2003, Quinto-Pozos 2007, Morgan and Woll 2007, Schembri 2003). Slobin et al. justify their use of “polycomponential signs” as follows:
In [the Berkeley Transcription System], signs that incorporate “classifiers” are treated like other complex signs, which we refer to as polycomponential signs. Like Elisabeth Engberg-Pedersen (1993), Adam Schembri [2003], and others, we seek to represent the range of meaning components, both manual and nonmanual, that co-occur in complex signs. [...] We have chosen to use polycomponential, rather than Engberg-Pedersen’s polymorphemic, because we are not ready to determine the linguistic status of each of the components, manual and non-manual, in complex signs. And we have replaced Engberg-Pedersen’s verbs and Schembri’s predicates, with signs, because the handshape expressions under study are used in verbal, adjectival, and nominal constructions.
The focus of this chapter is a new system used by DeafBlind people to create new signs. These signs often incorporate gestural and linguistic elements into a range of construction types--e.g. adjectival, verbal, and nominal. Therefore, the term “polycomponential” used by Slobin et al. is fitting, and will henceforth be adopted.
9.3 Polycomponential Signs in TASL
During the pro-tactile workshops,3 participants engaged in certain activities that required the creation and use of polycomponential signs. One of these activities was a game where DeafBlind participants were organized into dyads and each dyad was given a bag full of objects--things like old cell phones, toy snakes, and tea strainers. One DeafBlind person would pull an object out, explore it tactually, and then describe it in detail to the other DeafBlind person. When they were done, they handed the object to their partner, who explored it tactually, and then evaluated the description in terms of how well it prepared them for the qualities of the object, or in the terms of the game, whether or not the description “matched” the thing.
This required a formal mechanism for characterizing the object in terms of its relevant and accessible qualities. Participants all started out using VASL constructions for this task. However, these forms often led to frustration, blank stares, confusion, and eventual requests for intervention on the part of the instructors. When Lee intervened, she resolved the problem by introducing constructions like the ones presented below, which I consider new TASL signs.
In contrast to the VASL constructions, TASL signs tend to elicit recognition and participation. This interactional effect can be attributed to two things. First, these signs represent tactile qualities of objects, rather than visual qualities; and second, the composite sign composed of characterizing and indexical elements is analyzed to the formational parameters of TASL rather than those of VASL. This results in a meaningful, perceptible sign that can be distinguished from other signs given tactile production, reception, and modes of access to the immediate environment.
The following series was taken from an interaction between Lee, Allen, and Lina, who were playing the game described above. Allen had been using VASL constructions to describe the object, and Lina could not understand. The object was a phone charger like the one in Figure 9.2. Because Allen and Lina are having trouble communicating, Lee intervenes.
Figure 9.2: The Car Charger
She begins by describing the body of the car charger. She clasps her index finger and thumb around the wrist of the addressee, while holding the addressee’s hand in place. She slides her hand toward the elbow of the addressee (Figures 9.3 and 9.4).
Geometrically, the sign is composed of two circular shapes and a relative spatial relation between them, which together yield a cylindrical shape. The spatial relation is established by holding the hand of the addressee in place, thereby signaling the ongoing relevance of the first circle and anchoring its location relative to the second circle.
From the perspective of the addressee, the cylindrical shape is, at this point, abstract. However, as the interaction continues, this region on the body of the addressee is used to ground relative spatial relations between the body of the car charger and other parts of the car charger such as the cord and the tip, where it is plugged in. The handshapes used to represent these various parts encode meanings that are transferable across contexts (round thing, thing that moves in and out when you push on it, etc.). Therefore, they can be analyzed as characterizing elements, which provide a “determinateness of expectation.”
Figure 9.3: Sketch of Sign Representing Body of Car Charger
On the other hand, relative spatial information about the various parts of the car charger is established in interaction to draw attention to specific features of the object and distinguish it from other objects. In other words, they provide the “directivity of reference,” which in Morris’s view, is the function of indexical signs. Together, characterizing and indexical components allow signer and addressee to individuate the body of the charger in terms of its relevant characteristics, and a “match” between the sign and its referent is achieved. This match is a result of integrating characterizing and deictic elements, or what I am calling “deictic integration.”
Figure 9.4: Sign Representing the Body of the Car Charger
In Figure 9.5, Lee continues by describing the shape of the cord. First, she manipulates the addressee’s hand into a partially open fist. Then she runs her pinky finger through the inside, tracing a tight, spiral pattern on the addressee’s palm (as in Figure 9.6). She continues with this spiral motion, out and away from the addressee’s arm (Figure 9.7). The i-handshape is a conventional VASL handshape used to characterize long, thin things. However, unlike VASL signs that incorporate this handshape, the motion is produced on the inside of the addressee’s hand.
Figure 9.5: Sign Representing Shape of Cord
I encourage the reader to place their pinky finger inside of their partially-closed fist, and in a spiral motion, move from the center to the outside of the fist. If you have a spiral cord, like the one shown in the picture above, pull it slowly through your partially closed hand or move your hand slowly over it. If you have done this, you will notice a tactile resemblance between the sign and its referent. However, in order for this resemblance to appear as such, you must turn your attention to the tactile qualities of the object and the tactile dimensions of the representation. This kind of shift in orientation is possible, but not habitual for visual people, and losing vision does not automatically cause it.
Figure 9.6: Sign Representing Cord. (a) Addressee’s hand; (b) Signer’s hand
Prior to the pro-tactile movement, DeafBlind people were visual people who could not see very well, if at all. As a result of the movement, embodied sensibilities were reconfigured and formerly visual people became tactile people. In order to effectively direct the attention of a tactile person to a specific characteristic of an object, tactile modes of access must be assumed. Only then can “resemblance” function as such for both signer and addressee. The primary reason that this form is effective in conveying relevant aspects of its referent is not that it is iconic, but rather, that it is embedded in a particular deictic field.
Next, Lee continues to hold the hand of the addressee in place. This anchors the previously described car charger body, allowing other aspects of the car charger to be described in relation to it. Constructing polycomponential signs in TASL requires an anaphoric deictic field, organized by tactile modes of access. One of the reasons that VASL polycomponential signs became difficult to perceive was that anaphoric relations were difficult to track against a visible backdrop. These moments of anchoring in a tactile field turn the previously objectified aspect of the charger into the ground against which other aspects are objectified.
You can see this process continue to unfold in the next move, when Lee describes the cord by articulating the spiral motion in an outward trajectory from a tactile point of contact on the elbow of the addressee. This establishes a spatial relationship between the cord and the body of the car charger. The relation is signaled by continuing to hold onto the addressee’s hand, thereby keeping the tactually accessible ground present in the description (Figure 9.7). Finally, she uses the VASL sign “plug-in,” indicating that the spiral shaped portion of the object she has just described is a cord for an electrical device.
Figure 9.7: Representation of Cord Location Relative to Body of Charger
In Figure 9.9, Lee describes the button at the tip of the charger (Figure 9.8) by grasping the index, middle, and ring finger of her addressee. She presses on the tip of the middle finger several times as in Figure 9.10. Imagine yourself exploring this object tactually. As you run your fingers over the body of the charger, and up toward its tip, you encounter a small piece of metal, which gives way to your touch. The most salient thing about this part of the charger, from a tactile perspective, is the fact that it moves when pressed on, while the rest of the charger remains stationary. The sign representing this metal button is, therefore, iconic. However, the assumption that the addressee will explore the object tactually has to do with conventional modes of access, which are organized by deictic, not iconic relations.
Finally, Lina is given the actual car charger to explore tactually. She explores the cord first, then the body of the car charger, and finally, its tip, which she presses on several times. Lee taps on her arm and then on her leg and asks her if the representation matches her experience of the object. Lina says no, so Lee asks her why not. Lina runs her fingers over the body of the charger and then pushes down on the button at the tip and says that Lee failed to describe the button. Lee insists that she did describe the button and repeats her previous description (Figure 9.10). Lina laughs and emphatically signs “oh-i-see,” meaning that she understands. But Lina draws Lee’s attention to another feature of the object--a small metal spring on the side of the body of the charger that holds the charger in place once it is plugged in (Figure 9.11). Lee says, “Oh! I didn’t notice that!”
Figure 9.8: Button at tip of charger
Figure 9.9: The Car Charger Tip
Figure 9.10: Representation of button
Figure 9.11: Metal springs on car charger
In order to describe this portion of the car charger, Lee isolates the index and middle fingers of both interlocutors and then pushes and releases several times on the sides of the fingers, as in the sketch in Figure 9.12. I encourage the reader to produce this sign on your own hand, or even better, someone else’s hand.4 You will notice a feeling that is tactually similar to pressing on small, metal springs. Once again, however, the assumption that the addressee will have tactile, rather than visual knowledge of the object follows from a certain configuration of indexical relations, and this is a prerequisite for a relation of resemblance to appear.
Figure 9.12: Representation of metal springs
In Figure 9.13, Lee is duplicating the sign--one copy for each addressee.5 At this point, Lina, Lee, and Allen all agree that the various parts of the description correspond to the various aspects of the object and that their combination counts as a legitimate way of representing the tactile qualities of the car charger. This kind of negotiation was common in the pro-tactile workshops.
Figure 9.13: Lee Duplicates a Representation of the Metal Springs
The workshops were experimental and collaborative, and though the instructors had far more experience and were clearly leading the group, all participants contributed to clarifying and adjusting signs to integrate them more seamlessly with their shared experience. Novel signs were evaluated either explicitly (as in this case) or implicitly in interaction (e.g. addressees expressing confusion, irritation, requests for clarification, etc.).
This interactional process, which I call “sign calibration,” is leading to the integration of linguistic and deictic elements, or deictic integration. Deictic integration is, in turn, contributing to an emergent set of constraints for generating polycomponential signs, which diverge from those found in VASL. The remainder of this chapter will examine the nature of those constraints and their relation to corresponding constraints in VASL.
9.4 Constraints on Location in VASL
In VASL, there are restrictions on where signs can be produced. For example, Stokoe (1960) observed that the “zero tab” (or the space in front of the signer) is constrained by motor capacity as well as economy. While it is physically possible to articulate signs in other areas, a restricted area in front of the signer’s body allows for the greatest ease of articulation (2005[1960]:25). Klima and Bellugi sharpen this observation via a comparison with non-linguistic body movements in pantomime.
In free pantomime there are only physiological restrictions on the space used differentially in conveying a message. To mime opening a door, putting on a boot, or picking apples off a tree, a person may walk around, reach down to his feet, or extend his arms high above his head. By contrast, ASL signs in citation form are made within a highly restricted space defined by the top of the head, the waist, and the reach of the arms from side to side (with elbows bent) (1979:51).
The fact that signs are not produced in locations outside of this space, despite the physical possibility of doing so, shows that location is constrained, at the very least, by economy. In addition, there are arbitrary constraints that come into view in a cross-linguistic frame. For example, the back of the head and the underarm are never used in VASL, but in other signed languages they are (Mandel 1981:11).
9.4.1 Implications for Formational Constraints in TASL
The use of locations in TASL, which are never used in VASL,6 suggests a divergence in underlying constraints--some of which follow from conditions of production and reception in a tactile modality, and some from arbitrary and/or nonlinguistic orders. In Figure 9.14, I have highlighted regions on the addressee’s body where polycomponential signs are produced in TASL. Examples of some of these locations are represented in Figures 9.15, 9.16, 9.17, 9.18.
Figure 9.14: Locations on Addressee’s Body Where TASL Signs are Produced
Notice that articulation is not performed on the groin area, the area below the knees, the inner portion or backs of the thighs, the feet, or the front of the neck of the addressee. Some of these restrictions are attributable to principles of economy or motor-perceptual capacity. For example, it is hard to envision a bodily configuration in which the feet of the addressee would be readily accessible to the signer. Likewise, in a standing configuration, the backs of the thighs are hard to reach, and while sitting, they are inaccessible.
Figure 9.15: Examples of Locations on Addressee’s Arm
Figure 9.16: Examples of Locations on Addressee’s Head and Face
Figure 9.17: Examples of Locations on Addressee’s Shoulder and Neck
Figure 9.18: Example of a Location on Addressee’s Back
However, there are also many non-linguistic constraints. The groin of the addressee, for example, cannot be admitted into the linguistic system because it is socially unacceptable to touch this area of the body for routine communicative purposes. The same is true of the inner portions of the thighs. These kinds of constraints derive from particular, historically constituted social fields. For DeafBlind people in the pro-tactile workshops, all tactile contact with the body of the addressee, even in relatively uncontroversial locations such as the arm, required major adjustments in evaluative frames.
In addition, there are constraints on sensory orientation and modes of access that do not derive from the language or from the social field, but from the deictic field. As outlined in Chapter 1, the deictic field includes: (1) “the positions of communicative agents relative to the participant frameworks they occupy”; (2) “The position occupied by the object of reference”; and (3) “The multiple dimensions whereby agents have access to objects” (Hanks 2005b:192-3).
The physical relation of one body to another is organized by participant frames and frameworks. Participant frameworks become conventional, and this leads certain physical relations in interaction to become expectable, such as standard distance between speaker and addressee, relative symmetry in height, reciprocal sensory orientations, etc. In order to identify constraints on production and reception in a given language, observed instances of use must be performed in unmarked interactional contexts.7 For TASL, this kind of regularity in reciprocal, tactile interaction has only begun to emerge over the past 5-7 years. This, in combination with shifts in sensory orientation, has made the emergence of stable, tactile constraints on production and reception possible.
Figure 9.19: The Structure of the Deictic Field
In the deictic field (schematically represented in Figure 9.19), access to objects is grounded in the bodily configurations through which participant frames are realized.8 Therefore, objects are objectified against a background which includes the corporeal sphere occupied by speaker and addressee, as well as many other things such as “common sense,” shared knowledge, etc. There is a shift in perspective that is necessary for grasping this fact. Rather than viewing the body as a producer and receiver of signs, it must be viewed as part of the indexical ground of communicative activity. The body that appears under a deictic perspective interacts in crucial ways with the body that appears under a linguistic perspective, but it is not identical with it, and must be distinguished analytically. As a result of the pro-tactile movement,
relations between the body and objects in the immediate environment snapped to a new set of coordinates organized by tactile, rather than visual modes of access (See Chapter 5). This essentially non-linguistic transformation affects the linguistic system, since signs are among the objects that must be accessed via particular bodily configurations.
The transfer of signs from a visual to a tactile field among DeafBlind people in Seattle can be productively broken into two moments. The first consists of a kind of deictic transposition, which I call “signal transposition.” The second consists of a change in formational constraints on location triggered by this process.
In Figure 9.3, the signer uses a handshape that is similar to the f-handshape in VASL (See Figure 9.20) to characterize the body of the car charger as a small, round thing. However, rather than being produced against the visible backdrop of the signer’s body, it is produced against the tactile backdrop of the addressee’s body. This is what I am calling signal transposition.
Figure 9.20: The “F” Handshape in VASL
The tactile surface of the addressee’s body has different limitations and affordances than the space on and in front of the signer’s body. Therefore, signal transposition triggers changes in constraints on the production and reception of signs. For example, for the curve of the f-handshape to be perceptible tactually, it has to wrap around a curved surface. The curve of the addressee’s forearm lends itself to this function, since it is also curved. However, the index finger and thumb of the signer cannot close entirely around the arm of the addressee, so the handshape must be open, rather than closed, as it is in VASL. In this case, a kind of borrowing can be reconstructed, where a VASL handshape is fit to motor-perceptual constraints in a tactile channel.
However, this kind of link between the visual and tactile forms is not always recoverable. For example, there is no VASL handshape that corresponds in any obvious way to the TASL handshape in Figure 9.9. This has to do, in part, with the fact that tactile, rather than visual dimensions of objects are being represented. However, it also has to do with motor-perceptual limitations of the tactile modality. The tip of the charger itself is small and round. VASL has ways of characterizing small round objects, which can involve the f-handshape in Figure 9.20. However, the tip of the addressee’s finger has a highly restricted surface area relative to the size of the signer’s hand. It is not clear how this handshape could be used in this location. Instead of using the VASL handshape, the signer presses on the tip of the
finger several times to show how the button at the tip of the charger moves when pressed. This sign is shaped by a tension between articulatory constraints on the signer’s hand and the limitations and affordances of the surface on which signs must be produced.
Analogous tensions undergird the sublexical organization of VASL. Battison (1978) observes that in VASL, the configurations of the hands with respect to one another and the relative positioning of the fingers within each of the hands imply a fairly compact spatial zone of activity. When signs are articulated by moving the whole hand from one location to another, a different spatial scale and correspondingly different motoric and perceptual requirements are involved. The internal features of handshapes maximally occupy the space of an extended 5-hand and a bit of space around it. In contrast, locations require differentiations in a much larger spatial zone that includes the space in front of the torso and the face. This discrepancy between the motor-perceptual activities required to produce and perceive handshapes and those required to produce and receive signs articulated at locations in the signing space calls for some kind of “compensation.”
Compensation is achieved in three ways. First, locational targets in larger spaces must be further apart. Second, the “visual backdrop of the body itself” serves to differentiate locations. Battison writes, “Locations in signing space are not differentiable by relative distance alone, but by their proximity or relations to the gross landmarks of the body--the head, chin, shoulders, waist, etc” (1978:41). Third, different areas of signing space allow for different levels of articulatory complexity--from less complex to more complex, moving vertically from the waist to the head (ibid.). In support of this last claim, Battison shows that there are greater numbers of marked handshapes produced as the location in signing space grows higher, approaching the head (ibid.:42-3).
Thus, it does appear that the vertical location component of signs is systematically restricted in a manner consistent with the need to keep visual elements perceptually distinct. Areas higher in the signing space permit more complex combinations of manual visual elements, both in terms of fineness of location distinctions and the complexity of individual handshapes (ibid.:43).
Addressees tend to fix their gaze not on the hands of the signer, but on the lower part of the signer’s face; therefore, this pattern may also follow from visual acuity. The closer the location is to the central field of vision of the addressee, the more complex and finely differentiated the handshapes can be. Further from this area of high visual acuity, more unmarked handshapes (simpler handshapes) and two-handed signs would be used to increase redundancy in the signal (Siple via Battison 1978:43).
Restrictions on new TASL signs diverge from those described for VASL because the signing space on the surface of the addressee’s body carries different affordances than the signing space in front of the signer’s body. First, it is not necessarily the case that locational targets in larger spaces must be further apart. It seems (so far) that tactile locations within a large body area, such as the back of the addressee, can be just as finely differentiated as on the addressee’s palm without causing perceptual difficulty. Second, it is not clear yet what regions of the body will become most salient in distinguishing locations from one another; however, they are not likely to be the same as those in VASL. For example, from a tactile perspective, the elbow joint is more perceptually salient than the chin, and therefore, is a better candidate for landmark status. Third, in TASL, the palms of the hands, the forearms, and the back of the addressee permit more complexity in handshapes and fineness of location distinctions than either the face of the signer or the face of the addressee. This suggests that the vertical arrangement of articulatory complexity described by Battison does not hold for TASL. Finally, and related to this, areas of greater tactile acuity are not identical to areas of greater visual acuity.
In VASL, the addressee rests their gaze on a particular region of the signer’s body. In contrast, the addressee’s body is not only available to the TASL signer given conventional bodily configurations, but is actively manipulated by her. For example, in Figure 9.21, the signer (right) is manipulating the arm of the addressee (left). First, she raises his arm, and holding his hand in place, she touches three locations on his body. In Figure 9.21a, she touches his shoulder near the outer edge of his collar bone. In Figure 9.21b, she touches the outside of his elbow. In Figure 9.21c, she touches the palm of his hand. This establishes a relative spatial relationship between three geographic locations she is representing.
Figure 9.21: Signer Manipulates Addressee’s Arm to Produce Sign
The signer in this example is Adrijana, one of the instructors of the workshops, who had been developing pro-tactile practices for about 4 years at the time. In Figures 9.22 and 9.23, a less experienced tactile signer (left) is learning to represent relative spatial locations in this way. Over the course of the interaction, he attempts to produce signs by manipulating the addressee’s hands and arms, but he encounters limitations in the mobility of the joints. In Figure 9.22a, he attempts to mark locations on the back of the addressee’s hand, and in doing so, flexes her wrist beyond what is comfortable and has to adjust. In Figure 9.22b, he has the opposite problem, where he encounters the limits of flexion in the addressee’s wrist. After this attempt, he leans back, and he and his interlocutor laugh and comment on the awkward position they ended up in. In Figure 9.23, he encounters similar problems, but this time, the problems include the shoulder joints as well.
Movements like these, which require hyperflexion or extension of the joints, are not permitted in TASL. They are only found among people with very little or no exposure to pro-tactile practices, and are often followed by laughter and comments about how awkward or uncomfortable it is to produce such signs. These types of constraints are not a question of tactile acuity, but of mobility in the joints of the addressee and the spatial resources it affords for producing and receiving signs.
Figure 9.22: Hyperflexion and Hyperextension of Addressee’s Wrist
Figure 9.23: Hyperflexion of Addressee’s Shoulder
The fact that the boundaries of the arm, as well as its position relative to the body of the signer, are both resources for producing the sign raises an interesting problem. Is the arm of the addressee serving a strictly perceptual role? Or is there also an articulatory function involved?
In some cases, the answer to this question is less ambiguous. For example, in Figure 9.24, the signer (left) is describing the movement of a snake’s body. She grips Manuel’s arm just below the armpit, and holds onto his wrist. Then she moves each point of contact alternately to produce a snake-like motion in Manuel’s arm. In Figure 9.24a, she moves Manuel’s arm away from her body, and in Figure 9.24b, she moves it back again.
(a) (b)
Figure 9.24: Addressee’s Arm as Articulator
This requires motor coordination between signer and addressee. The addressee must be responsive, like a dancer following their partner’s lead. Therefore, motoric constraints on polycomponential signs like these would have to be distributed over the dyad. In addition, there are three, rather than two, articulators involved.
In VASL, there are constraints on articulatory complexity in two-handed signs, which can be succinctly stated as follows: “Maximize symmetry and restrict complexity in the handshape features of the two hands”9 (Eccarius and Brentari 2007:1198).
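As a rough illustration of what a constraint of this kind rules in and out, the sketch below encodes the symmetry and dominance conditions associated with Battison (1978), which the quoted formulation condenses. The boolean interface and the specific handshape labels are simplifications assumed for illustration, not a claim about either author’s analysis.

```python
# Battison's (1978) widely cited unmarked handshapes: B, A, S, C, O, 1, 5.
UNMARKED = {"B", "A", "S", "C", "O", "1", "5"}

def licit_two_handed(dominant: str, nondominant: str, both_hands_move: bool) -> bool:
    """Rough check of the symmetry and dominance conditions:
    if both hands move, their handshapes must be identical (symmetry);
    if only the dominant hand moves, the static hand must bear an
    unmarked handshape (dominance)."""
    if both_hands_move:
        return dominant == nondominant     # maximize symmetry
    return nondominant in UNMARKED         # restrict complexity

assert licit_two_handed("5", "5", both_hands_move=True)       # symmetrical: licit
assert not licit_two_handed("5", "R", both_hands_move=False)  # marked static hand: illicit
```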
TASL permits signs that require three articulators, each one with a distinct motor task. This constitutes an increase in complexity that exceeds constraints on two-handed VASL signs. At this time, there are not enough data to attempt a systematic analysis of constraints on complexity in three-handed signs in TASL. However, the fact that signs like this are being produced suggests that the rules for generating polycomponential signs in TASL are diverging in fundamental ways from those in VASL.
9.5 Effects of Deictic Integration on Formational Constraints in TASL
In this chapter, I have shown that new modes of access and orientation and new participant frameworks are leading to the emergence of a new set of formational constraints in TASL. This transformation is occurring in two moments. First, signs are being transposed from a visual to a tactile ground, a process I call signal transposition. This leads the signer to encounter new affordances and limitations on sign production, which, in turn, influence the way signs are distinguished from one another.
In VASL, locational targets must be further apart when they are located on or in front of large body areas, such as the torso. In TASL, this is not the case--locational targets on larger areas, such as the back of the addressee, can be just as close together as those on smaller areas, such as the palm of the addressee. This may be related to the fact that the tactile backdrop of the addressee’s body is itself differentiated in ways that differ from the visual backdrop of the signer’s body. From a visual perspective, certain body areas, such as the chin, nose, eyes, etc. are visually salient, and therefore make good “landmarks” which can be used to help distinguish one location from another in a visual modality. From a tactile perspective, the elbow of the addressee is more perceptually salient than the nose or the chin of the signer. Therefore, as the language develops, the tactile ground of signs will likely be split into contrastive regions that do not correspond to those found in VASL.
There are also constraints on the formational complexity of handshapes and the fineness of location distinctions in VASL, which do not correspond to emergent constraints in TASL. For example, the palms of the hands, the forearms, and the back of the addressee permit more complexity in handshapes and fineness of location distinctions than the face and head do in TASL. In contrast, complexity increases as you move vertically from the waist to the head of the signer in VASL. In addition, in TASL, the hands and arms of the addressee are manipulated. These manipulations are limited by the mobility in the joints of the signer and addressee, and the ability of the dyad to coordinate movements. The system is new; however, these kinds of limitations point to emergent cognitive and motoric constraints on manual coordination in TASL, which differ from those found in VASL.
All of this is evidence that new formational constraints are emerging in the tactile system. Some of these constraints, such as limitations on mobility in the joints, may be common to all tactile signed languages, and therefore attributable to the modality itself. Others, such as the body areas within which signs are permitted, might vary across tactile signed languages, and therefore be attributable to social, interactional, or arbitrary constraints. In order to pursue these lines of inquiry, additional tactile languages, which are used in a reciprocal sensory channel, will need to be examined.
With respect to VASL, the most dramatic divergences are found not in the lexicon, but in polycomponential signs, which incorporate both characterizing and indexical elements and are, therefore, more sensitive to context. Constructions like these have been shown to be a new source of lexical items in nearly all signed languages studied to date (Aronoff et al. 2003, McDonald 1982, Engberg-Pedersen 1993, Klima and Bellugi 1979, Schembri 2000, Shepard-Kegl 1985, Zeshan 2003). Therefore, it is expectable that these changes will contribute to a more comprehensive restructuring of TASL at the formational level. These changes are all driven by a process of deictic integration, through which characterizing and deictic elements are coordinated with one another in tighter and more restricted ways over time.
Chapter 10 Conclusion
In this dissertation, I have shown that the grammars of Tactile American Sign Language (TASL) and Visual American Sign Language (VASL) are currently diverging as a result of changes in the social and deictic fields engaged by DeafBlind people in Seattle, Washington. I have argued that this grammatical divergence is a result of contextual integration, which involves the coordination of the linguistic system with the deictic and social fields in which it is instantiated.
I compare the emergence of TASL with three previously documented cases: homesign systems in Philadelphia and Chicago (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985), Nicaraguan Sign Language (A. Senghas 1999, A. Senghas and Coppola 2001, Kegl et al. 2001), and Al-Sayyid Bedouin Sign Language (Sandler et al. 2005, 2011, Forthcoming). I show that in all three cases, the emergence of a language-like system corresponds to a tightening of relations between linguistic, deictic, and social phenomena. In the homesign case, deictic and characterizing signs combine in increasingly predictable orders. The reason homesign does not develop into a full-fledged language is that it is not embedded in a viable social field.
In Nicaragua, language emergence is associated with the emergence of spatially modulated verbs. I have argued that spatial modulation in signed languages is the result of deictic integration. Furthermore, I show how the integration of linguistic and deictic systems in Nicaragua was preceded by the establishment of a social field with an internally asymmetric structure. Therefore, I identify the broader phenomenon of contextual integration as a driving force in the emergence of Nicaraguan Sign Language as well.
I have also argued that deictic integration plays an important role in the emergence of Al-Sayyid Bedouin Sign Language (ABSL). ABSL has recently developed a productive morphological process whereby one deictic and one characterizing sign are compounded to produce place names. As these connections have become increasingly conventionalized, the order of the compounded elements has become fixed; the deictic component is word-final. This consistent ordering of elements, in addition to changes and reductions in the movements of the signs, enacts the same kind of tightening of relations between deictic and linguistic phenomena that was noted in the NSL and homesign cases.
In addition, a reconfiguration of the social field among ABSL signers is threatening the viability of the language (Kisch 2012). Over the past 30 years, many changes have taken place, including the establishment of separate schools for deaf and hearing children, changes in marriage patterns, and shifts in the availability of employment (ibid.). These changes are all converging to make ABSL a less legitimate means of position-taking in a viable social field. I have argued that deictic integration is not enough. In order for a full-fledged language to emerge and be sustained, a broader process of contextual integration must transpire, through which linguistic, deictic, and social orders are coordinated with one another in tighter and more restricted ways over time. This means that “a language” is not strictly linguistic. Rather, it coheres in the relations of embedding between linguistic, deictic, and social phenomena.
This approach to language emergence is complementary to those that focus on the innate capacities of the human mind. While those approaches have focused on the role of abstraction in “liberating” language from its contexts of use, I have emphasized the role of integration, through which deictic and social relations are increasingly caught up in, and coordinated by, linguistic processes, and vice versa. Whether the goal is to understand context or to factor it out, practice theory is useful for understanding the emergence of new grammatical systems as influenced by, but distinguishable from, broader socio-historical and interactional processes.
Endnotes
Chapter 1
1 This orthographic representation emerged along with the pro-tactile movement, and has since come into widespread usage.
2 See section 8.2 on page 193 for more on the efficacy of tactile reception of VASL signs.
3 The pro-tactile movement is not an identity movement, nor is its focus language standardization. Rather, its focus is co-presence and the hope of communicating in ways that feel effortless and “natural.” It is also about building a home world that can truly be inhabited. The sighted world cannot be inhabited, but given a strong and intuitive grasp of the tactile world, analogic relations can be established, and the world of the sighted--the broader society in which DeafBlind people live--can be imagined and therefore maneuvered within. Without a home world, the worlds of others cannot begin to be grasped or changed (see chapter 4).
4 The conceptual framework that accounts for this process is not ‘integrationist’ in the sense of Toolan 1999, Harris 2002, or Love 2006. See Edwards (2012:65) for a more detailed discussion of the differences between the two frameworks.
5 See chapter 2 for discussion of three cases in which language-like systems, or full-fledged languages, have emerged, and the role of contextual integration in these processes.
6 Sidnell and Enfield use this term in precisely the opposite way. They mean that as interactants select certain lexico-grammatical resources to accomplish interactional goals, there are consequences for how the interaction unfolds (Sidnell and Enfield 2012:313). I mean that socio-historical changes unfold in a semi-autonomous field, governed by distinct principles of organization. Likewise, interaction is structured by principles which are unique, and therefore, the field in which language-users interact is also semi-autonomous. Finally, languages and the sub-systems they are composed of are also semi-autonomous. Nevertheless, when socio-historical processes affect the structure of the social field, there can be collateral effects for the structure of interaction and for the organization of the language itself.
7 Saussure identifies three aspects of language: langue, parole, and langage. Langue is the formal system, parole is language-in-use, and langage is the whole thing together. Although not unimportant, parole is ultimately left to other disciplines, and Saussure names langue as the proper object of linguistics (1972 [1915]:66). In the approach taken here, formal systems are distinguished from interactional and social processes. However, the semiotic status of a whole language cannot be ascertained from a linguistic, interactional, or social perspective; all three are necessary, and a theory of the relations that obtain between them is required.
8 I have examples of VASL signs produced in this way, however, I am not including frames in the dissertation because I need to protect the identity of these signers.
9 Green (2014) and Goodwin (2000) show many of the ways that radically non-reciprocal linguistic competence can be overcome (or not) via social and interactional means. I am arguing that similar procedures can act not only as a means of circumventing asymmetries, but also as a means of correcting them via augmentation of the linguistic system itself.
10 The authors thank Stephen Anderson, David Perlmutter, and Maria Polinsky for independently raising these questions in person.
11 RJ Senghas has made a similar point with respect to second-hand accounts of the Nicaragua case (2003:272). He notes that Chomsky, in an interview with the BBC, claimed that the Nicaragua case involved the development of a new language based on “no external input.” Senghas points out that this is observably untrue. What was missing was linguistic input, but both socio-cultural and non-linguistic semiotic resources were available to deaf Nicaraguans. Also see Russo and Volterra (2005) and Fusellier-Souza (2006). Kisch (2012) makes similar observations about research on ABSL.
12 This observation also applies to language maintenance and language shift. When a language cannot be used as a legitimate means of position-taking, it is likely to be replaced by one that can. This perspective can be understood in contradistinction to the idea that languages preserve or transmit culture (see Muehlmann 2013:146-69 for discussion).
13 The transmission of the habitus in the Deaf and DeafBlind communities is less straightforward, since most Deaf and DeafBlind people do not have Deaf or DeafBlind parents. The habitus is transmitted within the community, usually in later stages of childhood and beyond. Nevertheless, a Deaf habitus forms and can be recognized. For example, Bahan describes a scene where a father and daughter are sitting in a cafe people-watching. The father tells the daughter to look into the crowd outside and identify the Deaf person among them and she does so successfully, despite the fact that he was not signing. Bahan attributes her success to the fact that she and the man she identifies are both “people of the eye” (2008:83). In the present framework, it is attributable to a shared, visual habitus, which can be identified via habitual modes of orientation, navigation, and comportment.
14 See index in Bühler (2001[1934]:499) for specific page numbers.
15 For more on Bourdieu’s sources in connection with the field concept (including structuralist thinkers, the Russian formalists, and others), see Hanks 2005a:72.
16 Dignity is therefore a “fieldable” value, while wealth is not. See following sections for more on fieldability.
17 There is a great deal of work on perspective in language, which I will not discuss here. However, see, e.g., Dancygier and Sweetser 2012 and Dudis 2004 for more in-depth discussion of this topic in signed and spoken languages.
18 Saussure says, “All conventional values have the characteristic of being distinct from the tangible element which serves as their vehicle” (1972 [1915]:116-17).
19 See Hanks 2005a:194.
20 Contextualization is an inferential process (i.e. Sperber and Wilson 1986, Levinson 1983), which involves “hypothesislike tentative assessments of communicative intent” (Gumperz 1992:230).
21 Keying involves a change in frame through which an activity is understood, for example, when playful, “biting-like behavior” turns to biting (Goffman 1974:41-4).
Chapter 2
1 See also Zeshan and de Vos (2012) for typological, anthropological, and sociolinguistic factors in the emergence (and in some cases decline) of new signed languages.
2 See Kisch (2012), Zeshan and de Vos (2012), Russo and Volterra (2005), and Fusellier-Souza (2006) for critical commentary.
3 This story was also used to frame an ethical debate about scientific studies of “Genie,” a girl who was deprived of all social and communicative contact for the first 13 years of her life (Rigler 1993, Rymer 1993).
4 These were their ages at the beginning of the study.
5 They explain that the caregivers used both speech and gesture in communicating with their children. Although “gesture and speech might form an integrated communication system” for hearing people, they analyzed the mothers’ communications from a visual perspective, since they took this to be the point of view of the deaf children (Goldin-Meadow and Mylander 1983).
6 Fillmore recognizes the irony in the fact that Benjamin Lee Whorf made the earliest, most forceful case for covert categories, or “cryptotypes” (see Whorf 1956:70-80) in support of linguistic relativity--precisely the opposite of their use in generative grammar, where they were the basis for universals (Fillmore 1968:3).
7 RJ Senghas has made a similar point with respect to second-hand accounts of the Nicaragua case (2003:272). He notes that Chomsky, in an interview with the BBC, claimed that the Nicaragua case involved the development of a new language based on “no external input.” Senghas points out that this is observably untrue. What was missing was linguistic input, but both socio-cultural and non-linguistic semiotic resources were available to deaf Nicaraguans. In first hand accounts, the picture is much more complex.
8 See also R. J. Senghas 2003, Polich 2005, Fusellier-Souza 2006, and Kisch 2012.
9 Polich emphasizes that “The model, however, is indebted to outside influences and outside precedents, and did not originate with Nicaraguan deaf members. Attitudes, especially from Sweden, Finland, and the United States introduced the philosophy; but starting in 1990, and especially after 1992, it was adopted by the leading members of ANSNIC, who started a campaign to include more sign language in the schools, and to increase use of Spanish/NSL interpreters for deaf persons in daily life. Without the reification of sign language brought to Nicaragua from Costa Rica, the United States, Sweden, and Spain, or without the financial aid and the anti-integrationist perspective of the SDR, it is possible that this model would have been much longer in the making” (ibid.:97). See R.J. Senghas (2003:275-277) for more on the global networks within which deaf Nicaraguans are embedded.
10 They also note a fourth “system,” which is a “pidgin” used between hearing and deaf signers--where “signers view themselves as speaking Spanish, and Spanish speakers view themselves as signing or using Mimicas” (ibid.:182). This phenomenon is recognizable given familiarity with the American Deaf community and is very interesting, but I take it to be on another level of communicative complexity in the sense that it combines the more basic systems. Therefore, I bracket discussions of it in my summary of this research.
11 Since then, similar classes of verbs have been identified in almost every signed language that has been documented (Mathur and Rathmann 2012:137).
12 It is difficult not to put almost every term used to describe spatial modulations in scare quotes since nearly all of them have attracted some kind of controversy. However, when recounting a particular view, I will use the terms put forth by the author of that view. The difficulty, for example, in using the term “affix” here will become clear below.
13 This category has been broken down into at least 5 sub-classes (See Supalla 1986, cited in Padden 1990:119). However, for the sake of brevity, they are not recounted here.
14 lifeprint.com
15 See Mathur and Rathmann 2010 for a more detailed discussion.
16 See Chapter 7 for a more detailed discussion.
17 This suggests something strikingly similar to Liddell’s analysis, despite the fact that Senghas compares spatial modulations to “grammatical endings appended to words in spoken languages,” which are presumably organized according to strictly linguistic principles, and Liddell sees spatial modulations as governed by the universal capacity to create conceptual representations of objects and relations in the world.
18 In some of the earlier work (e.g. Kegl et al. 2001), the various home sign systems that children came into school with were viewed as substrates, which, in the absence of an accessible superstrate, combined with one another to form something like a pidgin. Over time, the pidgin was “elaborated” as it underwent creolization. The word “elaboration” implies an increase in complexity, not a process of abstraction. However, in this work, elaboration is seen as the product of language acquisition. In this process, the innate structures of the language-ready mind act on imperfect, or impoverished input (the home sign systems) to produce something more complex and systematic. Therefore, there is no construct established for explaining the interaction of linguistic and non-linguistic phenomena, unless one considers the innate structures of the language-ready mind to be non-linguistic, which as was discussed in section 2.1.3, cannot be the case.
19 There was one report of a deaf man who had befriended another deaf man from a neighboring settlement in the 1960s. In addition, one of the deaf members of the first generation of signers had partial literacy in Arabic. However, aside from these very limited kinds of exposure, deaf signers were not exposed to any external signed or spoken languages (ibid.)
20 In the 1960s, a few deaf children were enrolled in a school for one year, where they acquired some basic Arabic literacy and were exposed to Jordanian Sign Language (ibid.).
21 Kisch points out, however, that many different social factors must be considered in constructing boundaries between generations. While others focus on biological lines of descent, Kisch argues for the importance of social networks, including education, and marriage and labor patterns (2012).
22 See chapter 8 for a brief introduction to the phonology of VASL.
23 See Brentari (1998), Perlmutter (1992), Sandler (1989), and Sandler and Lillo-Martin (2006) for proposed feature hierarchies in more established signed languages.
24 Both examples were given in precisely these terms in a lecture at the University of California, Berkeley by William Hanks on 2/18/09.
25 This insight draws on a synthesis of Peirce’s notion of indexicality and Spinoza’s concept of “memory” (1985 [1677]:465-467). Spinoza argues that bodies (in the most general, philosophical sense) are affected by one another (which the mind perceives) in the present, but associations build up in the present through past affections as well. If the human body has been affected by more than one body, and if the mind later imagines one of those bodies, the others will be recollected as well (ibid.:465). This is what memory is for Spinoza: “a certain connection of ideas involving the nature of things which are outside the human Body--a connection that is in the Mind according to the order and connection of the affections of the human Body” (465). This order that emerges out of the connections and affections of the human body is distinct from the order that emerges from the intellect. The intellect is the mode through which “the Mind perceives things through their first causes, and which is the same in all men” (ibid.:466). Because these two orders meet in the mind, our thoughts do not proceed from thing to thing based on the likeness between them, in themselves, but because of the association they have with each other according to the order of connections and affections of the body (ibid.). The mind perceives affections of the body, but it also perceives the ideas of those affections (ibid.:468). And so, “the Mind and the Body are one and the same Individual, which is conceived now under the attribute of thought, now under the attribute of extension” (467).
Chapter 3
1 This chapter draws on research that was conducted in several visits to Seattle: 2 months of fieldwork in the summer of 2006, 4 months of fieldwork in the spring of 2008, and 1 year of sustained dissertation fieldwork in 2010 and 2011. During each visit, I conducted interviews with DeafBlind people, people involved in their community and its development, and people who make decisions that affect DeafBlind people, such as city planners, advocates, and state officials. I also videorecorded interaction between DeafBlind people and visual interpreters as well as interaction between DeafBlind people. Lastly, I collected fieldnotes during each visit, sometimes written during an event I was observing and/or participating in, and sometimes written afterwards. Interviews and videorecordings of interaction were subsequently transcribed and analyzed. Nearly all of the DeafBlind people who were directly involved in my research were born Deaf and lost their vision slowly. Everyone who was involved in the pro-tactile workshops has Usher Syndrome, which is a genetic condition that causes congenital deafness and Retinitis Pigmentosa, which leads to a slow degeneration of the retina. The effect is a slow loss of vision from the periphery in. Rates of vision loss vary. However, the idea behind the pro-tactile movement is that anyone who cultivates tactile sensibilities will find a pro-tactile field of engagement easy to engage. Acquisition of the practices and of the language will feel natural and easy compared to the languages used by hearing and sighted people. Therefore, people who grew up hearing and lost both their hearing and sight--as is the case for people with Usher Syndrome Type III, or people who are injured in mid-life and become both deaf and blind--will not be excluded in any way from the pro-tactile movement or the tactile world it is generating.
2 On the topic of myths, taboos, and stereotypes about blind people, Frances A. Koestler (1976) describes the dual figuration of blind people in the popular imagination. On the one hand, they are figured as tragic and dependent, worthy of pity and charity. On the other, they are imbued with magical or extra-sensory powers (ibid.:7). She cites many examples, including a young woman who, it was claimed, could distinguish colors by smell (ibid.:5), or another who could distinguish them by touch (ibid.:6). Another woman could purportedly read the bible, thanks to her “eyeless sight” (ibid.). These and many more cases were shown to be hoaxes or misunderstandings in the end, and Koestler implies, have more to do with entertaining the public than with the lives of blind people. Koestler points out that “what most people continue to misunderstand, is that both acuteness of hearing and sensitivity to touch in blind people are not compensatory gifts of nature but the products of long, hard concentration and training” (ibid.:4). In other words, the sensory orientations of blind people are the outcome of practices which incorporate sensory dimensions. They are not reducible to a natural outcome of sensory capacity or change. Recognition of this fact is the starting point of this chapter. However, I am not only interested in showing that this is the case, but also in how, particular practices were shaped by social and historical forces, and how these developments set the stage for the pro-tactile movement.
3 Giddens’ distinction between “social integration” and “system integration” is useful here. In both cases, the notion of integration implies a “reciprocity of practices” which can be understood as “involving regularized relations of relative autonomy and dependence between the parties concerned” (1979:76). Reciprocity does not require “cohesion” but rather, demands asymmetries of various kinds. Social integration applies at the level of face-to-face interaction and it concerns reciprocity between actors (ibid.:76-7). System integration applies on the level of social systems, institutions, and other collectivities and it concerns reciprocity between groups (ibid.:77). The aim of the pro-tactile movement was to establish reciprocity among actors in face-to-face interaction in order to establish system integration with the broader society. One of the mechanisms of social integration is the “reflexive monitoring of conduct” (Giddens 1979:77). As we will see, this is precisely what led to new forms of social integration in the Seattle DeafBlind community as part of the pro-tactile movement.
4 See chapter 1 for a discussion of habitus.
5 No sighted people were allowed, apart from the research crew, which included three videographers, one of whom was the ethnographer. During one class, a few select sighted people were invited to give DeafBlind people the chance to try out their pedagogy. Ultimately, the goal was to slowly invite sighted people back in, insofar as they were open to cultivating tactile sensibilities and learning to do things the “DeafBlind way.”
6 There are many historical developments, important events, people, and issues that I was made aware of during the course of my research. However, I am highly selective in what I include here. I only address those early events and dynamics that are important for understanding how communication conventions among DeafBlind people developed. I do not include anything about the history of Seabeck camp, for example, which deserves an entire chapter of its own in the overarching history of the DeafBlind community. I include very little about the development of DBSC between the time it was founded and the time the pro-tactile movement was initiated there. I would like to thank everyone who shared their memories of these times, and I plan to incorporate those memories into a separate historical project to be pursued at a later date.
7 This information was accessed in 2011.
8 This date was taken from a timeline compiled by an administrator currently working at the Lighthouse who was also involved in the earliest stages of the DeafBlind program.
9 I found the original hand-drawn matrix in a box of pictures, old newsletters, and other materials at the Lighthouse while I was conducting fieldwork. It was hand-written and faded, and charmingly informal given its important role in the history of the community.
10 The term was originally taken from AADB, but has diverged since then as it has developed in Seattle.
11 DVR, DDD, and DSB.
12 People contrast this time with the increasingly professionalized role that interpreters have now. Back then, interpreters thought of themselves as political allies, fighting for civil rights first and working as interpreters second. Now, this would likely be seen as a conflict of interest and a breach of the interpreters’ code of ethics.
13 A well-known Deaf interpreter with native command of Visual ASL and a flair for eloquent, artistic renderings.
Chapter 4
1 See Chapter 1 for a discussion of habitus.
2 For example--in 2006, I conducted a series of interviews aimed at understanding what makes a good SSP, or visual interpreter. A DeafBlind person who had been involved for many years in training interpreters told me the following: “Really, you can’t train SSPs. [...] You can’t fix a bad attitude or a difficult personality. You can teach them what their attitude should be like, but if they can’t really internalize it, and make it part of who they are, then they will fail. There are habitual ways of being that are very difficult to change. [...] It has to do with whether the person sees themselves as above DeafBlind people or sees themselves as their equal. If they see themselves as superior to DeafBlind people, then it’s never going to work out to try to train them. But really, most of the SSPs who are really good, who have a good attitude, are also successful elsewhere and they leave the community to pursue other opportunities. The ones who are iffy at best are the ones we see consistently. [...] I think the only way to recruit the good SSPs is to acquire enough money to pay them well. But then, I’m sure it’s not only money.”
3 I have heard the term “pod” applied within the community to capture the scope of communication norms. Small groups form, comprised of sighted and DeafBlind people, and within those small kin or kin-like networks, communication conventions develop. For Adrijana, her “pod” was important at this stage, because the people in it knew how to communicate with her and had a shared vision for the kinds of communication practices that should spread. This was seen by some as “favoritism,” since she was essentially hiring her friends. But for Adrijana, it was largely a communication issue. Tactile communicative practices had become conventional enough within her pod that affect could circulate. She saw this as an essential part of moving the organization forward and reaching the people it was supposed to serve.
4 Another situation in which DeafBlind people have communicated directly with one another has been in families where DeafBlind people had older siblings who also had Usher Syndrome, or among couples who were both DeafBlind. One sighted person talked about going to a pro-tactile workshop in the summer of 2011, and as she was learning some of it, thought, “Who does this? Joe and Ellen [a DeafBlind, tactile couple] and whoever they’re talking to do that all the time. Also Jack and Eileen [who were siblings and were both DeafBlind] used to do that all the time--if I told Jack something, he would tell Eileen at the same time. Not if I was talking to Eileen, but if I was talking to Jack and Jack wanted to include Eileen. They did that all the time--maybe Jack would do that when he had vision, and then when he lost his vision, he continued doing it.” In both of these cases, it seems that when there were two DeafBlind people, one person would copy what a third participant was saying, thereby occupying the position of the sighted interpreter. This is not the same thing as signing with two dominant hands to two addressees at the same time; the latter became the convention for three-way communication in a pro-tactile context. When there were more than three people conversing, though, one person (the one to the right of the signer) would relay what was being said to the person to their right. Although communication practices like these--between DeafBlind siblings and spouses--were not identical to emergent pro-tactile conventions, they surely had an influence on them. Several of the participants in the pro-tactile classes that were held in 2010 and 2011 had siblings with Usher Syndrome. It is highly likely that they drew on their experiences in building the communicative repertoire that has since become more widely shared.
5 See chapter 3 on the history of the Lighthouse and the history of the “sheltered workshop.”
Chapter 5
1 I have also been doing this kind of work for many years, and I incorporate my own intuitions about it here.
2 See Chapter 1.
3 See chapter 1 and also Hanks 1990:137-187 via Goffman 1981, Levinson 1987, C. Goodwin 1981, M.H. Goodwin 1985.
Chapter 6
1 I recorded 120 hours of video data during these workshops. This video corpus was subsequently indexed, selectively transcribed, and thematically organized. This, in addition to detailed ethnographic field notes recorded in a variety of contexts, and the intuitions I have developed over many years of involvement in the Seattle DeafBlind community, form the empirical basis of the argument presented in this chapter.
2 O&M training was in place long before the pro-tactile movement. However, the pro-tactile social field favors people who can orient to their immediate environment without support from sighted persons. Therefore, the kinds of changes that occurred in people working with Marcus became more desirable, and contributed to the overall shift in the deictic field.
3 Marcus contracts with the Seattle Lighthouse for the Blind. Funding for his services comes from Metro King County and from grant funds secured by two employees of the Lighthouse (one of whom is DeafBlind). State agencies, such as the Department for Vocational Rehabilitation and the Department of Services for the Blind, also occasionally contract with the Lighthouse, but this money comes with restrictions that don’t make sense for DeafBlind people, so Marcus avoids relying on it too heavily. Unlike other O&M instructors in Seattle, Marcus uses Visual American Sign Language to communicate with his clients. In these sessions, I walked with Marcus behind his students. As they practiced, Marcus narrated their actions, explaining what they were doing right, what they were doing wrong, why he was or was not going to intervene, etc. I took detailed notes as we walked (while holding an umbrella, so my paper didn’t get too wet). I drew little maps of what was happening in moments of trouble. When I went home afterwards, I typed up these notes and drew the diagrams in a Word document.
4 These alternate constructs are discussed in more depth in the following chapter.
5 Goffman was working with research conducted by Gumperz and Cook-Gumperz over a period of several years, in which an attempt was made to list the motivations and functions of instances of code-switching in a particular bilingual setting. The list included: direct or reported speech, selection of recipient, interjections, repetitions, personal directness or involvement, new and old information, emphasis, separation of topic and subject, and discourse type (ibid.:127). In the process, they discovered “code-switching-like” behavior that didn’t involve the switching of actual codes. This is the initial point of departure for Goffman, and it leads him to the broader category of “footing,” which describes shifts in alignments between the speaker, his “projected self,” and his utterances--whether he is play-acting, serious, unsure of the truth of his statement or not, and so on.
6 This includes configurations like the one pictured in Figure 6.1 as a category member, but also variants in which participants were standing.
7 The rule was explained in terms of 4-way stop-signs. The person in contact with the right hand of the signer was responsible for copying their utterance for the fourth participant. This only began to be fluidly accomplished by a few of the workshop participants at the very end of the workshops.
8 Participants in this frame are wearing blindfolds. This was common during the workshops. It was a way of cultivating tactile sensibilities by blocking out disruptive, and often useless, visual stimuli.
Chapter 7
1 Eye-gaze, lips, and other body parts can also function this way in signed languages, just as they can in spoken languages (Enfield 2001, Sherzer 1973, Kendon 2004, Wilkins 2003, also see Meier and Lillo-Martin 2010:347-353).
2 See Section 6.2 in Chapter 6.
3 This claim has been generalized across signed languages. However, Berenz (2002) claims that if eye gaze is taken into account, there is a three-way distinction in LSB between first, second, and third person forms (cited in Pfau 2011:154).
4 See Pfau 2011 and Kita 2013 for more on pointing.
5 Also see Cormier 2002 and DeVos 2012 for interesting discussions about the integration of pointing signs into the grammar of signed languages.
6 By Deaf Interpreter, I mean a Deaf person with a native command of VASL, who works as an interpreter. Not an “interpreter for the Deaf.”
7 A reception signal, for Bühler, is the inverse of an “action signal” such as an imperative.
8 See Dancygier and Sweetser (2012) for more on viewpoint in language in multiple modalities.
9 Schutz’s reciprocity of perspectives can be summed up as follows: “I take it for granted--and assume my fellow man does the same--that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa).” (Schutz 1970:183).
10 The featural analysis is a more recent contribution to this long-standing debate; however, Mathur and Rathmann (2012) also find enough similarities between their approach and Padden’s original (1983) analysis to group them together under the “featural” heading.
11 Mathur elsewhere appeals to “referential space” (2000:75). That term would be more consistent with the perspective put forth here.
12 See p.143 for a breakdown.
13 This is modeled on Jackendoff’s architecture of grammar.
14 For example, as TASL develops further, it will be interesting to see whether phonological adjustment rules can be posited, and what their relation is to those found in VASL.
15 Signs that retrieve values exclusively from the deictic field, as opposed to combining grammatical and deictic elements, are “gestures.” But gesturing is only one kind of semiosis that retrieves values from the deictic field and the explanatory power of the deictic field extends far beyond gesture.
16 See Section 6.2 in Chapter 6.
17 I have outlined the pointing finger to make it more visible.
18 The emphasis comes from the strength of movement, which is not visible in the frame grabs, but is visible in the video clip from which the frame grabs were taken.
19 This sign could mean “measure,” “inch,” or “size.” I have glossed it as “inch” because Nina specifies this meaning by fingerspelling i-n-c-h later in the interaction.
20 Nina and Lee’s descriptions were shown to two users of ASL who live in California and have no contact with the Seattle DeafBlind community. Neither of them understood Lee’s description, and both of them understood Nina’s description (the descriptions were shown to them in that order). The first treated Lee’s description as a degraded version of visual ASL and told me that Nina’s description was obviously more clear and that Lee’s description “needed work.” The second person said that she couldn’t understand Lee’s description, and in particular found all of the signs articulated on the hand of the addressee unfamiliar and unintelligible. Even with the benefit of understanding some of the signs Lee used, she couldn’t tell what was going on in the interaction or what Lee was trying to get across. Then I showed her Nina’s description, and she understood with no difficulty that Lee had been describing a measuring tape.
21 Signal transposition, while not standard in basic participant frameworks, is imaginable if, for example, two Deaf people are trying to communicate in the dark. I have been told that children in Deaf residential schools sometimes signed on each other’s bodies, or used tactile reception, after the lights had been turned off at night. However, this form is not imaginable under any circumstances, even in non-standard participant frameworks.
Chapter 8
1 This research included the Seattle DeafBlind community, but also other places, such as Boston and Washington, D.C.
2 This is a sketch of a sketch. The original sketch was published in Klima and Bellugi (1979).
3 See chapter 6 and also Hanks 1990:137-187 via Goffman 1981, Levinson 1987, and Goodwin 1981.
4 I also use the term “basic participant frameworks,” which I treat as interchangeable with the term “participant frames.”
5 It is unclear whether tactile reception would have been comparably accurate prior to the pro-tactile movement in Seattle. There are many differences between Reed et al.’s research subjects and the members of the Seattle DeafBlind community who participated in this research. However, it would be interesting, taking these differences into account, to test whether or not accuracy is significantly higher now that a new, tactile language has begun to emerge.
6 The status of gesture as “supplementary” is contentious in current frameworks, and I do not mean to support Sapir’s position on this point.
7 Insofar as the fingerspelled word has not been borrowed into VASL. Also see Mulrooney (2002) for a more detailed discussion.
8 Stokoe compares facial expressions to suprasegmental features of spoken languages, such as stress and pitch. He considers these “metaspectual” parts of the language important, but he does not attend to them further.
9 All VASL examples in this section were taken, with permission, from an online ASL dictionary: www.lifeprint.com.
10 Also see Battison 1978:37 for further evidence.
11 In total, 69 Type I signs produced by three different signers comprise this set.
12 At one point, an instructor signs culture in a three-person configuration, and she does so by alternating her dominant and non-dominant hands, repeating the sign sequentially rather than producing both C-handshapes simultaneously. Both because the addressee has access to the non-dominant hand and because there is a temporal lag between the production of that sign and the next, class may be distinguishable from culture. But this is the kind of complicated inference that would be demanded less by a truly tactile language. Later in this same stretch of interaction, the same signer starts to sign culture a couple of times and replaces it with other signs instead of completing the sign. For example, she compares the DeafBlind way of doing something with how Deaf sighted people would do it: she signs deaf, then starts to sign culture, but signs “at Gallaudet” instead.
13 141 Type I signs, produced by four people were analyzed in this set.
14 Out of 51 tokens, 12 were not duplicated. Four of these signs were borderline Type I and Type II signs, like interpret and how. Although the dominant hand is active and the non-dominant hand is passive in these signs, the movement of the active hand affects movement in the passive hand in a way that is probably perceptible tactually. Other than this difference, the two articulators are mirrors of one another.
15 Eight of the 12 that were not duplicated were tokens of the sign right, and two were tokens of the sign can’t. These signs have been duplicated, both by alternation and by dropping, in other instances.
16 See Chapter 9 for a detailed account of these constraints.
Chapter 9
1 See Chapter 7.
2 However, iconicity may be very important for language acquisition, or for other processes.
3 See Chapter 4
4 Make sure to ask for permission first.
5 Up until this point, she has been alternating between addressees, producing a description for Lina while Allen waited or listened in as best he could, and then the reverse.
6 Locations in the TASL examples above include: the addressee’s palm, the addressee’s wrist, the addressee’s arm, the inside of the addressee’s elbow, the tip of the addressee’s middle finger, and the outer edge of the middle phalanx on the addressee’s index and middle fingers.
7 See section 8.2 on page 193 for more on this.
8 This is a reproduction of a figure from Hanks 2005a.
9 See Chapter 8 for more on this.
Bibliography
Aronoff, Mark, Meir, Irit, Padden, Carol and Sandler, Wendy (2003). Classifier Constructions and Morphology in Two Sign Languages. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum and Associates.
Aronoff, Mark, Meir, Irit, Padden, Carol and Sandler, Wendy (2004). In Geert Booij and Jaap van Marle (eds.), Yearbook of Morphology. The Netherlands: Kluwer.
Aronoff, Mark, Meir, Irit, Padden, Carol and Sandler, Wendy (2008). Holophrasis, compositionality and protolanguage. Special Issue of Interaction Studies, 133-149.
Bahan, Benjamin (2008). Upon the Formation of a Visual Variety of the Human Race. In H-Dirksen L. Bauman (ed.), Open Your Eyes: Deaf Studies Talking. Minneapolis: University of Minnesota Press.
Barthes, Roland (1984). The Rustle of Language. Berkeley: University of California Press.
Battison, Robbin (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bloom, Lois (1970). Language Development: Form and Function. Cambridge, MA: MIT Press.
Bourdieu, Pierre (1990 [1980]). The Logic of Practice. Stanford: Stanford University Press.
Brentari, Diane (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane, Coppola, Marie, Mazzoni, Laura and Goldin-Meadow, Susan (2012). When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory 30, 1-31.
Bühler, Karl (2001 [1934]). Theory of Language: The Representational Function of Language. Amsterdam/Philadelphia: John Benjamins.
Bynon, Theodora (1977). Historical Linguistics. Cambridge: Cambridge University Press.
Channon, Rachel (2004). The Symmetry and Dominance Conditions Reconsidered. Chicago Linguistic Society, 44-57. Chicago.
Chomsky, Noam (1965). Aspects of the Theory of Syntax. Cambridge: MIT Press.
Clark, John Lee (2014). Pro-Tactile: Bursting the Bubble. In Where I Stand: On the Signing Community and My DeafBlind Experience. Minneapolis: Handtype Press.
Cleve, John Vickery Van (2007). The Academic Integration of Deaf Children: A Historical Perspective. In John Vickery Van Cleve (ed.), The Deaf History Reader, 116-135. Washington, DC: Gallaudet University Press.
Coleman, Linda and Kay, Paul (1981). Prototype Semantics: The English Word Lie. Language 57, 26-44.
Collins, Steven and Petronio, Karen (1998). What Happens in Tactile ASL? In Ceil Lucas (ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities, 18-37. Washington, D.C.: Gallaudet University Press.
Collins, Steven Douglas (2004). Adverbial Morphemes in Tactile American Sign Language. Interdisciplinary Studies: Graduate College of Union Institute and University.
Comrie, Bernard (1989 [1981]). Language Universals and Linguistic Typology. Chicago: The University of Chicago Press.
Coppola, Marie and Senghas, Ann (2010). Getting to the point: How a simple gesture became a linguistic element in Nicaraguan signing. In Donna J. Napoli and Gaurav Mathur (eds.), Deaf Around the World. Oxford: Oxford University Press.
Cormier, Kearsy Annette (2002). Grammaticization of Indexic Signs: How American Sign Language Expresses Numerosity. Linguistics 204. Austin: The University of Texas at Austin.
Crystal, D. (1987). The Cambridge Encyclopedia of Language. Cambridge: Cambridge University Press.
Dancygier, Barbara and Sweetser, Eve (2012). Viewpoint in Language: A Multimodal Perspective. New York: Cambridge University Press.
Danesi, Marcel (1993). Vico, Metaphor, and the Origins of Language. Bloomington: Indiana University Press.
Descartes, Rene (1985 [1647]). The Passions of the Soul, Part One. The Philosophical Writings of Descartes. Cambridge: Cambridge University Press.
Dorian, N.C. (1981). Language Death: The Life Cycle of a Scottish Gaelic Dialect. Philadelphia: University of Pennsylvania Press.
Dudis, Paul G. (2004). Body Partitioning and Real Space Blends. Cognitive Linguistics 15, 223-238.
Eccarius, Petra and Brentari, Diane (2007). Symmetry and Dominance: A cross-linguistic study of signs and classifier constructions. Lingua 117, 1169-1201.
Edwards, Terra (2012). Sensing the Rhythms of Everyday Life: Temporal integration and tactile translation in the Seattle Deaf-Blind Community. Language in Society 41.
Enfield, Nick (2001). Lip Pointing? A Discussion of Form and Function with Reference to Data from Laos. Gesture 1, 185-212.
Enfield, Nick J. (2009). Composite Utterances. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Engberg-Pedersen, Elisabeth (1993). Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space in a Visual Language. Hamburg: Signum Press.
Fauconnier, Gilles and Turner, Mark (1998). Conceptual Integration Networks. Cognitive Science 22, 133-187.
Feldman, Heidi, Goldin-Meadow, Susan and Gleitman, L. (1978). Beyond Herodotus: The creation of a language by linguistically deprived deaf children. In A. Lock (ed.), Action, Symbol, and Gesture: The Emergence of Language. New York: Academic Press.
Fillmore, Charles (1975). An Alternative to Checklist Theories of Meaning. Berkeley Linguistics Society, 123-131. Berkeley: eLanguage.
Fillmore, Charles (1976). Frame Semantics and the Nature of Language. Annals of the New York Academy of Sciences 280, 20-32.
Fillmore, Charles J. (1968). The Case for Case. In Emmon Bach and Robert T. Harms (eds.), Universals in Linguistic Theory, 1-90. New York: Holt, Rinehart and Winston.
Friedman, Lynn (1977). Formational properties of ASL. In Lynn Friedman (ed.), On the Other Hand. NY: Academic Press.
Fusellier-Souza, I. (2006). Emergence and development of sign languages: from a semiogenetic point of view. Sign Language Studies 7, 30-56.
Gal, Susan and Irvine, Judith T. (1995). The Boundaries of Languages and Disciplines: How Ideologies Construct Difference. Social Research 62, 976-1001.
Giddens, Anthony (1979). Central Problems in Social Theory: Action, Structure and Contradiction in Social Analysis. Berkeley and Los Angeles: University of California Press.
Goffman, Erving (1964). The Neglected Situation. American Anthropologist 66, 133-136.
Goffman, Erving (1974). Frame Analysis: An Essay on the Organization of Experience. Boston: Northeastern University Press.
Goffman, Erving (1981). Footing. Forms of Talk. Oxford, UK: Basil Blackwell.
Goldin-Meadow, Susan (2010). Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua. Human Development 53, 303-311.
Goldin-Meadow, Susan and Feldman, Heidi (1977). The Development of Language-Like Communication Without a Language Model. Science 197, 22-24.
Goldin-Meadow, Susan and Morford, Marolyn (1985). Gesture in Early Child Language: Studies in Deaf and Hearing Children. Merrill-Palmer Quarterly 31, 145-176.
Goldin-Meadow, Susan and Mylander, Carolyn (1983). Gestural Communication in Deaf Children: Noneffect of Parental Input on Language Development. Science 221, 372-374.
Goodwin, Charles (1981). Conversational Organization: Interaction Between Speakers and Hearers. New York: Academic Press.
Goodwin, Charles (2000). Gesture, Aphasia, and Interaction. In David McNeill (ed.), Language and Gesture. Cambridge: Cambridge University Press.
Goodwin, Marjorie (1985). Byplay: The Framing of Collaborative Collusion. Annual Meeting of the American Anthropological Association. Washington, D.C.
Green, Elizabeth Mara (2014). The Nature of Signs: Nepal’s Deaf Society, Local Sign, and the Production of Communicative Sociality. Ph.D. Thesis. The University of California, Berkeley.
Grinevald, Colette (2000). A morphosyntactic typology of classifiers. In G. Senft (ed.), Systems of Nominal Classification. Cambridge: Cambridge University Press.
Groce, Nora Ellen (1985). Everyone Here Spoke Sign Language: Hereditary Deafness on Martha's Vineyard. Cambridge, MA: Harvard University Press.
Gumperz, John J. (1992). Contextualization and Understanding. In Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context, 229-252. Cambridge: Cambridge University Press.
Haiman, John (1985). Introduction. Natural Syntax: Iconicity and Erosion, 1-18. Cambridge: Cambridge University Press.
Hanks, William F. (1990). Referential Practice: Language and Lived Space among the Maya. Chicago: The University of Chicago Press.
Hanks, William F. (1996). Language and Communicative Practice. Boulder: Westview Press.
Hanks, William F. (2005a). Pierre Bourdieu and the Practices of Language. Annual Review of Anthropology 34.
Hanks, William F. (2005b). Explorations in the Deictic Field. Current Anthropology 46, 191-220.
Hanks, William F. (2009). Fieldwork on Deixis. Journal of Pragmatics 41, 10-24.
Hanks, William F. (2013). Counterparts: Co-presence and ritual intersubjectivity. Language and Communication 33, 263-277.
Harman, Gilbert (ed.) (1982). On Noam Chomsky. Amherst: University of Massachusetts Press.
Harris, Roy (2002). The Language Myth in Western Culture. Richmond, Surrey: Curzon Press.
Hockett, Charles F. (1960). The Origin of Speech. Scientific American.
Hulst, Harry van der (1996). On the Other Hand. Lingua 98, 121-143.
Jackendoff, Ray (1990). Semantic Structures. Cambridge: MIT Press.
Jakobson, Roman (1971 [1939]). Signe Zéro. The Collected Writings of Roman Jakobson, 211-219.
Keating, Elizabeth and Mirus, Gene (2003). Examining Interactions Across Language Modalities: Deaf Children and Hearing Peers at School. Anthropology and Education Quarterly 34, 115-135.
Kegl, Judy, Senghas, Ann and Coppola, Marie (2001). Creation through Contact: Sign language emergence and sign language change in Nicaragua. In Michel DeGraff (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. London: MIT Press.
Kendon, Adam (2004). Gesture: Visible Action as Utterance. New York: Cambridge University Press.
Kisch, Shifra (2008). “Deaf Discourse”: The Social Construction of Deafness in a Bedouin Community. Medical Anthropology: Cross-Cultural Studies in Health and Illness 27, 283-313.
Kisch, Shifra (2012). Demarcating generations of signers in the dynamic sociolinguistic landscape of a shared sign-language: The case of the Al-Sayyid Bedouin. In Ulrike Zeshan and Connie de Vos (eds.), Sign Languages in Village Communities. Berlin: de Gruyter.
Klima, Edward S. and Bellugi, Ursula (1979). The Signs of Language. London: Harvard University Press.
Koestler, Frances A. (1976). The Unseen Minority: A Social History of Blindness in the United States. New York: McKay.
Kuschel, Rolf (1973). The Silent Inventor: The Creation of a Sign Language by the Only Deaf-Mute on a Polynesian Island. Sign Language Studies 3, 1-27.
Labov, William (1972). The Study of Language in Its Social Context. Sociolinguistic Patterns. Philadelphia: University of Pennsylvania Press.
Lakoff, George (1987). Women, Fire, and Dangerous Things. Chicago: University of Chicago Press.
Lakoff, George and Johnson, Mark (1980). Metaphors We Live By. Chicago and London: The University of Chicago Press.
Lane, Harlan, Hoffmeister, Robert and Bahan, Ben (1996). A Journey into the Deaf World. San Diego: Dawn Sign Press.
Levinson, Stephen C. (1983). Pragmatics. Cambridge: Cambridge University Press.
Levinson, Stephen C. (1987). Putting Linguistics on a Proper Footing: Explorations in Goffman's Concepts of Participation. In P. Drew and A. Wootton (eds.), Goffman: An Interdisciplinary Appreciation, 161-227. Oxford: Polity Press.
Liddell, Scott K. (2000). Blended Spaces and Deixis in Sign Language Discourse. In David McNeill (ed.), Language and Gesture. Cambridge: Cambridge University Press.
Liddell, Scott K. (2003). Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Love, Nigel (2006). Language and History: Integrationist Perspectives. London: Routledge.
Mandel, Mark Alan (1981). Phonotactics and Morphophonology in American Sign Language. Linguistics 323. Berkeley: The University of California, Berkeley.
Mathur, Gaurav (2000). Verb Agreement as Alignment in Signed Languages. Dissertation. Massachusetts Institute of Technology.
Mathur, Gaurav and Rathmann, Christian (2010). Verb agreement in sign language morphology. In D. Brentari (ed.), Sign Languages: A Cambridge Language Survey, 173-196. Cambridge: Cambridge University Press.
Mathur, Gaurav and Rathmann, Christian (2012). The features of verb agreement in signed languages. In R. Pfau, M. Steinbach and B. Woll (eds.), Handbooks of Linguistics and Communication Sciences on Sign Languages, 136-157. Berlin: Mouton de Gruyter.
Mayberry, Rachel I. (1992). The cognitive development of deaf children: Recent insights. In S.J. Segalowitz and I. Rapin (eds.), Handbook of Neuropsychology. Amsterdam: Elsevier.
McCawley, James D. (1976). Syntax and Semantics 7: Notes from the Linguistic Underground. New York: Academic Press.
McDonald, B. (1982). Aspects of the American Sign Language Predicate System. Buffalo: University of Buffalo.
Meier, Richard P. (1990). Person Deixis in American Sign Language. In Susan D. Fischer and Patricia Siple (eds.), Theoretical Issues in Sign Language Research. Chicago: The University of Chicago Press.
Meier, Richard P. and Lillo-Martin, Diane (2010). Does Spatial Make It Special? On the Grammar of Pointing Signs in American Sign Language. In Donna B. Gerdts, John C. Moore and Maria Polinsky (eds.), Hypothesis A/Hypothesis B: Linguistic Explorations in Honor of David M. Perlmutter. London: MIT Press.
Meier, Richard P. and Lillo-Martin, Diane (2012). Response: The apparent reorganization of gesture in the evolution of verb agreement in signed languages. Theoretical Linguistics 38.
Meir, Irit (2002). A cross-modality perspective on verb agreement. Natural Language and Linguistic Theory 20, 413-450.
Milroy, James (2001). Language ideologies and the consequences of standardization. Journal of Sociolinguistics 5, 530-555.
Morgan, Gary and Woll, Bencie (2007). Understanding sign language classifiers through a polycomponential approach. Lingua 117, 1159-1168.
Morgan, Hope E. and Mayberry, Rachel I. (2012). Complexity in two-handed signs in Kenyan Sign Language. Sign Language & Linguistics 15, 147-174.
Morris, Charles (1971 [1938]). Foundations of the Theory of Signs. Chicago: University of Chicago Press.
Muehlmann, Shaylih (2013). Where the River Ends: Contested Indigeneity in the Mexican Colorado Delta. Durham: Duke University Press.
Mulrooney, Kristin J. (2002). Variation in ASL fingerspelling. In Ceil Lucas (ed.), Turn-Taking, Fingerspelling, and Contact in Signed Languages. Washington, D.C.: Gallaudet University Press.
Napoli, Donna Jo and Wu, Jeff (2003). Morpheme structure constraints on two-handed signs in American Sign Language: Notions of symmetry. Sign Language & Linguistics 6, 123-205.
Newport, Elissa (2001 [1999]). Reduced Input in the Acquisition of Signed Languages: Contributions to the Study of Creolization. In Michel DeGraff (ed.), Language Creation and Language Change: Creolization, Diachrony, Development, 161-178. Cambridge, MA: MIT Press.
Nonaka, Angela M. (2007). Emergence of an Indigenous Sign Language and a Speech/Sign Community in Ban Khor, Thailand. Los Angeles: University of California, Los Angeles.
Nuccio, Jelica and Smith, Theresa B. (2010). Providing and Receiving Support Services: Comprehensive Training for Deaf-Blind Persons and Their Support Service Providers. In Robert I. Roth (ed.), Seattle, WA.
Nyst, Victoria (2007). A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation. University of Amsterdam.
Padden, Carol (1990). The Relation Between Space and Grammar in ASL Verb Morphology. In C. Lucas (ed.), Proceedings of the Second International Conference on Theoretical Issues in Sign Language Research. Washington, D.C.: Gallaudet University Press.
Padden, Carol A. (1983). Interaction of Morphology and Syntax in American Sign Language. Ph.D. Thesis, Linguistics. San Diego: The University of California, San Diego.
Padden, Carol A. and Perlmutter, David M. (1987). American Sign Language and the architecture of phonological theory. Natural Language and Linguistic Theory 5, 335-375.
Peirce, Charles Sanders (1955/1940 [1893-1910]). Logic as Semiotic: The Theory of Signs. In Justus Buchler (ed.), Philosophical Writings of Peirce. New York: Dover.
Perlmutter, David M. (1992). Sonority and Syllable Structure in American Sign Language. Linguistic Inquiry 23.
Petronio, Karen and Dively, Valerie (2006). YES, #NO, Visibility, and Variation in ASL and Tactile ASL. Sign Language Studies 7.
Pfau, Roland (2011). A point well taken: On the typology and diachrony of pointing. In Donna J. Napoli and Gaurav Mathur (eds.), Deaf Around the World. Oxford: Oxford University Press.
Polich, Laura (2005). The Emergence of the Deaf Community in Nicaragua. Washington, D.C.: Gallaudet University Press.
Quinto-Pozos, David (2002). Deictic Points in the Visual-Gestural and Tactile-Gestural Modalities. In Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages, 442-467. Cambridge: Cambridge University Press.
Quinto-Pozos, David (2007). Why Does Constructed Action Seem Obligatory? An Analysis of Classifiers and the Lack of Articulator-Referent Correspondence. Sign Language Studies 7, 458-506.
Rathmann, Christian and Mathur, Gaurav (2002). Is verb agreement the same crossmodally? In Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press.
Reed, Charlotte M., Delhorne, Lorraine A., Durlach, Nathaniel I. and Fischer, Susan D. (1990). A Study of the Tactual and Visual Reception of Fingerspelling. Journal of Speech, Language, and Hearing Research 33, 786-797.
Reed, Charlotte M., Delhorne, Lorraine A., Durlach, Nathaniel I. and Fischer, Susan D. (1995). A study of the tactual reception of Sign Language. Journal of Speech and Hearing Research 38.
Rigler, David (1993). Letter to the Editor. The New York Times.
Rochester, Junius (2004). Seattle's Best-Kept Secret: A History of the Lighthouse for the Blind. Seattle: Tommie Press.
Russo, Tommaso and Volterra, Virginia (2005). Comment on “Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua.” Science 309.
Rymer, Russ (1993). Genie: A Scientific Tragedy. New York: HarperCollins.
Sadock, Jerrold M. (1985). Autolexical Syntax: A proposal for the treatment of noun incorporation and similar phenomena. Natural Language and Linguistic Theory 3, 379-439.
Sandler, Wendy (1989). Markedness in American Sign Language handshapes: A componential analysis. In H.G. van der Hulst and J. van de Weijer (eds.), HIL Phonology Conference. Leiden: Leiden University Press.
Sandler, Wendy (1993). Hand in hand: The roles of the nondominant hand in Sign Language Phonology. The Linguistic Review 10, 337-390.
Sandler, Wendy, Aronoff, Mark, Meir, Irit and Padden, Carol (2011). The Gradual Emergence of Phonological Form in a New Language. Natural Language and Linguistic Theory, 503-543.
Sandler, Wendy, Aronoff, Mark, Padden, Carol and Meir, Irit (Forthcoming). Language Emergence: Al-Sayyid Bedouin Sign Language. In Nick Enfield, Paul Kockelman and Jack Sidnell (eds.), Cambridge Handbook of Linguistic Anthropology. Cambridge: Cambridge University Press.
Sandler, Wendy and Lillo-Martin, Diane (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sandler, Wendy, Meir, Irit, Padden, Carol and Aronoff, Mark (2005). The Emergence of Grammar: Systematic Structure in a New Language. Proceedings of the National Academy of Sciences of the United States of America 102, 2661-2665.
Sapir, Edward (1949 [1934]). The Grammarian and His Language. In David Mandelbaum (ed.), Selected Writings of Edward Sapir in Language, Culture, and Personality, 564-568. Berkeley: University of California Press.
Sapir, Edward (1995 [1927]). The Unconscious Patterning of Behavior in Society. In Ben Blount (ed.), Language, Culture, and Society, 29-42. Long Grove, Illinois: Waveland.
Saussure, Ferdinand de (1972 [1915]). Course in General Linguistics. New York: McGraw Hill.
Schembri, Adam (2003). Rethinking ‘classifiers’ in signed languages. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages, 3-34. Mahwah, NJ: Erlbaum.
Schembri, Adam, Jones, Caroline and Burnham, Denis (2005). Comparing Action Gestures and Classifier Verbs of Motion: Evidence from Australian Sign Language, Taiwan Sign Language, and Nonsigners' Gestures without Speech. Journal of Deaf Studies and Deaf Education 10, 272-290.
Schick, Brenda (1990). Classifier Predicates in American Sign Language. International Journal of Sign Linguistics 1, 15-40.
Schutz, Alfred (1970). On Phenomenology and Social Relations. Chicago and London: The University of Chicago Press.
Scott, Robert A. (1969). The Making of Blind Men: A Study of Adult Socialization. New York: Russell Sage Foundation.
Senghas, Ann (2000 [1999]). The Development of Early Spatial Morphology in Nicaraguan Sign Language. In S.C. Howell, S.A. Fish and T. Keith-Lucas (eds.), The Proceedings of the Boston University Conference on Language Development. Boston: Cascadilla Press.
Senghas, Ann (2010). The Emergence of Two Functions for Spatial Devices in Nicaraguan Sign Language. Human Development 53, 287-302.
Senghas, Ann and Coppola, Marie (2001). Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar. Psychological Science 12.
Senghas, Richard (2003). New Ways to be Deaf in Nicaragua: Changes in Language, Personhood, and Community. In L. Monaghan, K. Nakamura, C. Schmaling and G.H. Turner (eds.), Many Ways to be Deaf: International, Linguistic, and Sociocultural Variation, 260-282. Washington D.C.: Gallaudet University Press.
Shepard-Kegl, Judy (1985). Locative Relations in American Sign Language: Word Formation, Syntax, and Discourse. Cambridge: MIT.
Sherzer, Joel (1973). Verbal and Nonverbal Deixis: The Pointed Lip Gesture among the San Blas Cuna. Language in Society 2, 117-131.
Sidnell, Jack and Enfield, Nick J. (2012). Language Diversity and Social Action. Current Anthropology 53.
Silverstein, Michael (1996). Monoglot "Standard" in America: Standardization and Metaphors of Linguistic Hegemony. In D. Brenneis (ed.), The Matrix of Language: Contemporary Linguistic Anthropology. Boulder, CO: Westview.
Slobin, Dan I., Hoiting, Nini, Kuntze, Marlon, Lindert, Reyna, Weinberg, Amy, Pyers, Jennie, Anthony, Michelle, Biederman, Yael and Thumann, Helen (2003). A cognitive/functional perspective on the acquisition of “classifiers”. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages, 271-296. Mahwah, NJ: Erlbaum.
Sperber, Dan and Wilson, D. (1986). Relevance. Cambridge: Harvard University Press.
Spinoza, Baruch (1985 [1677]). Descartes' Principles of Philosophy. In Edwin Curley (ed.), The Collected Works of Spinoza. Princeton, NJ: Princeton University Press.
Stokoe, William, Casterline, Dorothy and Croneberg, Carl (1965). A Dictionary of American Sign Language on Linguistic Principles. Silver Spring, Maryland: Linstok Press.
Stokoe, William C. (2005 [1960]). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Journal of Deaf Studies and Deaf Education 10.
Supalla, Ted (1982). Structure and Acquisition of Verbs of Motion and Location in ASL. Unpublished Doctoral Dissertation. San Diego: University of California, San Diego.
Supalla, Ted (1986). The classifier system in American Sign Language. In C. Craig (ed.), Noun Classes and Categorization, 181-214. Amsterdam: John Benjamins.
Taub, Sarah (2001). Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Toolan, Michael (1999). Integrationist linguistics in the context of 20th century theories of language: Some connections and projections. Language and Communication 19, 97-108.
Trudgill, Peter (2008). Colonial dialect contact in the history of European languages: On the irrelevance of identity to new-dialect formation. Language in Society 37, 241-280.
Urciuoli, Bonnie (1995). Language and Borders. Annual Review of Anthropology 24, 525-546.
Vos, Connie de (2012). Sign-Spatiality in Kata Kolok: How a Village Sign Language of Bali Inscribes Its Signing Space. PhD Thesis. Nijmegen: Radboud University.
Washabaugh, William (1991). Providence Island Sign: A Context-Dependent Language. Anthropological Linguistics 20.
Wilkins, David (2003). Why Pointing with the Index Finger Is Not a Universal (in Socio-Cultural and Semiotic Terms). In Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 117-215.
Yuasa, Etsuyo and Sadock, Jerry M. (2002). Pseudo-subordination: A mismatch between syntax and semantics. Journal of Linguistics 38, 87-111.
Zeshan, Ulrike (2003). 'Classificatory' Constructions in Indo-Pakistani Sign Language: Grammaticalization and Lexicalization Processes. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Signed Languages, 113-141. London: Erlbaum.
Zeshan, Ulrike and Vos, Connie de (eds.) (2012). Sign Languages in Village Communities. Boston/Berlin: de Gruyter.
In contrast, language boundaries come into sharp relief as objects of socio-political valuation (e.g. Gal and Irvine 1995, Milroy 2001, Silverstein 1996, Trudgill 2008, Urciuoli 1995). However, the pro-tactile movement is not driven by metalinguistic reflection or valuation, but rather, by a shared desire for immediacy and co-presence (Clark 2014, Chapters 3 and 4). In order to achieve tactile immediacy, DeafBlind people have reflected upon and changed their communication practices; the emergence of new grammatical subsystems is an unintended consequence of those efforts. Therefore, while social and political dynamics do affect the development of the language, they do not do so through language-planning, shifts in language ideology, or other forms of metalinguistic discourse. Rather, changes in the grammar are collateral effects(6) of changes in social and interactional processes.
In arguing that TASL is emerging as a distinct language, I am making two claims. First, several grammatical subsystems are currently diverging from VASL. At this stage of development, changes are most evident in the deictic and phonological systems. However, there are clear implications for morphological and syntactic systems as well. My second claim is that these changes are a result of a dis-articulation of VASL from the interactional and social fields it has grown up in, and a re-articulation of idiosyncratic, simplified versions of VASL to new, historically emergent fields.
I am therefore claiming that a language is a configuration of grammatical subsystems embedded in historically and interactionally constituted fields of activity. In other words, a language is not strictly linguistic. However, it cannot be reduced to ideologies about language or meaning-effects that emerge out of interaction, either. Rather, a language as a whole must be grasped in relations that cohere between social, interactional, and linguistic phenomena. As these relations tighten into increasingly restricted configurations via contextual integration, semiosis becomes more “language-like.” Sapir assumed a process like contextual integration when he claimed that all languages are “formally complete”:
By formal completeness I mean a profoundly significant peculiarity which is easily overlooked [ ... ] [A] language is so constructed that no matter what any speaker of it may desire to communicate, no matter how original or bizarre his idea or his fancy, the language is prepared to do his work (1949 [1934]:153).
Like Sapir, I am claiming that “a language” (7) should be seamlessly embedded in its contexts of use so that it can do all of the work its speakers require of it. However, under conditions of significant sensory change, this claim may not be valid. In Seattle, VASL could no longer do what its DeafBlind users required of it. This highlights the fact that languages can go through stages where they are not, in Sapir’s sense, “formally complete,” or seamlessly integrated with their contexts of use. Integration must therefore be understood as the outcome of socio-historical and interactional processes, and not as an inherent property of all languages.
Most members of the Seattle DeafBlind community were born sighted and slowly lose vision. Many of them acquired VASL as children, but over time, the language became increasingly difficult to use. Prior to the pro-tactile movement, DeafBlind individuals compensated for those difficulties in increasingly idiosyncratic ways as vision deteriorated. This led to a splintering of the language and the pragmatic norms necessary for its use. For example, in the early stages of the pro-tactile movement, most members of the community were still resistant to new communication practices. Lee, one of the leaders of the movement, explained how people insisted on keeping their own idiosyncratic strategies, rather than adhering to emergent pro-tactile norms:
A month ago, I was with [Janet], and I ended up interpreting what people were saying because I wasn’t lost, but she was totally lost and frustrated, and [she was] complaining that people weren’t following all of the many ridiculous rules that you have to follow to make visual communication with her possible. She put it in terms of “respect.” She said people weren’t respecting her. They shouldn’t walk quickly by--it’s confusing. They should stand at the right distance. They should sign slowly ... It is not reasonable to expect people to do that, and they don’t. So the result is that she’s left out, and is getting more and more frustrated as time goes by ... I have already become pro-tactile. She won’t embrace the pro-tactile movement, and she’s getting older. She must be in her 50s by now. It is really incredible.
Prior to the pro-tactile movement, these kinds of “ridiculous” idiosyncratic rules were the only option, and, as Lee noted, they were usually not followed by others. Over time, this led to difficulties in language-use, an increase in social isolation, and ultimately, the un-learning of the language itself.
Evidence of un-learning can be found in the way DeafBlind signers produce utterances. For example, older DeafBlind signers stop expressing grammatical and prosodic cues on the face, leading to a “flat” stream of production, which can be difficult to parse (8). In some cases, compensatory cues are added, such as substituting the manual sign no for negation, where it would otherwise be expressed with the face and/or head (Petronio and Dively 2006). However, it is more often the case that DeafBlind listeners are expected to fill in missing information via pragmatic maneuvering of various kinds, including inference, guessing, and requests for more information. This only works for so long, and at some point they not only stop filling in the cues as listeners; they stop producing them as well.
Maintaining the psychological reality of the language also means remembering visually accessible forms, which correspond to differences in meaning. But as visual memory fades, the ability to maintain those connections is affected, and as a result, the language itself deteriorates. My evidence for this, presented throughout the dissertation, is largely ethnographic. DeafBlind people who have been blind for many years do not produce VASL signs the way that sighted people do, and they can be exceedingly difficult to communicate with. One signer, for example, who is in his 70s, produces lengthy pauses between individual signs and very few facial expressions. It is difficult for me, when listening to this signer, to understand what the topic is, when something happened, who did what to whom, and other basic information that one would expect, given a shared code, to be unambiguous.
In addition, words that are commonplace among the sighted, such as “email” and “computer,” are not associated with any meaning at all for some DeafBlind signers, and attempts to explain their meanings often fail if there is a lack of experience with the objects or processes represented by those signs. Lexical and grammatical resources deteriorate in idiosyncratic ways for each DeafBlind person. This is an effect of vision loss, but it is also an effect of different degrees of isolation from the world and from things in the world. For example, in Figure 1.1, a visual interpreter is describing a sculpture in downtown Seattle to a DeafBlind man whom I call Roman. The sculpture (Figure 1.3) is a representation of a man holding a hammer. His arm is moving very slowly, up and down, hammering in slow-motion.
First the interpreter describes the motion of the arm on the sculpture by combining a conventional hand configuration (the fist in Figure 1.1a, represented schematically in Figure 1.2.), with context-sensitive movements that represent the way the arm of the sculpture moves. Together, these elements characterize the referent according to its relevant dimensions. The interpreter then adds a deictic to direct the DeafBlind person’s attention to, and individuate, the referent (Figure 1.1b). However, this description does not inspire immediate recognition for Roman. After a few seconds of searching for the referent (Figure 1.1c), and apparently failing to locate it, he says, “I remember I saw that sculpture about ten years ago.”
Modes of access that allow the interpreter to link the hand configuration to its referent are tenuous for Roman. He is relying almost entirely on faded, flat memories that are not likely to conjure the sculpture’s towering size, its immutable presence--black against a sharp, grey sky--or the striking temporal juxtaposition of the arm, slowly sliding back and forth against the fast-paced activity in the city around it. The interpreter’s description can only be received by Roman as uprooted and abstract. He can to some degree or another understand the meaning of the interpreter’s words, but he is alienated from the visual field the description is meant to articulate to. Roman is receiving utterances that are detached from the material particularities of the objects to which they refer. This form of abstraction, occurring across a group of language users over time, leads to a reduction in semantic complexity.
According to Fillmore (1976), the meanings of words are linked to other words via interactional and cognitive frames. An interactional frame structures things like greetings and leave-takings, and a cognitive frame links elements in a prototypical interaction. For example, the frame for a commercial transaction links elements like a buyer, a seller, the goods, the money, and so on. Activation of the entire system is a prerequisite to understanding the meaning of any one word within it. Aspects of frame and setting activate one another in the minds of people who have learned the conventional associations, and learning these associations is one of the main activities of language acquisition in early childhood (Fillmore 1976). Over time, as new domains of experience are linked to old frames in a given speech community, the frames themselves grow more complex. Therefore, Fillmore argues that frame semantics can be used to gain insight into the evolution of language by analyzing nascent linguistic systems, such as pidgins, creoles, and child language, in terms of relative frame complexity. The more complex the system of frames, the more developed the language (ibid.:30).
For Roman and for other DeafBlind people in Seattle, frame complexity in VASL has decreased. That is, when Roman receives a form in VASL, fewer and fewer associations are activated for him, and over time, entire patches of the semantic field go dark. This slow loss of frame complexity in VASL can be compared to a reversal of the process of language acquisition, as Fillmore conceives of it. I call this process of language acquisition in reverse “semantic erosion.”
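Fillmore’s picture, and its reversal, can be sketched schematically. The following toy model is purely illustrative (the frame, its roles, and its word lists are invented for this example, not drawn from FrameNet or from my data): a frame links roles and evoking words, and semantic erosion appears as the progressive severing of those links.

    # Illustrative sketch only: a toy model of Fillmorean frames and of
    # "semantic erosion" as the loss of frame links. All names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        name: str
        roles: tuple                                    # linked elements
        evoking_words: set = field(default_factory=set)

    commercial_transaction = Frame(
        name="commercial_transaction",
        roles=("buyer", "seller", "goods", "money"),
        evoking_words={"buy", "sell", "pay", "cost"},
    )

    def activate(word, frames):
        """Understanding a word presupposes the whole frame it evokes:
        'pay' activates buyer, seller, goods, and money together."""
        return [f for f in frames if word in f.evoking_words]

    def erode(frame, lost_words):
        """Semantic erosion: a form ceases to activate its associations,
        and a patch of the semantic field 'goes dark'."""
        frame.evoking_words -= lost_words

    print(activate("pay", [commercial_transaction]))  # frame is activated
    erode(commercial_transaction, {"pay", "cost"})
    print(activate("pay", [commercial_transaction]))  # [] -- link severed

On this toy picture, frame complexity is simply the density of such links; erosion removes them, one signer at a time, in idiosyncratic ways.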
Semantic erosion presents an additional layer of difficulty for DeafBlind people attempting to use VASL. Not only is the sign increasingly difficult to perceive and to distinguish from other signs, the meanings associated with signs deteriorate as well. Across a group of language users, the cumulative effect can be thought of as a slow leak, through which semantic content is evacuated in idiosyncratic ways. The root of the problem is the fundamentally non-reciprocal nature of communication for DeafBlind individuals. Everyone they communicated with prior to the pro-tactile movement had visual access to the immediate environment and, for the most part, communicated as if others did as well.
Reciprocity has been identified as a key requirement for the emergence of signed languages more generally. For example, when deaf children grow up without access to a visually accessible language, they often create “homesign” systems (Goldin-Meadow and Feldman 1977). These systems do not become full-fledged languages because they are not shared by a community of users (Goldin-Meadow 2010:306)(9). Deaf children use homesigns to communicate with hearing caregivers and members of their family. However, just as sighted interpreters go on using VASL, hearing caregivers go on speaking English. Whatever co-speech gesture they use is integrated into a coherent communicative stream (Goldin-Meadow and Mylander 1983). Since the speech stream is inaccessible to the deaf homesigners, they receive partial and disordered communicative input, which they compensate for in different ways, generating idiosyncratic, but internally consistent, communication systems.
When homesigners are brought together, for example, in a school, these systems can develop into a full-fledged language (Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001, Goldin-Meadow 2010). Two prerequisites have been identified as crucial for this transformation to take place. First, the system must be produced and received reciprocally within a community of users (Goldin-Meadow 2010:306). Second, the system must be transmitted from cohort to cohort (Senghas 2000 [1999]) or generation to generation (Sandler et al. 2005).
The Seattle DeafBlind community has existed since the early 1980s. However, a tactile language did not begin to emerge until 2010, when communication became reciprocal. This confirms that reciprocity is required, and it pushes this requirement beyond the exchange of the semiotic system itself, to include a more far-reaching “reciprocity of perspectives” (Schutz 1970:183). The reciprocity of perspectives is not a descriptive fact, but a principle that people orient to--they act as if there were a certain degree of similarity between their perspective and that of their interlocutor. At the perceptual level, this includes assumptions about the mutual accessibility of objects, people, signs, and events in the immediate environment, so that when I say “this,” while pointing to an object, I assume that my interlocutor can see what I am pointing to, in more or less the same way that I see it.
In the DeafBlind community, this “as if” clause was pushed to its breaking point. While differences in sensory capacity, sensory orientation, social roles, status, biography, and memory all affect the ability of participants to establish reciprocity (Hanks 2013), this case highlights the fact that perspectives must be, to some degree, actually reciprocal. The pro-tactile movement legitimized tactile modes of access to the immediate environment, thereby building a foundation for a broader, tactually grounded “perspective.” This made it possible for DeafBlind people to evaluate qualities such as pressure, speed, rhythm, and texture against new frames of social value. DeafBlind people no longer took instruction on how to hold their body or orient their gaze in order to give sighted people the impression that they were worthwhile, interesting, or legible. Instead, they began to instruct others on how to cultivate tactile sensibilities so that value and worth could be apprehended and evaluated in tactile terms. Within these frames of value, the social field took on a coherent and asymmetric organization--some DeafBlind signers emerged as legitimate leaders, imbued with more authority than others. Their authority was applied in judgments about the “correctness” of particular linguistic forms and interactional conventions, which contributed to processes of conventionalization. In chapter 2, I argue that similar processes can be identified in other cases of language emergence as well. For example, in Nicaraguan Sign Language and Al-Sayyid Bedouin Sign Language, some styles, genres, or modalities of language became legitimate ways of being educated, smart, interesting, or “culturally Deaf,” while others did not. Insofar as the language marks social distinctions like this, and can be used to access desirable positions in the social field, it will continue to organize idiosyncratic perspectives as social actors struggle and compete for resources.
(9) Green (2014) and Goodwin (2000) show many of the ways that radically non-reciprocal linguistic competence can be overcome (or not) via social and interactional means. I am arguing that similar procedures can act not only as a means of circumventing asymmetries, but also as a means of correcting them via augmentation of the linguistic system itself.
In 2007, as part of the pro-tactile movement, communicative expertise was redistributed within the community, contributing to the reorganization of the social field. DeafBlind people began to turn to one another to solve communication problems rather than relying on sighted people, and in doing so, they realized that new communication conventions would need to be established. Toward this end, a series of 20 pro-tactile workshops was organized by two DeafBlind leaders for 11 DeafBlind participants. The goal of the workshops was to establish new conventions for direct, reciprocal, tactile communication, thereby reducing dependence on sighted people. As part of my dissertation research, I collected approximately 120 hours of videorecordings of interaction and language use among DeafBlind people during the workshops. Over the course of ten weeks, these new communication practices contributed to a grammatical divergence between TASL and VASL, and ultimately, to the emergence of a new, tactile language. The main goal of this dissertation is to understand this process and to establish a framework that is useful for understanding the relationship between language and context in other cases of language emergence as well.
1.1 Language Emergence and the Problem of Context
Recent approaches to language emergence have focused on the innate capacities of the human mind, as distinct from those of other primates (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985, A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). Innate structures are, by definition, present prior to activity. Therefore, in order to discern the nature and organization of these structures, context must be factored out to the greatest degree possible. Analytically, this amounts to a problem of extraction, since the innate structures of the mind are only visible via observation of language in use and other forms of activity.
For example, Sandler et al. (2005) report that Al-Sayyid Bedouin Sign Language developed a consistent word order in the space of two generations. They argue that word order functions syntactically to signal relations between a verb and its arguments, and they conclude with the following reflection:
Of greater significance to us than any particular word order is the discovery that, very early in the life history of a language, a conventionalized pattern emerges for relating actions and events to the entities that perform and are affected by them, a pattern rooted in the basic syntactic notions of subject, object, and verb or predicate. Such conventionalization has the effect of liberating the language from its context or from relying on the semantic relations between a verb and its arguments (Sandler et al. 2005:2664-5).
Upon reporting these findings, the authors were asked whether word order patterns in ABSL are driven by an emergent syntactic system or by patterns in discourse (10). This question is important because if patterns in word order are driven by discourse, their emergence cannot be attributed to the innate capacities of the mind alone.
The underlying problem is not new, nor is it specific to language emergence. It arises, for example, in the problematic interaction of Saussure’s principles of arbitrariness and linearity (1972 [1915]:66-70). For Saussure, there is no abstract syntax that can be separated from co-present sound-patterns in a sequence, such as a sentence, or a “syntagma” (1972 [1915]:121). Value accrues to a unit in a syntagma by virtue of what precedes and/or what follows that unit. The units, in order to be related in this way, must be co-present. In other words, “syntagmatic relations hold in praesentia” (ibid.:122). The principle of linearity, in tandem with the principle of arbitrariness, governs langue, and yet linearity cannot be entirely extracted from the realm of parole: “Where syntagms are concerned ... one must recognize the fact that there is no clear boundary separating the language, as confirmed by communal usage, from speech, marked by the freedom of the individual. In many cases it is difficult to assign a combination of units to one or the other. Many combinations are the product of both, in proportions which cannot be accurately measured” (ibid.:123).
The semiotician Charles Morris recognizes a related analytic problem when he claims that syntax is constituted in the relations of sign vehicles to sign vehicles, and yet it also provides a set of rules through which interpreters respond to objects (1971 [1938]:26). The solution is to posit a tension between “conventionalism” and “empiricism,” which accounts for “the dual control of linguistic structure” (ibid.:12-13). Along these same lines, Jakobson notes that the order in which words are organized is not entirely arbitrary with respect to the phenomena they refer to, since “the temporal order of speech events tends to mirror the order of narrated events in time or in rank” (1971:27). These problems are encountered any time the analyst attempts to move from language-use to abstract syntactic patterns, and therefore, they have resurfaced often as the field of linguistics has developed (e.g. Chomsky 1965, Fillmore 1968, Searle 1982 [1974], Sadock 1985, Jackendoff 1990, Yuasa and Sadock 2002, McCawley 1976, Jakobson 1971, Haiman 1985). However, these old problems are encountered in new and productive ways in debates about emergent signed languages.
In the case of homesign, deaf children develop language-like gestural systems, despite the fact that they are not exposed to a perceptible language (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). Goldin-Meadow and colleagues emphasize the important role the child must play in these processes, since there is no viable model for them to learn from. Therefore, analyzing these emergent gestural systems offers a window onto the innate, creative capacities of the child’s mind. However, in order to be sure that the phenomenon under investigation can be referred to innate capacities and is not an effect of some external process, distinct modes of semiosis must be distinguished from one another.
In the early work on homesign, the framework that was used to accomplish this combined Fillmore’s case theory, as it appeared in The Case for Case (1968), with a framework like the one put forth by Charles Morris in the Foundations of the Theory of Signs (1971 [1938]). Only the former was identified explicitly; however, the two basic categories of signs out of which phrases are built (deictic and characterizing signs) align with the terms found in Morris (1971 [1938]), and their use is consistent with his framework. By revisiting these frameworks, we can understand how the problems outlined above were addressed. In doing so, a broader range of semiotic phenomena are made explicit in ways that clarify the boundaries between innate capacities, the languages that are acquired when those capacities are applied, and the contexts in which languages are used.
1.1.1 The Case for a Theory of Signs
Morris defines semiosis as “the process in which something functions as a sign” (1971 [1938]:3). This process requires three things: (1) The Sign Vehicle/sign: “that which acts as a sign”; (2) The Designatum/denotatum: “That which the sign refers to”; and (3) The Interpretant/interpreter: “The effect of the sign on an interpreter, by virtue of which, the sign counts as a sign to that interpreter” (ibid.). In order to account for the relationship of the sign to context, Morris posits a three-way distinction between indexical, characterizing, and universal signs. Indexical signs denote an object and are exemplified by pointing. Characterizing signs denote objects, but also analyze them in some way, highlighting certain aspects (1971 [1938]:17).
In order for an object to be responded to, it must be located in terms of its relevant characteristics. This requires the combination of a characterizing sign and an indexical sign. The characterizing sign provides the determinateness of expectation (if I say “dog,” you expect a dog), and the indexical sign provides the directivity of reference. Lastly, there must be signs that indicate the relation of these signs to one another and their relation to the class they are members of. These are “universal signs” (1971 [1938]:17). These sign types map onto the distinction between pragmatics, semantics, and syntactics in Morris. Pragmatics is constituted in the relation between the interpretant and the sign vehicle. Semantics inheres in the relation between the sign vehicle and the designatum. Syntactics is constituted in the relations between sign vehicles and the categories to which they belong. No one dimension can be dissociated from the others; a language is irreducibly triadic.
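This division of labor can be rendered as a rough illustration (my own sketch, not Morris’s formalism; the scene and its labels are invented): the characterizing sign narrows the kind of thing expected, the indexical sign narrows the place, and reference succeeds at their intersection.

    # Toy model of Morris's sign types; all names are hypothetical.
    def characterize(objects, predicate):
        """Characterizing sign: narrows expectation to objects of the
        relevant kind (if I say 'dog,' you expect a dog)."""
        return [o for o in objects if predicate(o)]

    def index(objects, location):
        """Indexical sign: directs reference to whatever occupies the
        indicated position, regardless of kind."""
        return [o for o in objects if o["at"] == location]

    scene = [
        {"kind": "dog", "at": "porch"},
        {"kind": "dog", "at": "yard"},
        {"kind": "cat", "at": "yard"},
    ]

    # Individuating a referent requires both signs: 'that dog (there)'.
    dogs = characterize(scene, lambda o: o["kind"] == "dog")
    print(index(dogs, "yard"))  # [{'kind': 'dog', 'at': 'yard'}]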
While Morris is clearly relevant to analyses of homesign, his framework is not foregrounded. Instead, Goldin-Meadow and colleagues point to Fillmore’s case theory (1968) in accounting for the innate structures of the child’s mind. Their challenge is to factor out external input, to be sure that contributions to the emergence of language-like homesign systems are the achievements of the child alone. However, the only factor outside of the child’s innate capacities that is explicitly ruled out is linguistic input. Other contextual factors play a pivotal role, which is reflected in the terms of analysis as well as the examples (11). This can be seen most clearly by viewing one of their examples first through Fillmore’s framework and then juxtaposing this with an analysis from Morris’s perspective. What I aim to show is that both frameworks are necessary in accounting for the regularities observed in homesign, and that this has consequences for our understanding of language emergence.
In The Case for Case, Fillmore argues that the syntax of a language cannot be stripped of all associated semantic elements, and further, that semantic relations actually constitute an underlying structure, or “frame,” that explains many syntactic constraints. The following example and others like it form the core of Fillmore’s argument. He begins with a covert distinction between affectum and effectum, which is observable in the following two sentences (1968:4): (1) John ruined the table; and (2) John built the table. In sentence (1), the object exists prior to John’s activities, and in sentence (2), it exists as a result of John’s activities. It would appear, Fillmore says, that the distinction is purely semantic and that the syntactic system of English does not require its speakers to confront it. In other words, the ability to interpret the verb-object relation in two distinct ways in these two sentences has nothing to do with a knowledge of English syntax. Nevertheless, the distinction has syntactic relevance: “The effectum object does not allow interrogation of the verb with ‘do to,’ while the affectum object does.” Therefore, if you ask, “What did John do to the table?” you can answer, “What John did to the table was ruin it.” But you cannot answer, “What John did to the table was build it” (1968:4). The reason is that, prior to being built, the table doesn’t exist.
This is a semantic fact that has implications for syntax. Fillmore calls relations like these case relations, or simply case (1968:21). Case relations are covert, and in their totality, form “a universal system of deep-structure cases” (1968:21). Case forms, on the other hand, are the expression of case relations “through affixation, suppletion, use of clitic particles, or constraints on word order” in a particular language (ibid.:21). At one level, cases are linguistic in nature, but Fillmore backs up further and sees them as consistent with a broader range of cognitive capacities, which are “identified” by the cases, just as the cases are identified by verbs and nouns. In Fillmore’s words:
The case notions comprise a set of universal, presumably innate, concepts which identify certain types of judgments human beings are capable of making about the events that are going on around them, judgments about such matters as who did it, who it happened to, and what got changed (1968:24).
These broader cognitive capacities allow for the mental representation of events, actions, and the things that participate in them. In order to identify the structures that allow humans to discern who did it, who it happened to, and what got changed, syntax must be extractable, and therefore, autonomous, and yet, as Fillmore shows, its autonomy is a persistent problem.
In Fillmore’s scheme, the correlate of signs that refer to, or characterize, actions is the verb, and the correlate of signs that refer to, or characterize, objects or entities is the noun phrase (Fillmore 1968:24-5). The homesigners that Goldin-Meadow and colleagues are working with do not produce verbs and noun phrases, but combinations of pointing gestures and characterizing gestures. This poses no problem because in Fillmore’s framework, the surface structure of the utterance is not important. The focus is instead on the relations that obtain between representations of referents (noun-like forms) and representations of actions and states (verb-like forms). Goldin-Meadow and Mylander “stress that [they] use linguistic terms such as sentence loosely and only to suggest that the deaf child’s gesture strings share certain elemental properties with early sentences in child language” (1983:372). They never claim that these systems are linguistic systems, and they are careful to distinguish language-like phenomena from language. However, verb-like gestures are, through the use of Fillmore’s terms, implicitly compared to verbs, and noun-like gestures to nouns (or noun phrases). Goldin-Meadow and Feldman decompose communicative events into elements and relations like this, arguing that when deprived of exposure to a conventional language, the minds of children act on the gestural resources available to them as the mind of any child capable of acquiring language would, yielding a language like any other.
In one example, a child points at a shoe and then points at a table. In Fillmore’s scheme, we would start with the requested action: Please put the shoe on the table. The first pointing gesture stands in for a noun phrase that refers to the shoe. In relation to the action (verb-like element), this pointing gesture can be interpreted as the expression of the covert semantic element: patient. The second pointing gesture stands in for a noun phrase that refers to the table and can be interpreted as the expression of the covert semantic element: recipient.
In Morris’s scheme, the first pointing gesture (or sign vehicle) refers to an object (or designatum), as does the second pointing gesture. For Morris, semantics consists in the relation between the sign vehicle and the designatum, so a semantic relation is expressed by these elements in Morris, just as it is in Fillmore. But we have only accounted for the noun-like elements of the example. There is no overt manifestation of the verb-like element. This element is a product of the interpretation--that the two pointing gestures are a request to put the shoe on the table. If the mother responded to the pointing gesture (sign vehicle) by picking up the shoe and putting it on the table, this response would constitute the interpretant, or “the effect of the sign on the interpreter.” Since the utterance itself does not demand this interpretation, the analyst must have inferred it from a contextual scenario like the one I have just proposed.
For Morris, the response of the care-giver does not belong to semantics; it belongs to pragmatics, which inheres in the relations between interpretants and sign vehicles. Fillmore’s model does not account for the communicative effects of sign vehicles, nor does it account for objects apart from their mental representations. Therefore, both frameworks are necessary in assigning semantic roles to the gestures that make up the gesture phrase. Without pragmatics, there is no action; without an action, there can be no case relations; and without case relations, there can be no evidence of the innate capacities of the mind. Therefore, while “syntactics,” in Morris’s terms, has become central to arguments about the emergence of new languages, autonomy (not surprisingly) remains problematic.
If Fillmore and Morris were explicitly combined, we could understand the increasingly consistent ordering of semantic elements in homesign systems as a kind of integration between deictic, characterizing, and universal signs. With repeated use in familiar contexts, deictic and characterizing signs become increasingly caught up in and coordinated by relations of signs to one another and to the underlying categories they are members of; and the reverse is also true. The relations of signs to one another and the underlying categories they are members of are increasingly caught up in and coordinated by patterns in the way objects are individuated and characterized. This move brings us into a broader analytic frame in order to distinguish between what is “universal” (in Morris’s terms) and what is not--prior to more detailed analysis of any one dimension of the phenomenon.
In addition, these semiotic processes are embedded in socio-historical frames, which have also been crucial in understanding how nascent signed languages emerge. For example, in 1946, the first special education school was established in Managua (Polich 2005:24). Before that time, deaf children in Nicaragua had very little contact with the outside world and no contact at all with other deaf children. There were no schools for deaf children (or children with other disabilities) and no way for them to acquire basic communication or living skills (ibid.:13-24). By 1974, there were four schools involved in educating deaf children (Polich 2005:24). These changes coincided with an important, and much broader, transition in public perspectives on disability. Deaf people went from being seen as “eternal children” incapable of becoming productive adults to being seen as “potentially remediable subjects” (Polich 2005:24). Opportunities for deaf people in Nicaragua began to grow. Then, in 1979, the Sandinista Revolution took root, and the number of special education schools grew as well. From there, advocacy groups, clubs, and grass-roots organizations emerged (ibid.:53-91).
Within these groups, certain individuals emerged as leaders of the deaf community. Even prior to the emergence of a full-fledged language, meta-linguistic discourses began to circulate, and the internal stratification of the community imbued some deaf people with the authority to decide what counted as the “correct” form of a sign (Polich 2005:53-91). The possibility of signing in “correct” and “incorrect” ways and the emergence of experts within the group meant that the language, even as it was forming, was viewed by deaf people as a legitimate means of position-taking in an internally asymmetric social field.
In chapter 2, I argue that this process is a prerequisite for language emergence. If the semiotic system in question is not a legitimate means of position-taking, it will not become a full-fledged language (12). Therefore, in addition to the requirement that a semiotic system must be transmitted from cohort to cohort (Senghas 2000 [1999]) or generation to generation in a community of users (Sandler et al. 2005), and that it must be a reciprocal means of communication (given a broader understanding of reciprocity), I am also claiming that a language must be a viable way of occupying social positions, and that those positions must be embedded in patterns of inequality within the community of language-users. In order to build a framework that examines language emergence in broader semiotic and socio-historical frames, I appeal to practice theory, as it has been developed for the analysis of language (Bourdieu 1990 [1980], Giddens 1979, Hanks 2005a, 2005b, 2009, Edwards 2012). In the following section, three key concepts are discussed in relation to the emergence of TASL and other signed languages: habitus, field, and embedding.
1.2 Language Emergence in a Practice Framework
DeafBlind people in Seattle were once sighted. They oriented to their immediate environment in ways that sighted people do, and they continued to do so, even after they lost their vision. Starting in 2007, under the influence of the pro-tactile movement, DeafBlind people began to cultivate tactile sensibilities. This shift, which eventually led to the emergence of new grammatical systems in TASL, can be understood as a reconfiguration of the “habitus.” This process is social in nature and does not yield to linguistic analytics, but, as I will show, it has consequences for the structure of the emergent linguistic system.
1.2.1 Habitus
Habitus derives from socially and historically specific patterns of perception, thought, and action, weighed against notions of correctness, appropriateness, and politeness. These patterns take shape through processes of socialization in childhood and beyond (Bourdieu 1990 [1980]:53). According to Bourdieu, we are socialized to recognize certain immediate and urgent triggers to say something or not say it, to act or not act, and to identify certain objects in the environment as relevant, or not relevant. The trigger-response loop is automatic, which hides the fact that all of these acquired patterns and schemes, which predispose us to respond to stimuli in particular ways, are themselves predisposed to reproduce the systems and regularities which created them (ibid.:55). Out of this circularity, a “common sense” is instilled in the individual, understood as “embodied history, internalized as second nature and so forgotten as history” (ibid.:56). Children are socialized to accept common sense as such, and this works to naturalize historical effects (13).
Bourdieu’s formulation of habitus can be traced to Panofsky, who viewed “cultural production [as] profoundly shaped by the ways of thinking of its time” (Hanks 2005a:70). Panofsky proposed homologies between philosophical thought and the thought procedures of cultural producers in a given period, which give rise to widespread, underlying logics of cultural production. Bourdieu drew on Panofsky’s thinking, but under the influence of Merleau-Ponty, he went on to propose “that the body, not the mind, was the site” of habitus (ibid.:71). Panofsky’s notion was further modified through its synthesis with the Aristotelian notion of hexis--the meeting of an intention (or desire) to act with judgments of that intention against frames of social value and meaning--as well as with phenomenological notions of habituality and embodiment. The phenomenological dimensions of habitus were taken from Merleau-Ponty, who saw the body as the site of a particular kind of knowledge or “grasp” that social actors have of being a body--a “corporeal schema,” which is transmitted by the habitus (see Hanks 1996:69). In sum, the habitus is shaped by patterns of perception, thought, and action, along with social frames of value that guide the actor in applying those patterns in ways that feel appropriate, correct, and polite. These patterns are internalized at the level of the corporeal schema, where they are difficult to reflect on or reason about.
DeafBlind people grew up sighted, and during that time, they developed a corporeal schema, which was coherent in a field of visual dynamics and relations. Prior to the pro-tactile movement, communicative conventions in the community were established in order to maintain that schema. DeafBlind people used interpreters who could help them orient their body to their addressee in a way that would feel appropriate to sighted people; they stood at distances that would feel polite, and refrained from touching others, for fear of being rude. All of this served to maintain the visual habitus as long as possible. However, attempts at enacting the visual habitus eventually led to characteristically strange behavior, which, in turn, led to less coherent social relationships and ultimately to greater social isolation.
Leaders of the pro-tactile movement traced these problems to a single cause: DeafBlind people did not have enough tactile access to their environment. They argued that representations only make sense if they conjure experience, and because DeafBlind people had been relying so heavily on interpreters, a chasm between representation and experience had opened up. In other words, via a “reflexive monitoring of conduct” (Giddens 1979:25), DeafBlind leaders saw that habitus must articulate with field. Rather than attempting to prop up the visual habitus, they intervened in the social order at the level of motoric habituation and established a tactile habitus. They did this consciously and effectively, in ways that Bourdieu might not have predicted, since his social actor operates in mostly non-reflexive modes. In order to account for these kinds of conscious interventions, Giddens breaks the consciousness of the actor into three planes: practical consciousness, discursive consciousness, and the unconscious (1979:2). He recognizes a kind of tacit, embodied knowledge like the kind transmitted by the habitus, but he argues that all social actors also “have some degree of discursive penetration of the social systems to whose constitution they contribute” (ibid.:5).
In the pro-tactile workshops, discursive and practical consciousness were ramped up, and the “unconscious,” in Giddens’ terms, was altered. As both Bourdieu and Giddens might expect, these changes were confusing in the early stages of the transformation. A bid for a turn was misunderstood as a sexual advance. An attempt at co-presence was misunderstood as a bid for a turn. Fairly quickly, though, possibilities were narrowed as patterns in interaction began to settle and social boundaries around touch were redrawn. Within new limits, a range of possible and expectable behaviors cohered and began to be evaluated against new frames of social value. Embodied communicative behaviors went from choppy and arrhythmic to smooth and automatic within the span of a few weeks. There were new ways of being inappropriate, and politeness quickly became a common-sense matter--a new habitus began to emerge.
This process, which is fundamentally social, unfolds on the level of motoric habituation and therefore also affects the production and reception of signs in ways that are linguistically significant. However, feature hierarchies are not useful for understanding changes in the habitus, and politeness is irrelevant for understanding feature hierarchies. Therefore, the two orders of phenomena must remain analytically distinct, despite the fact that, in practice, they are intimately related. In addition, the habitus must be distinguished from the social fields to which it articulates, despite the fact that, in practice, they are inextricable. The “field” concept is useful in establishing these analytic distinctions.
1.2.2 Field
A field, broadly construed, is a structured space into which elements can be inserted, or on which they can be arranged. For example, an electronic form is composed of spaces paired with specifications for information, such as last name, first name, and date of birth. Each space is set to receive elements arranged in a particular order or formatted in a particular way: names that are too long are truncated, and if a date is entered in an unrecognizable format, the form is returned.
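The analogy can be made concrete with a short sketch (the field names and formats below are hypothetical): each slot is set to receive values of a particular shape, truncating what is too long and returning the form when a value arrives in an unrecognizable format.

    # Toy model of a field as a structured space; field names are invented.
    import re

    FIELDS = {
        "last_name":  {"max_len": 20},
        "first_name": {"max_len": 20},
        "dob":        {"pattern": r"^\d{4}-\d{2}-\d{2}$"},  # e.g. 1980-05-31
    }

    def submit(form):
        accepted = {}
        for name, value in form.items():
            spec = FIELDS[name]
            if "max_len" in spec:
                value = value[: spec["max_len"]]  # too-long names are truncated
            if "pattern" in spec and not re.match(spec["pattern"], value):
                return None                       # unrecognizable format: returned
            accepted[name] = value
        return accepted

    print(submit({"last_name": "Doe", "first_name": "Jane", "dob": "1980-05-31"}))
    # accepted: every value fits the shape its slot is set to receive
    print(submit({"last_name": "Doe", "first_name": "Jane", "dob": "05/31/1980"}))
    # None: the date cannot assume a value in this field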
For Karl Bühler, language and everything around it is replete with fields of all kinds: the symbolic field, the deictic field, the perceptual field, the inner field, the outer field, the field system of the type language, and so on (2001 [1934])(14). Bühler’s fields are exemplified by grids, schemes, chess boards, geographical coordinates, the lines on music paper, vacant slots, and pathways in the countryside, on which signposts are situated. In practice theory, the field concept has been taken up as a way of understanding the dynamics of institutionally embedded social roles (Bourdieu 1990 [1980], Hanks 2005a)(15). In this dissertation, I distinguish between three fields, all of which are necessary for understanding the social and interactional foundations of language emergence in the Seattle DeafBlind community: the social field, the deictic field, and the symbolic field.
The Social Field
The social field is a structured space into which elements are inserted and values are assumed. Its structure is defined by two things: “(a) a configuration of social roles, agent positions, and the structures they fit into; and (b) the historical process in which those positions are actually taken up, occupied by actors (individual or collective)” (Hanks 2005a:72). For example, the DeafBlind community was built around a local institution called the Seattle Lighthouse for the Blind, which is a manufacturing company with a social service mission. The Seattle Lighthouse and other organizations were once “sheltered workshops for the blind,” established to provide work alternatives to blind adults who could not find employment.
In the early 1970s, the scope of these organizations was broadened to include people with disabilities other than blindness (Koestler 1976:229). Shortly thereafter, DeafBlind people from across the country started to relocate to Seattle to take advantage of new employment opportunities. In addition to the provision of jobs, the Lighthouse also addressed the medical, personal, and housing needs of its DeafBlind employees. In order to receive these services, DeafBlind people had to learn to inhabit the social roles given by the history of the field, such as the “expedient blind person,” the “true believer,” and the “professional blind person” (Scott 1969:86-7). The expedient blind person tries to perform the role expected of him when sighted people are present, but takes this activity to be a performance that can be abandoned. The true believer is a blind person who actually experiences the emotions that the experts demand (ibid.:87). They express sincere gratitude to the organization and they genuinely believe that they would not be able to live without it (ibid.). The professional blind person lives in a network of blind organizations and agencies, and has very little contact with anyone outside of it (ibid.). The professional is often employed by a blindness organization that views their employment as an act of goodwill or charity. These roles are endemic to the field of “blindness,” and in order to take up a position in that field (thereby obtaining resources), DeafBlind people have to learn to inhabit them (chapter 3).
However, a social field is not just a place where people obtain or provide resources such as employment, education, or social services. Within any social field, values such as prestige and authority also circulate, and the accrual of these values motivates the strategic action of agents (Bourdieu 1990 [1980]:112-134, Hanks 2005a:73). Each field has a distinct history and a distinct set of circulating values. For example, in the social field of blindness, employment is often offered in exchange for “dignity” (Koestler 1976), and monetary gain is a secondary consideration (16). The historical processes that exclude and include values in a particular field constrain possibilities for action within that field. Each time a DeafBlind person performs work duties or receives social services in meetings, assessments, interviews, and trainings, they encounter these constraints, and over time, are shaped by them. As Hanks points out, this is where “habitus and field articulate: Social positions give rise to embodied dispositions. To sustain engagement in a field is to be shaped, at least potentially, by the positions one occupies” (Hanks 2005a:73).
Language-use, in a practice framework, is a means of position-taking in the social field. Legitimacy accrues to particular styles and genres of language use and not others, so that access to power is restricted by the way you speak (Hanks 2005b). Prior to the pro-tactile movement, visual modes of communication were a legitimate means of taking up valued social roles, and tactile modes of communication were not. Then, in 2007, a DeafBlind person who communicated exclusively via tactile reception was hired as the director of a non-profit organization in Seattle. This catalyzed a reconfiguration of institutionally embedded social roles and the values circulating among them. As part of this, tactile modes of communication were legitimized, and communication practices were radically reorganized. While these changes were motivated by struggle and competition in the social field, they also affected the embodied dynamics of interaction among DeafBlind people. Recall that the habitus operates at the level of motoric habituation and affects the body schema of the social actor. This includes perceptual and cognitive schemes used to orient to the immediate environment. These same orientation schemes play a central role in the organization of the deictic field. Therefore, changes in the habitus can affect the way acts of referring are accomplished.
The Deictic Field
The deictic field is organized by the kinds of access that participants have to objects of reference. From the perspective of the individual, access is structured by schemes and patterns of various kinds: perceptual schemes, routine routes through familiar spaces, intuitions one develops for how a city, a village, a store, or a parking lot might be organized, etc. These schemes extend out around the language-user like an orienting grid. When a deictic sign is used, both the signer and the addressee must retrieve values from the deictic field. This requires a reciprocity of perspectives. In other words, participants must be able to take for granted a certain degree of similarity between their perspective and that of their interlocutor. Schutz explains that, in a reciprocal configuration:
I take it for granted--and assume my fellow man does the same--that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa) (Schutz 1970:183).
When a minimum threshold of reciprocity cannot be reached, participants do interactional work to converge on the object. In order to account for the structures that are present prior to activity, and those that are worked out in the course of an interaction, Hanks synthesizes Goffman’s “situation” and Bühler’s deictic field (Hanks 2005a:192). This yields a construct that can account for: (1) “the positions of communicative agents relative to the participant frameworks they occupy”; (2) “The position occupied by the object of reference”; and (3) “The multiple dimensions whereby agents have access to objects” (ibid.:193). These dimensions often include perceptual access, but they can also include shared knowledge, memory, imagination, or any other relation that allows signer and addressee to single out the referent against a horizon of potentiality. Therefore, while each individual comes to an interaction with orienting schemes of their own, the activity of referring requires those schemes to be coordinated in repeatable and expectable ways (17). Coordination of this kind is accomplished within participant frameworks, some of which are more conventional than others. Prior to the pro-tactile movement, participant frameworks organized around visual access were maintained among DeafBlind people, despite the fact that those frameworks actually prevented them from establishing access to the object (see chapters 5 and 6). This was because tactile modes of communication were not a legitimate means of taking up valued positions in the social field. Once a person started compensating too obviously for vision loss, their social status was compromised. The reconfiguration of the social field opened up the possibility of establishing new participant frameworks, this time organized around tactile modes of access. This had consequences for the organization of the deictic field, and changes in the deictic field had consequences for the language.
When a deictic sign is applied in the speech situation, it retrieves values from two distinct fields: the deictic field and the symbolic field. All deictic signs are composite in this respect, composed of both “symbols” and “signals” (Bühler 2001 [1934]:99). Their symbolic meaning derives from oppositions in the language (Here is not there; I am not you), which accounts for definiteness of reference. Their indexical meaning derives from the deictic field, which accounts for directivity of reference. Speaking deictically requires the coordination of values from each field in the unfolding of the utterance.
When language-users enact particular retrieval patterns repeatedly, those patterns can become more restricted. This is what I am calling “deictic integration.” For example, the emergence of a full-fledged signed language in Nicaragua has been associated with the emergence of “spatial modulations” which establish relations between the verb and its arguments, or else between the verb and its “referents” (Senghas 2000 [1999]:679). This ambiguity between arguments and referents is at the center of this case of language emergence.
A canonical example cited in the literature on Nicaraguan Sign Language (NSL) involves the signs SEE and PAY. As the language develops, signers consistently move both signs toward a single locus in the space in front of the signer in order to indicate that the same person was both seen and paid. In the first cohort of NSL signers, there was no consistent relationship between the direction in which the signs were produced and who was seen and paid. In the second cohort, movement was consistently represented from the character’s perspective, as opposed to the signer’s perspective, so that the directionality of the verb could be relied on to express whether or not the same person was both seen and paid. This is analyzed as a case of “co-reference” and also as a case of “agreement.” The two signs co-refer to the locus by moving toward it in space, and in doing so, manifest agreement between both verbs and their shared “nominal argument.” This is presented as evidence that Nicaraguan Sign Language has achieved full-fledged linguistic status, with the following conclusive remarks: “Signs produced in a common location now unambiguously indicated a common referent [ . . . ] At this point, the construction could be used to link a verb to its arguments [ . . . ]” (R.J. Senghas et al. 2005:301).
These findings raise the question of whether or not a verb can have referents, and whether or not this relation is linguistic. If the locus where the signs converge is in fact an argument of the verbs, how can it be specified phonologically and listed in the grammar? If the locus is an expression of a non-linguistic conceptualization of space, then what accounts for the relation between the two? Bootstrapping? Inference? Blending? Abstraction? Conventionalization? From a practice perspective, an ambiguity between referents and arguments poses no problem for the framework. Rather, it is a clear indication that a process of deictic integration is under way.
Deictic integration coordinates the linguistic system with the deictic field, leading to increasingly restricted retrieval patterns. In other words, these verbs were set to retrieve a wider range of deictic values in the first cohort of NSL signers than in the second. In the first cohort, the directional movement of the verb was free to respond (or not) to a wide range of deictic phenomena. In the second cohort, the verb developed receptors, set to receive more specific information from the deictic field, and expressed this information consistently from the character’s perspective. “Perspective” is not a linguistic relation, but rather, a relation that accrues to the indexical ground of reference. Nevertheless, the way in which perspective is used to establish syntax-like relations is not a deictic phenomenon. This is where the deictic and symbolic fields converge and are coordinated into tighter and more restricted configurations by NSL signers as they “surpass their input” (Senghas and Coppola 2001:327). On this view, NSL does not emerge as a full-fledged language as it is cut away from context, but rather, as it is integrated with the deictic and social fields it articulates to.
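What “increasingly restricted retrieval patterns” means here can be caricatured in a few lines (a toy model of my own, with invented loci and labels, not an analysis of the NSL data): the first-cohort verb retrieves freely from the deictic field, while the second-cohort verb behaves like a receptor that accepts only values expressed from the character’s perspective.

    # Toy model of deictic integration; loci and labels are invented.
    field_values = {
        "locus_1": {"perspective": "character", "referent": "the man"},
        "locus_2": {"perspective": "signer",    "referent": "the man"},
    }

    def first_cohort_verb(deictic_field):
        """Unrestricted: the verb's direction may respond (or not) to any
        deictic value, so a shared locus does not signal a shared referent."""
        return deictic_field

    def second_cohort_verb(deictic_field):
        """Restricted: a receptor set to retrieve only values expressed
        from the character's perspective; two verbs directed at the same
        locus now co-refer."""
        return {k: v for k, v in deictic_field.items()
                if v["perspective"] == "character"}

    print(sorted(first_cohort_verb(field_values)))   # ['locus_1', 'locus_2']
    print(sorted(second_cohort_verb(field_values)))  # ['locus_1']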
The Symbolic Field
The concept of the symbolic field, as it is employed by Bühler, is a very general one, which encompasses too much to be applied to the analysis of specific linguistic structures. I adopt it here as a way of filtering phenomena at the outset into two distinct categories: those that are amenable to linguistic analysis and those that are not. Phenomena that unfold in the deictic and social fields are not governed by linguistic principles of organization, while those that unfold in the symbolic field often are. For Bühler, the symbolic field is usually exemplified by syntax, but it also stands in for “grammar” more generally, understood as a system for establishing relations between representations of objects (2001 [1934]:28). He writes:
Language does not paint to the extent that would be possible with the resources of the human voice, but rather, symbolizes; naming words are symbols of objects. But just as the painter’s colours require a painting surface, so too do language symbols require a surrounding field in which they can be arranged. We call this the symbolic field of language (2001 [1934]:171).
All naming words, or in Morris’s terms “characterizing signs,” receive their field values from the symbolic field (Bühler 2001 [1934]:94, Morris 1971 [1938]:17-21). Characterizing signs denote and also analyze the objects they represent, highlighting certain aspects and not others (Morris 1971 [1938]:17-21). As characterizing signs are used, denotational and analytic patterns sediment, and conventional form-meaning correspondences are established in the type language. This gives rise to a language-internal “semantic field,” which, broadly speaking, is made up of “any structured set of terms that jointly subdivide a coherent space of meaning” (Hanks 2005a:192). The analyst knows that the semantic field is relevant when the use of different forms systematically invokes different aspects of setting (ibid.:200). When these units are inserted into the symbolic field, they assume particular values.
The phonological system does not receive its values from the symbolic field; however, it is necessary for distinguishing symbols from one another. Therefore, the symbolic field and the phonological system are interlocking mechanisms through which all representations must be filtered. Unlike Saussure, who drew a hard line between form and substance in delimiting langue (18), Bühler argues that “there is neither material without form nor form without material” (2001 [1934]:291). A phoneme is an auditory mark on a word, which can be counted. However, it is not extractable. It is embedded in the sound-shape of the word, “which changes like a human face with the fluctuation of expression . . . ” (ibid.:292). Therefore, from the perspective of the addressee, the phonological system is a system of “detectors” set to identify some marks in the sound stream and not others (ibid.:311). In this view, the figure-ground relation is central, and that relation is conditioned in part by modes of access to the sign-vehicle. Given this perspective, shifts in the deictic field, which affect modes of access, should echo into the phonological system of the language. This is precisely what has transpired in the case of TASL (chapters 8 and 9). This shift, where the linguistic system is transformed as it is aligned with its contexts of use, is accounted for in the present framework with the concept of “embedding.”
1.2.3 Embedding
Embedding is a process whereby semiotic elements are converted as they assume values in the symbolic, deictic, and social fields (19). Where conventional practices emerge, relations between fields are tightened into increasingly restricted configurations, so that language is not “taken by surprise” when it encounters the world (Bühler 2001 [1934]:197). Rather, the linguistic system acts like a network of receptors, set to receive certain field-values and not others.
Bühler argues that any element inserted into a field must be “fieldable” (Bühler 2001 [1934]:211). His example is the following: “[t]he note symbol is not [capable of assuming a field value in the map field], it is not ‘fieldable’ there because it does not symbolize a geographical entity that could receive a local value” (ibid.:211). Therefore, if a musical note were inserted into a geographic map, it could only assume a non-musical value. For example, it might stand for a place where musical concerts are held, thereby undergoing a semiotic transformation.
The same is true of elements transferred from one language to another. A lexical sign removed from one language and inserted into another will be incapable of assuming a field value without undergoing the structural changes associated with borrowing in that language (Thomason 2011, Battison 1978). In addition, the resulting value will necessarily be distinct from the corresponding value in the donor language (Saussure 1972 [1915]). This process, through which elements assume field values as they are inserted into a particular field, is, in its broadest sense, “embedding” (see Hanks 2005a:194 for further discussion).
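Bühler’s point about fieldability can be put in the same toy terms (an invented example with hypothetical symbols and values): inserting an element into a field either converts it to a value that field affords or rejects it, just as the note symbol on a map can only mean something geographic.

    # Toy model of "fieldability"; symbols and values are invented.
    MAP_FIELD = {
        "city_symbol": "a city at this location",
        "note_symbol": "a place where concerts are held",  # non-musical value
    }

    def insert_into_map(symbol):
        """Embedding: the element assumes a value the field affords,
        or it is not fieldable there at all."""
        if symbol not in MAP_FIELD:
            raise ValueError(f"{symbol!r} is not fieldable on a map")
        return MAP_FIELD[symbol]

    print(insert_into_map("note_symbol"))
    # 'a place where concerts are held' -- a semiotic transformation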
In recent work on language and practice theory, four principles of embedding have been proposed: practical equivalences, counterparts, rules of thumb (Hanks 2005b), and integration (Edwards 2012). Practical equivalences are correspondences between “modes of access that interactants have to objects” (Hanks 2005b:202). For example, in Yucatec Maya, there are two enclitics, a’ and o’, which, when combined with one of four bases, produce a proximal/distal distinction (ibid.:198-9). However, in practice, the o’ form can be used to refer to denotata that are “off-scene” (ibid.:201). In order to use the “distal” deictic this way, a “practical equivalence” must be established between “off-scene” and “distal.”
Counterparts establish relations of identity between objects (Hanks 2005b:202). For example, the proximal deictic can be used by a shaman to refer to a child who is off-scene if there is a visual trace of that child in his divining crystal. This is possible because the visual trace of the child is construed as the counterpart of the actual child (ibid.:201). The shaman is authorized to establish this relation by virtue of his social position, just as the radiologist’s position authorizes him to interpret x-rays (ibid.). Therefore, counterparts establish relations between: (1) form-meaning correspondences (e.g. a’/o’ = proximal/distal); (2) the deictic field, where access to the referent is established; and (3) the social field, where authorized speakers establish relations between form-meaning correspondences and the deictic field by using legitimate styles and genres of language use.
Rules of thumb guide speakers in responding to commonly occurring, or “stereotypical,” situations (Hanks 2005b:206). For example, in Yucatec Maya, a stereotypical greeting includes a question-response sequence like the following (ibid.:206):
Speaker A: “Where ya goin?”
Speaker B: “Just over here.”
This exchange “tells A nothing about where B is going or how far away it is, only that he is heading there” (ibid.). Therefore, the proximal form, translated as “here,” is not associated with proximity, but rather with a routine situation. Each of these principles of embedding involves the instantiation and subsequent re-shaping of a form-meaning correspondence.
Unlike related concepts such as “contextualization” (Gumperz 1992)(20) and “keying” (Goffman 1974:40-82)(21), embedding draws attention not only to changes in meaning that emerge in interaction, but also to processes affecting the language which operate on historical and institutional scales. Practice theorists distinguish between interactional and social scales in order to establish principled relations between them. Giddens links historical and interactional scales via the “layering” of social structures (1979:65). This is similar to the notion of social embedding developed here. However, Giddens is concerned with social and interactional structures, while embedding draws attention to relations between social, interactional, and, crucially, linguistic structures.
Practical equivalences, counterparts, and rules of thumb all involve a shift or substitution in meaning with respect to a stable linguistic form. For example, when a “distal” deictic is used to refer to an off-scene denotatum in Yucatec Maya, the meaning is converted, but the form remains constant. In contrast, “integration” accounts for cases where both form and meaning are converted (Edwards 2012:61-3). In cognitive science, integration implies a partial projection of elements from two domains into a third, which manifests a structure that is not present in either of its inputs (Fauconnier and Turner 1998:133). The term is used here to describe the emergence of new linguistic forms, not present in the input. Here, however, the focus is on the relations between social, deictic, and symbolic fields, which are not reducible to cognition.
I use the term “contextual integration” to account for effects of embedding in the deictic and social fields, which have consequences for both form and meaning. In both cases, effects can be momentary, or they can be more lasting. For example, if two sighted users of VASL are communicating across a football field, they will extend the space within which signs are conventionally produced to increase visual salience. As a result, “location” and “movement” parameters of the sign will change. This is an effect of embedding in a deictic field where participants momentarily have reduced visual access to signs. Insofar as communicating across football fields constitutes a marked interactional context, this change in production is not relevant to our understanding of the structure of VASL. If, on the other hand, limited visual access is a permanent circumstance among a group of language users, and if this circumstance leads to historical shifts in sensory orientation and social organization, then integration will have more lasting effects.
Particular modes of access are also made feasible (or not) by broader processes of authorization and legitimation; therefore, embedding in the social field can have consequences for the organization of the language. For example, if the use of a tactile channel thrusts the language user into a subordinated social position, a tactile language is less likely to emerge. Therefore, while authorization and legitimation constrain position-taking, these processes can also restrict the feasibility of logically possible linguistic forms on social grounds. As new forms of authority accrued to DeafBlind social roles and the tactile modality was legitimized, a wider range of tactile linguistic forms became feasible for the language.
1.3 Modality: what does it mean to call a language tactile?
A practice approach to language places the question of “modality” as it has been understood in the sign language linguistics literature within the broader frame of contextual integration. To say that a language is tactile is to say that it is seamlessly integrated with the social, deictic, and symbolic fields engaged by tactile people. Therefore, the emergence of TASL is coeval with the emergence of a tactile habitus and the social field with which it articulates, as well as a deictic field organized around tactile modes of access and orientation. Each field is a structured, semi-autonomous space into which elements can be inserted, or on which they can be arranged.
Crucially, Bühler’s fields are not related to the elements inserted into them as form is related to matter. It is not that you insert material elements into formal structures. Instead there is a Gestalt--a relation of figure to ground. Objects are represented indirectly via the juxtaposition of many interlocking “implements,” which act as filters and intermediaries, each one introducing some arbitrariness of its own. As you move outward from the core mediating implements of a language, you arrive at the world, where you find what Bühler calls “differences in world view” (2001 [1934]:171), or what Schutz calls “differences in perspective.” Ultimately, the diversity observed in linguistic systems is attributed to these differences. At the outer perimeter of the language is the deictic system, reaching on one side toward the grammar, and on the other, toward the deictic field. Through patterns of retrieval and integration, the language is aligned with the world as it is perceived by the users of that language, and those processes echo in arbitrary ways as they move from the perimeter to the core of the grammar.
The semiotic transformations currently under way in the Seattle DeafBlind community suggest a theory of modality like this one, which begins in large-scale socio-historical processes but penetrates through to core grammatical systems. From this perspective, the degree to which grammar can be abstracted away from its contexts of use appears overstated; at the same time, distinctions between interlocking systems remain important: phonology is not approached as if it were syntax, and syntax is not approached as if it were deixis. A language and the world in which it is used form a gestalt, which foregrounds and backgrounds elements in interlocking systems and fields, like moods passing over a face. A tactile language, then, is a system of mediating implements, sensitive to, and shaped by, the social and physical world inhabited by tactile people.
1.4 Methods
In this dissertation, I draw on data collected in three field trips: two months in the summer of 2006, four months in the spring of 2008, and 12 months of dissertation research starting in the summer of 2010. In 2006, I conducted a set of 17 semi-formal interviews with 12 people that were videorecorded, analyzed, and transcribed. The average length of the interviews was 1.3 hours. Most of these interviews focused on the life histories of the people being interviewed, including their relationships to sighted interpreters and the kinds of strategies that were effective or not as vision was lost. These data were originally collected as part of the larger “National Support Service Provider Pilot Project,” funded by the Department of Education, which resulted in a curriculum for training sighted interpreters and DeafBlind people who work with them (Nuccio and Smith 2010).
In 2008, I videorecorded 8 dyads composed of 1 DeafBlind person and 1 sighted person (either deaf or hearing) for 1.5-3.0 hours engaging in a variety of activities such as dog walking, grocery shopping, or attending an event. For those interactions where the subjects were walking, I walked in front of them and recorded them with a camera mounted on a harness and pointed backward over my shoulder. Fieldnotes were collected after recording sessions and these notes form the basis for some of my ethnographic descriptions of practices prior to the pro-tactile movement. I also took fieldnotes after socializing and interacting
with my friends and co-workers in a wide variety of contexts, conducted interviews with eight DeafBlind people in order to understand their perspectives on how interpreters who interpret visual information are useful to them, and conducted several interviews with people who I had not had the opportunity to interview in 2006 about their life histories. In addition to interviews and videorecording, I made myself available in 2006 and 2008 as a sighted interpreter for activities such as people watching and socializing, with the understanding that I would write about those interactions in my fieldnotes. All of these data provide a useful point of comparison with newer communicative practices that are the main focus of this dissertation.
While conducting my dissertation fieldwork, I collected approximately 160 hours of videorecordings of interaction and language use among DeafBlind people, which, for the most part, excludes sighted participants. 120 hours of these were recorded during the pro-tactile workshops. This corpus has been indexed, selectively transcribed, and thematically organized. This is possible in part because many of the recordings are distinct angles on the same interaction. Therefore, in a one-minute interaction, I might have to analyze three minutes of video footage to capture relevant elements from visible angles. The videorecordings from the workshops form most of the empirical basis for the interactional and linguistic analysis in this dissertation. In addition, I draw on detailed field notes, recorded in the following contexts: (1) approximately 14 hours of orientation and mobility trainings with two different DeafBlind people; (2) bi-weekly classes called “DeafBlind class” where news is exchanged and information is shared via interpreters; and (3) informal interaction at a range of DeafBlind events, community meetings about urgent matters, and after socializing with my friends and acquaintances.
I have been involved in the Seattle DeafBlind community for over 14 years in a range of capacities. These experiences have made it possible for me to conduct this research and have shaped its course in many ways. I started socializing and volunteering as an interpreter in the community in 1997, as an undergraduate student. Over the next 5 years, I became increasingly involved in the community--as an interpreter, an employee, a roommate, a friend, a fellow board member, and in many other contexts, until I left for graduate school. Since then, I have returned regularly to visit and to conduct research. While the pro-tactile workshops are at the center of my dissertation research, all of these experiences, and the people who I have been closest to throughout, have shaped my understanding of the phenomenon.
1.5 Overview of the Dissertation
This dissertation begins, in chapter 2, by placing the practice approach to language emergence in a comparative frame. I analyze three cases of language emergence. In each case I ask: (1) what counts as language-like and (2) how relations between language-like phenomena and context are treated conceptually. The first case I examine is the emergence of language-like gestural systems, or “homesign systems,” among deaf children who are not exposed to a visible language (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). The second is the emergence of a national sign language in Nicaragua (A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). The third is the emergence of a new signed language in a Bedouin community in the Negev desert (Sandler et al. 2005, 2011, Forthcoming). I chose these three cases because they have been foundational in establishing language emergence as a field of inquiry. They have well-established bodies of literature associated with them and they present a coherent theoretical ground from which to proceed.
I argue, in each case, that a process of deictic integration is recoverable, and I propose that this process is central to processes of language emergence more broadly. I also argue that in order for a full-fledged language to emerge (as opposed to a language-like gestural system), the semiotic system must become a legitimate means of position-taking in an internally asymmetric social field. In other words, leaders within the community must accrue the authority necessary to introduce evaluative frames for communication practices and language-use. This is the final phase in the integration of symbolic, deictic, and social fields. I call this overarching process contextual integration.
In order to show how contextual integration affected the grammar of TASL, I begin with the reconfiguration of the social field. In chapter 3, I examine the history of two institutions that were foundational in the development of the Seattle DeafBlind community. I show how these institutions gave rise to a limited set of social roles, which were organized around a core opposition between “sighted” and “blind.” Greater forms of authority accrued to sighted roles, and legitimacy accrued to visual communication modalities. Therefore, in an attempt to occupy more valued social positions, DeafBlind people continued to use visual communication practices long after they were no longer effective. In chapter 4, I show how social roles were reconfigured by DeafBlind leaders as part of the pro-tactile movement, and how this led to the legitimation of tactile modes of knowledge production and interaction. From there, structures of interaction were reconfigured along tactile lines (chapters 5 and 6), which gave rise to new linguistic mechanisms for referring to the immediate environment and tracking referents across a stream of discourse (chapter 7), new rules for the formation of lexical signs (chapter 8), and a new system for generating semiotically complex signs, which incorporate both linguistic and deictic elements (chapter 9). I conclude in the final chapter with a brief reflection on the role of contextual integration in processes of language emergence--not only in the case of TASL, but in other cases as well.
Chapter 2
Establishing a Comparative Frame: contextual integration in three cases of language emergence
In this chapter, I examine three cases where the transmission of language from one generation to the next has been disrupted, novel communication practices have grown up in the absence of a viable alternative, and new language-like systems have emerged. The first case I examine is the emergence of language-like gestural systems among deaf children who are not exposed to a perceptible language (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). The second is the emergence of a national sign language in Nicaragua (A. Senghas 2000 [1999], A. Senghas and Coppola 2001, Kegl et al. 2001). The third case is the emergence of a new signed language in a Bedouin community in the Negev desert (Sandler et al. 2005, 2011, Forthcoming). There are other cases of emergent signed languages or language-like systems (e.g. Nonaka 2007, Nyst 2007, Groce 1985, Kuschel 1973, Washabaugh 1986). However, these three cases have been foundational in establishing language emergence as a field of inquiry. They have well-established bodies of literature associated with them, and they present a coherent theoretical ground from which to proceed. All of this makes these three cases a productive starting place for complementary
approaches (1).
In each case, disruption is the result of sensory difference--a single deaf individual raised and educated in a hearing context (Goldin-Meadow and Feldman 1977; Goldin-Meadow and Mylander 1983; Goldin-Meadow and Morford 1985), a group of deaf people in a common educational setting, set apart from the broader, hearing society (Kegl et al. 2001, Senghas and Coppola 2001), or a small, tight-knit community with a high incidence of deafness, where sign language is in widespread use among both deaf and hearing people (Sandler et al. 2005, Kisch 2012).
The systems that have emerged out of these contexts are signed languages and gestural communication systems with language-like properties. This literature has been overwhelmingly focused on the innate capacities of the human mind(2) (e.g. Goldin-Meadow and Feldman 1977; Kegl et al. 2001; Senghas and Coppola 2001; Sandler et al. 2005; Newport [2001] 1999). In order to determine what role innate capacities play in the creation of new languages, context must be factored out to the greatest degree possible. This requires either implicit or explicit treatment of the relationships between capacity, language, and context. In what follows I track (1) what counts as language-like in the phenomena under study, and (2) how relations between this language-like object and phenomena outside it are treated conceptually.
I argue that in all three cases, relations between deictic and linguistic phenomena can be recovered, and that in each case the emergence of a language-like system corresponds with a tightening of those relations. This process, which I call deictic integration, yields signs that, in addition to being incorporated into a more tightly organized language-internal system, are also capable of characterizing and localizing referents. In addition, where full-fledged languages emerge, a social field, composed of oppositional and asymmetrical social positions, also emerges. In what follows, I bring together the ethnographic and linguistic research in order to understand the relationship between these two types of phenomena. I argue that in order for a viable language to be realized, it must become a legitimate means of position-taking in a particular social field.
2.1 Homesign
When deaf children are not exposed to any visible language, they and their family members often develop a limited repertoire of gestural signals to communicate. These repertoires are known as “home sign” systems. The work on homesign that started to appear in the 1970s addressed a question that has drawn interest since at least the seventh century B.C.: can a person who is not exposed to a conventional language develop a language-like communication system on their own? Prior to the early work on homesign, this question had been posed in various ways, but it had never been systematically studied by examining empirical evidence (Aronoff et al. 2004, Feldman et al. 1978). One of the first stories aimed at exploring this question was told by the Ancient Greek historian Herodotus. He said that the Egyptian King Psammetichos, or “Psamtik,” wanted to know who the first peoples of the world were, so he gave a pair of newborn twins to a shepherd, sent them to a deserted island, and told the shepherd not to talk to them. Years passed, and then one day one of the twins spontaneously produced the Phrygian word for bread (‘bekos’). Based on this evidence, Psamtik concluded that the Phrygians were the first people(3) (Crystal 1987:288). Psamtik was not alone in his curiosity. The experiment was repeated by the Holy Roman Emperor Frederick II (1194-1250), and James IV of Scotland (1473-1513) was similarly compelled. In the latter case, the “shepherd” was reportedly a “deaf and dumb woman,” guaranteeing, he thought, that the neonates would not be exposed to any language at all (Danesi 1993:5-6).
In the modern context, this sort of scenario appeared relevant in new ways as the field of linguistics turned toward generative grammar and the innate capacities of the human mind. The degree to which the stimulus is impoverished may be difficult to determine in ordinary life, but it is less difficult to determine among neonates on an uninhabited island, or in situations where deaf children are denied access to visible language. However, Psamtik’s modern successors were not going to be satisfied by the production of a single word, as he was. They were looking for a wider range of formal properties and communicative functions associated with language. In addition, the range of social and interactional phenomena that they had to sort through to find these properties was far more complex than anything found on a sparsely inhabited island.
2.1.1 Homesigners in Philadelphia and Chicago
Although the sign language linguistics literature has focused on native users of American Sign Language, such users are a minority of d/Deaf people in the United States. Most deaf children are raised by hearing parents, and a certain subset of these parents opt for an oral education for their children. For children who cannot hear the range of sounds used to produce spoken language, oral education is not effective (Lane et al. 1996, Mayberry 1992). This is apparent in the studies conducted by Goldin-Meadow and colleagues (e.g. 1977, 1983).
The children in the early studies lacked exposure to a language, but they participated in the daily lives of their families and they did not have any cognitive impairments (Goldin-Meadow and Feldman 1977:401). The six children included in the 1977 study ranged in age from 17 to 49 months(4). They were enrolled in oral education programs, their parents used only spoken language with them at home, and they had not acquired any usable spoken language (1977:401). They had not been exposed to a conventional sign language either. However, they did communicate gesturally with their caregivers and with the experimenters. Researchers videotaped 1-2 hour sessions in which one child interacted in their home with their primary caregiver (in all cases, the mother), and one or more members of the research team. Subsequently, gestures were individuated in the “stream of motor behavior” on the basis of physical criteria, and broken down into units comparable to words as well as strings of gestures comparable to phrases (ibid.). There was a high level of agreement between coders on the sign and phrase boundaries that were assigned.
Following Bloom’s method of “rich interpretation” (1970), referents were assigned to isolated signs. When signs were incorporated into phrases, they were assigned semantic elements, cases, and predicates, following Fillmore’s “case descriptions” (1968). Again, coders agreed in most instances on the referents and semantic elements that were assigned. Their findings, based on these categories, are summed up as follows (Goldin-Meadow and Feldman 1977:401):
[E]ach of our deaf subjects developed a structured communication system that incorporates properties found in all child languages. They developed a lexicon of signs to refer to objects, people, and actions, and they combined signs into phrases that express semantic relations in an ordered way.
There were two types of signs identified in the lexicon: “deictic signs” and “characterizing signs.” The deictic signs were mostly pointing gestures, which “allowed the child to make reference to any object or person in the present.” The characterizing signs were gestures that resembled their referent in some way, “[f]or example, a closed fist bobbed in and out near the mouth referred to a banana or to the act of eating a banana” (1977:402).
Goldin-Meadow and Feldman looked for patterns in the way that these two types of signs were combined. They found that the children tended to produce phrases that included a patient, a recipient, and an act. They explain:
Some of the children tended to produce their signs for the patient, recipient, and act semantic elements in consistent positions of their two-sign phrases. Specifically [ . . . ] the children tended to produce phrases with patient-act, patient-recipient, and act-recipient orders [ . . . ]. Not all children showed ordering tendencies for all parts of the three elements; but if the children showed any ordering tendencies at all, those tendencies were ordered in the same direction. We can describe the children’s two-sign phrases with the following element-ordering rule:
Rule A: (choose any two, maintaining order)
Phrase → (patient)(act)(recipient)
Thus, it appears that some of the children expressed semantic relations in a systematic way, that is, by following a syntactic rule based on the semantic role of each of the sign units.
There are four examples given in the 1977 article where this pattern plays out. They are as follows (all taken from p. 402):
- [O]ne child pointed at a shoe and then pointed at a table to request that the shoe (patient) be put on the table (recipient).
- On another occasion, the child pointed at a jar and then produced a twisting motion in the air to comment on mother’s having twisted open (act) the jar (patient).
- Another child opened his hand with his palm facing upward and then followed this ‘give’ sign with a point toward his chest, to request that an object be given (act) to him (recipient).
- David pointed at a picture of a shovel, pointed downstairs where a shovel was stored, produced a digging motion in the air with two fists, and finally pointed downstairs a second time. David had commented in one phrase on two aspects of the shovel, the act usually performed on the shovel and the habitual location of the shovel.
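Stated schematically--this restatement is mine, not the authors’ notation--Rule A and the three two-sign orders it licenses can be written as:

\[
\text{Phrase} \rightarrow (\text{patient})\,(\text{act})\,(\text{recipient}) \qquad \text{(choose any two, maintaining order)}
\]

\[
\begin{aligned}
\text{patient--act:} \quad & \text{point at jar} + \text{twisting motion} \\
\text{patient--recipient:} \quad & \text{point at shoe} + \text{point at table} \\
\text{act--recipient:} \quad & \text{``give'' palm} + \text{point at chest}
\end{aligned}
\]

The first three examples above thus each instantiate one licensed order; the fourth, taken up again below, exceeds the two-sign format.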
They conclude, based on the ordering of semantic elements in examples like these, that “a child can develop a structured communication system in a manual mode without the benefit of an explicit, conventional language model,” and they emphasize that “[t]his achievement is cast into bold relief by comparison with the meager linguistic achievements of chimpanzees” (ibid.:403).
2.1.2 What Counts as Language-like in Homesign
Goldin-Meadow and colleagues are at pains to show that these regularities can be attributed to the innate structures of the mind that allow children to acquire language. In order to do this effectively, there are two requirements. First, it must be demonstrated that the regularities are not invented by the caregivers and then taught to the children. They convincingly demonstrate that the gestures produced by the care-givers are not ordered at all (1977, 1983)(5). This confirms the poverty of the stimulus. The second requirement, for isolating the relationship between semiotic regularities and the innate capacities of the child, is to have some idea of what aspects of semiosis are relevant to those capacities. This requires a model of the innate structures of the mind, and this model is taken from Fillmore (1968).
2.1.3 A Model for the Innate Structures of the Language-Ready Mind
At the time of Goldin-Meadow’s early work, syntax and semantics were being reunited in generative grammar (Harman 1982:xv-xvi). A key figure in the reunification was Charles Fillmore. It is not surprising, then, that the analytic categories used by Goldin-Meadow and her colleagues were shaped by Fillmore’s “case grammar,” as it appeared in The Case for Case (1968). In this work, Fillmore engages two main tenets of generative grammar: (1) the centrality of syntax, and (2) the importance of covert categories(6).
Fillmore argues that the syntax of a language cannot be stripped of all associated semantic elements, and further that semantic relations actually constitute an underlying structure, or “frame,” that explains many syntactic constraints. The relations between the two he calls case relations, or simply case (1968:21). Case relations are covert, and in their totality, form “a universal system of deep-structure cases” (ibid.). Case forms, on the other hand, are the expression of case relations “through affixation, suppletion, use of clitic particles, or constraints on word order” in a particular language (ibid.). At one level of remove, these deep-structure cases are linguistic in nature, but Fillmore backs up further and sees them as consistent with a broader range of cognitive capacities, which are “identified” by the cases, just as the cases are identified by verbs and nouns. In Fillmore’s words, “The case notions comprise a set of universal, presumably innate, concepts which identify certain types of judgments human beings are capable of making about the events that are going on around them, judgements about such matters as who did it, who it happened to, and what got changed” (ibid.:24). These broader cognitive capacities allow for the mental representation of events, actions, and the things that participate in them.
The analytic framework employed by Goldin-Meadow and colleagues implies an innate capacity for the acquisition of language that is structured like this. By extension, they argue that the linguistic achievements of their research subjects can be attributed to the child’s capacity to make judgements about who did it, who it happened to, and what got changed. They emphasize that it is the child and the child alone who is responsible for the creation of their language-like system. It is clear, in their data, that the linguistic stimulus is devoid of any meaningful order, which is not surprising, given that the caregivers are primarily using spoken language. However, the only factor outside of the child’s innate capacities that is explicitly ruled out is linguistic input. Other contextual factors play a pivotal role in the analysis, which is reflected in the terms of analysis as well as the examples.
2.1.4 Deictic, Characterizing, and Universal Signs
The two basic categories of signs out of which phrases are built by the homesigners are defined in terms of their relation to context. These terms align with those found in Morris (1971 [1938]), and his explanation of their significance is useful here. Morris defines semiosis as “the process in which something functions as a sign” (1971 [1938]:3). This process requires three things:
- The sign vehicle/sign: “that which acts as a sign.”
- The designatum/denotatum: “that which the sign refers to.”
- The interpretant/interpreter: the effect of the sign on an interpreter, by virtue of which the sign counts as a sign to that interpreter.
These three types of signs (deictic, characterizing, and universal) map onto the distinction between pragmatics, semantics, and syntactics in Morris. Pragmatics is constituted in the relation between the interpretant and the sign vehicle. Semantics inheres in the relation between the sign vehicle and the designatum. Syntactics is constituted in the relations between sign vehicles and the categories to which they belong. No one dimension can be dissociated from the others. A language is irreducibly triadic.
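This triadic structure can be stated schematically. The following set-theoretic gloss is mine, not Morris’s notation: writing $S$ for sign vehicles, $D$ for designata, $I$ for interpretants, and $K$ for the categories to which sign vehicles belong,

\[
\text{pragmatics} \subseteq I \times S, \qquad \text{semantics} \subseteq S \times D, \qquad \text{syntactics} \subseteq S \times K.
\]

Semiosis itself is a relation among all three terms at once, on $S \times D \times I$, and cannot be reconstructed from these binary projections alone; this is what it means to say that a language is irreducibly triadic.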
2.1.5 Poverty of the Stimulus, Abundance of Stimuli
Although the linguistic stimulus is impoverished for the homesigners, non-linguistic stimuli are abundant(7). Goldin-Meadow et al. do not focus on the role of these contextual elements and dynamics, and yet they play a crucial role in each example. Viewing the examples first through Fillmore’s framework and then juxtaposing this with an analysis from Morris’s perspective fully accounts for the examples and makes the interaction of capacity and context explicit.
Viewed through Fillmore’s scheme, signs that refer to or characterize actions correspond to verbs, and signs that refer to or characterize objects or entities correspond to noun phrases (Fillmore 1968:24-5). Goldin-Meadow and colleagues are not working with verbs and noun phrases, but with combinations of pointing gestures and characterizing gestures. This poses no problem because in Fillmore’s framework, the surface structure of the utterance is not important. The focus is instead on the relations that obtain between representations of referents (noun-like forms) and representations of actions and states (verb-like forms). Goldin-Meadow and Mylander “stress that [they] use linguistic terms such as sentence loosely and only to suggest that the deaf child’s gesture strings share certain elemental properties with early sentences in child language” (1983:372). They never claim that these systems are linguistic systems, and they are careful to distinguish language-like phenomena from language. It is nonetheless clear that, through the use of Fillmore’s terms, verb-like gestures are implicitly compared to verbs and noun-like gestures are compared to nouns (or noun phrases). Goldin-Meadow and Feldman decompose communicative events into elements and relations of this kind, arguing that, when deprived of exposure to a conventional language, the minds of children act on the gestural resources available to them as the mind of any child capable of acquiring language would, yielding a language like any other.
In their first example, a child points at a shoe and then points at a table. In Fillmore’s scheme, we would start with the requested action: Please put the shoe on the table. The first pointing gesture stands in for a noun phrase that refers to the shoe. In relation to the action (verb-like element), this pointing gesture can be interpreted as the expression of the covert semantic element: patient. The second pointing gesture stands in for a noun phrase that refers to the table and can be interpreted as the expression of the covert semantic element: recipient. In Morris’s scheme, the first pointing gesture (or sign vehicle) refers to an object (or designatum), as does the second pointing gesture. Recall that for Morris, semantics consists in the relation between the sign vehicle and the designatum, so a semantic relation is expressed by these elements in Morris, just as it is in Fillmore.
However, at this point, we have only accounted for the noun-like elements of the example. Notice that there is no overt manifestation of the verb-like element. This element is a product of the interpretation--that the two pointing gestures are a request to put the shoe on the table. If the mother responded to the pointing gesture (sign vehicle) by picking up the shoe and putting it on the table, this response would constitute the interpretant, or “the effect of the sign on the interpreter.” Since the utterance itself does not demand this interpretation, it must have been inferred from a contextual scenario like the one I have just proposed, by the analyst, by the caregiver, or both. For Morris, neither the response of the caregiver nor the response of the analyst belongs to semantics. These responses belong to pragmatics, which inheres in the relations between interpretants and sign vehicles. Fillmore’s model does not account for the communicative effects of sign vehicles, nor does it account for objects apart from their mental representations. Therefore, both frameworks are necessary in assigning “semantic roles” to the gestures that make up the gesture phrase. Without pragmatics, there is no representation of an action, and without a representation of action, there can be no case relations. Without case relations, there is no evidence of the innate capacities of the mind.
In the second example, a child points to a jar and then produces a twisting motion in the air “to comment on mother’s having twisted open the jar.” In Fillmore’s framework, we can say that the twisting motion stands in for a verb, which is a representation of the semantic element: act. In relation to this verb-like element, the pointing gesture, which stands in for the noun phrase, which represents the jar, takes on the semantic role: patient. However, the assignment of these semantic elements relies not only on the order of elements in the utterance, but also on the interpretation included in the second part of the example. Without combining a deictic sign, a characterizing sign, and the effect of these signs on the interpreter, it is difficult to know whether the sign is a request to open the jar, a comment on its existence, a comment on its characteristics, or something else entirely. It could be a request to be served a type of food which is normally stored in such a jar. Without the interpretation given in the example, semantic roles would have been more difficult to assign.
In the third example, a child opens his hand with his palm facing upward and then points to his chest. This is interpreted as a “request that an object be given to him.” In Fillmore’s framework, the upward facing palm stands in for a verb, which represents the semantic element: act. The deictic sign stands in for a noun phrase, which takes on the semantic role of patient with respect to the verb-like element. The question here is why a further interpretation is needed in order to make the example effective. Why must we know that this is interpreted as a request that an object be given to him? This interpretation seems to be, once again, an effect of the utterance (sign vehicles) and not attributable to relations between the gesture signs and their referents. Therefore, pragmatic and semantic elements are both necessary in establishing the parallels between homesign and language.
In the fourth example, David points at a picture of a shovel. He then points down. He then produces a digging motion in the air with two fists, and finally points down again. This example departs from the usual format of a formal description plus an interpretation; instead, the interpretation is woven throughout. I have reproduced Goldin-Meadow’s example below, with the formal description in regular text and the interpretation of the forms in italics:
David pointed at a picture of a shovel, pointed downstairs where a shovel was stored, produced a digging motion in the air with two fists, and finally pointed downstairs a second time. David had commented in one phrase on two aspects of the shovel, the act usually performed on the shovel and the habitual location of the shovel.
Semantic roles and relations are not assigned specifically here; rather, this is used as an example of “longer phrases that [express] at least two semantic relations” (1977:402). The two relations that they mention must be (1) the relation between the pointing gesture and the habitual location of the shovel and (2) the relation between the digging gesture and the act usually performed with the shovel. Relation (1) breaks entirely with Fillmore’s framework, since that framework has no interest in accounting for the ability of people to identify the actual locations of objects in the world. Knowledge about where people usually keep their shovel has even less of a place in his framework. Relation (2) requires the same kind of pragmatic inference that the first three examples (above) required in order to generate an “act” which could stand in for a verb, which could represent a semantic element.
Semantic and pragmatic factors contribute to the emergence of language-like gestural systems among the homesigners that Goldin-Meadow and colleagues studied. By making their terms of analysis explicit, I have shown the necessity of pragmatics, or the “effect of the sign vehicle on the interpreter” in the assignment of semantic elements. Since the innate structures of the mind are modeled as relations between these elements, I have also returned to Fillmore’s insistence that despite a consistent and semi-arbitrary ordering of semantic elements, those elements cannot be extracted from semantic and pragmatic aspects of the communicative event. In order to attribute the achievement of consistent ordering of elements to the innate capacities of the human mind alone, these additional contextual factors would have to be factored out as well. In fact, only the possibility of linguistic input was discussed.
2.1.6 Deictic Integration in Homesign
In a broader frame, the innate capacities of the child’s mind appear to interact not only with gestural input, but with a range of semiotic processes. Some of these processes are identified in Lois Bloom’s method of “rich interpretation,” drawn on by Goldin-Meadow and colleagues in establishing their categories. Bloom explains the rationale for this method as follows:
It has often been observed that what young children say is usually related directly to what they do and see. Brown and Bellugi (1964, p. 135) took notice of the fact that children speak ‘very much in the here and now.’ Leopold (1949, Vol. III, p. 31) made extensive use of the ‘aid of the situation’ in inferring the intended meanings of utterances. Although some utterances may be equivocal or otherwise not interpretable, it is generally not difficult to judge the relationship between what a child says and what he is talking about. [ . . . ] Moreover, overt behavior and features of context and situation signal the meanings of what children say in a way that is not true for what adults say. [ . . . ] If an adult or an older child mounts a bicycle, there is no need for him to inform anyone who has seen him do it that he has done it. But a young child who mounts a tricycle will often ‘announce’ the fact: ‘I ride trike!’ What young children say usually relates directly to what they do and see, and what they do and see can also be seen and evaluated by a listener-observer in the environment.
For the purpose of this study, evaluation of the children’s language began with the basic assumption that it was possible to reach the semantics of children’s sentences by considering the nonlinguistic information from context and behavior in relation to linguistic performance. This is not to say that the inherent ‘meaning’ or the child’s actual semantic intent was obtainable for any given utterance. [ . . . ] The only claim that could be made was that evaluation of an utterance in relation to the context in which it occurred provided more information for analyzing intrinsic structure than would a simple distributional analysis of the recorded corpus (Bloom 1970:9-10).
It is clear from this that the method of rich interpretation used by Goldin-Meadow takes the inextricability of semantics and pragmatics for granted. However, the implicit entanglement of these orders of phenomena does not come through clearly in the conclusions that are drawn from the research, such as the following:
In sum, it appears that neither communication pressure nor contingent approval shaped the deaf children’s sign orders or probabilities of sign production.
Our observations indicate that a child in a markedly atypical language learning environment can apparently develop communication with language-like properties without a tutor modeling or shaping the structural aspects of the communication. These results suggest that the child has a strong bias to communicate in language-like ways (1983:373).
It is clear from the data that both deictic and characterizing elements are necessary for the emergence of a language-like system. Furthermore, through routine use, those elements must be coordinated with patterns in everyday life, through which shared modes of access are established. In other words, the linguistic system and the indexical ground of reference must be coordinated into tighter and more restricted configurations such that a highly schematic pointing gesture can accrue a relatively specific meaning for the deaf children and their caretakers.
This process, or what I am calling deictic integration, does not disprove the finding that children have a bias to communicate in language-like ways, especially when compared with the lack of such biases in chimpanzees. However, understanding the nature of the bias as well as the structures that undergird it requires ruling out a wider range of social and semiotic processes, as well as an explicit theory of context. Social, interactional, and linguistic dimensions are all recoverable. However, the focus is on the relationship between the capacities of the mind and the language-like system. All other factors are viewed through constructs established for analyzing this relation. Without distinct analytics for distinct orders of phenomena, things can be located in the innate capacities of the mind that belong in the room, in memory, or in history.
2.2 Nicaraguan Sign Language
Language emergence in Nicaragua has also been framed as a case where the innate capacities of children to acquire language have played a central role. However, the interaction of capacity and context is made more explicit by the researchers and also by partial incorporation of independent socio-historical analyses (R.J. Senghas 2003, Polich 2005). Sociocultural analyses have focused on models of personhood available to deaf Nicaraguans, how these models have changed over time, and how they have been endured, occupied, or engaged by deaf people. They have also highlighted the international networks and circulations of discourse through which Deaf Nicaraguans began to see themselves as a language minority, and the way this shaped the development of their community (ibid.).
In the linguistic research, two aspects of this history have consistently been treated as relevant: (1) the year in which groups of children entered the school, and (2) the age of individual children at the time. These two factors have been isolated because they affect the capacities of the children to acquire language. In what follows, I trace additional links between the socio-cultural work that has been done and the linguistic research(8). I argue that in addition to previously emphasized social factors, one of the prerequisites to language emergence in Nicaragua was the legitimation of the signed language among deaf people as a means of taking up differentially valued social positions. In addition, I argue that conventional ways of accessing and referring to objects, people, and signs in the immediate environment, or “deictic patterns” had to crystalize. These patterns were then incorporated into the language as linguistic and deictic phenomena were drawn into tighter relation with one another.
2.2.1 Establishing a Social Field
Prior to 1946, children who were born deaf or lost their hearing early in childhood had very little contact with the outside world and no contact at all with other deaf children. There were no schools for deaf children (or children with other disabilities) and no way for them to acquire basic communication or living skills (Polich 2005:13-24). While some wealthy families sent their children to boarding schools in other countries, most kept their children at home in various states of isolation from the rest of society. Some families went so far as to physically restrain their deaf children to prevent them from “roaming” (ibid.:15). One girl was restricted to the fenced-in backyard of her relatives’ home after her mother died, where she reportedly slept, filthy, on a pile of cardboard in the corner (ibid.:17). Some were so secluded that members of their extended family did not know they existed until after they had passed away (ibid.:16). The families of the children did not expose them to signed language and the children could not hear spoken Spanish; therefore, they did not acquire any language. Deaf children and their families developed home-sign systems; however, these were often restricted to a small range of communicative situations (ibid.:13-23). A volunteer from a local deaf association described the home sign system used in one family as “a language of orders where they tell him, for example, go get that, go clean that, go take a bath, go to the store and get some coffee. Sure it’s communication, but [the deaf child] doesn’t get much out of it” (ibid.:14).
In 1946, the first special education school was established in Managua (Polich 2005:24). According to Polich, this coincided with an important transition in which deaf people went from being seen as “eternal children” incapable of becoming productive adults to being seen as “potentially remediable subjects” (ibid.). While they were previously given up on, isolated in the family’s back yard, or kept secluded inside the house, now they were treated as disabled children, who, with enough specialized training, might learn to act like hearing people. The first special education school had 20 pupils, half of whom were deaf. They used oral education methods (ibid.:28-9).
By 1974, four schools in Managua were involved in the education of deaf children. Those who lived elsewhere were either not educated, or they had to relocate to the capital. In 1975, oral methods began to be challenged by the new “total communication” fad, which was passed from the United States to Nicaragua via networks of educators and doctors in Costa Rica, including a representative from Gallaudet (Polich 2005:45-6). A series of workshops given in Costa Rica included information about the linguistic structure of signed languages and different signing methods for the education of deaf children. Some teachers from Nicaragua attended these workshops (ibid.:47). Total communication never became the official method used in Nicaraguan schools, but according to Polich, attitudes about signed languages changed significantly between 1976 and 1980, and so did communication practices.
One interviewee left Nicaragua in 1974 and went to Spain, where he learned the signed language in use among Deaf people. When he returned in 1980, “he was pleasantly surprised to find more signs in use in Nicaragua, but communication was still different than what he was used to in Spain because individual signs were chained together and getting one’s meaning across was still more awkward than it was in Spain” (Polich 2005:49). Polich reports him saying that “communication at this time was still a combination of everything: signs, gestures, oral words, written words, acting out--whatever worked. He said that in 1980, he still did not see a sign language, such as he knew existed among the students at the school for the deaf in Spain” (ibid.:50). By 1984, communication was decidedly more fluid and complex meanings could be more easily conveyed (ibid.). However, it was still described by those who had had contact with fully developed signed languages as a mix of different home-sign systems (ibid.:52).
In 1979, the Sandinista Revolution triggered many changes that affected the education of the deaf (Polich 2005:53). Special education was broadened to include a wider range of students in many geographic locations around the country. In addition, the curriculum was standardized. By 1981, there were twenty-four special education schools (ibid.:53). In Managua, the National Center for Special Education (CNEE) was a major center for deaf education as well as the education of students with cognitive and physical disabilities. In the 1980s, a curriculum was adopted at the CNEE that forbade the use of signs and gesture. Students were made to sit on their hands or hold objects while they talked and were encouraged to use their hands only for fingerspelling (ibid.:59-60). However, just as in other oral schools, deaf children did sign with one another outside of the classroom. A teacher who had worked at the school in the early 1980s was interviewed by Polich and reported the following:
We made sure that in the classroom, we taught the classes orally; but the kids outside were using signs among themselves. During recess at the snack bar, everywhere. Some of us used our hands, too, to communicate with the kids, but only in private or where no one could see. In the classroom it was us emphasizing the oral and the fingerspelling, but outside, it was another matter.
However, Polich says, we have no way of knowing whether the signing that was happening was language-like, or whether it was a mix of home-signing, gesturing, and pantomime. She writes:
No one recorded it, and no one capable of categorizing it was there watching. Still, the reports from the few teachers who began to imitate the children and learn their communication systems, and from the children themselves, when they remember back as adults, is that at this point, it was, at most a very rudimentary language system (ibid.:64).
In the mid-1980s, the coordinator of deaf education, who had established and enforced oral methods (with some fingerspelling), left her position, and slowly, teaching methods became more flexible (Polich 2005:72). Meanwhile, the Sandinista government was encouraging the formation of grassroots organizations, and some hearing advocates and educators of deaf children saw this as an opening for deaf people to improve education and employment opportunities (ibid.:80-1). A group was established called the Association to Help Integrate the Deaf, which was abbreviated APRIAS (ibid.:83). APRIAS came to function not only as an advocacy group but also as a social forum outside of the classroom (ibid.). Prior to the 1980s, there was not a lot of socialization or interaction among deaf people outside of the schools. Still, in the early days, most of the people in positions of authority were either hearing or deaf people who could speak (ibid.:84). It was the beginning of a deaf social world, but sociality did not revolve around sign language the way it would later.
As time went on, sign language became more and more important to the members, and many of the older deaf people who had missed their opportunity to learn sign language in school said that they learned sign language primarily at the APRIAS meetings. These meetings also served as an important venue for the standardization of the sign language that was developing. According to one of the people Polich interviewed, the meetings were difficult in the beginning “because there was no common sign language, and it was hard to understand each other.” “But,” he said, “little by little, we learned” (Polich 2005:90).
In the mid 1980s, APRIAS also started having weekend “rescue” workshops, where Nicaraguan
Sign Language signs were sketched by hand and compiled into rudimentary dictionaries that would later be distributed (Polich 2005:89). In retrospect, many of the participants in the workshops and meetings characterized the “language” as combinations of gesture and fingerspelling, which were slowly taking on language-like properties (ibid.). This group of signers, who were being educated in schools with other deaf children and also eventually taking part in the APRIAS meetings and other social events, formed a cohort. Within the cohort, there were certain key figures who took on leadership roles and “taught” the new language to others, even as it was forming (ibid.:91). Polich considers this at some length, since it seems paradoxical to her that a person could be “teaching” a system that is not yet formed. About one of these key figures, she writes:
Javier is, thus, a key figure in the first group to use a standardized sign language as their major mode of communication. How he managed to learn the language first while simultaneously teaching it to the others is difficult to explain. Perhaps taught means that he was more enthusiastic about signing, used it more consistently, was patient about teaching what he knew to those less fluent, and took on the role of ‘language police,’ demanding that others conform to what was considered the ‘correct’ version of signs ... I observed regular instances in which confusion over the ‘correct’ version of a sign was referred to Javier for arbitration. His decisions were accepted with no dispute. Javier, in a sense, is identified as the ‘apostle’ of NSL by older deaf adults. I had many informants tell me that Javier was the first to learn the language (how, they don’t know) and that he transmitted it to the rest of the deaf community, including themselves. (ibid.:91).
This suggests that there was a differentiated social field forming among deaf people at the time, which was an important precursor to language emergence. The possibility of using language in “correct” and “incorrect” ways and the emergence of experts within the group meant that the language, even as it was forming, was viewed by deaf people as a legitimate means of occupying more valued social roles within their own community. This shift was institutionalized when, in the late 1980s, the officers of APRIAS were replaced by deaf people who were more “pro-sign language,” and the name of the organization was changed to the National Nicaraguan Association of the Deaf, abbreviated “ANSNIC” (ibid.:97). Rather than focusing on the “integration” of individual deaf people into hearing society, they saw membership in the deaf community as the most effective way to exercise agency (ibid.). Polich explains:
By becoming members of the deaf association, deaf people are, de facto, integrated into a society, and they exercise their social agency, albeit as a subgroup in which their NSL is the major unifying factor. Because this mini-society retains ties through interpreters with the larger oral/Spanish-dependent society, members are, in a sense, integrated into the larger society by being situated in the smaller group. There is no need, and in fact, no wish to disperse the members individually to integrate into the larger society to function in a hearing manner. (ibid.:97).
Polich is focused here on the relationship between deaf people and the larger hearing society.
She argues that this model views deaf persons as “social agents” rather than as remediable subjects who, given enough specialized training, can learn to act like hearing people(9). However, she notes that this second wave of deaf people, who ran and took part in the Deaf association, had had a different set of experiences with sign language and had also been exposed to very different ideas about its value and utility. While this new perspective originated outside of the deaf community in broader historical transformations, its effects within the community crystalized around this time to yield a significant contrast between “pro-sign language” people and the group opposed to them.
In 1992, sign language was officially permitted in deaf classrooms for the purpose of instruction (ibid.:72). Around this same time, sign language “became less an adjunct to oral speech” and slowly developed into the dominant mode of communication among deaf people (ibid.:96). In addition, politically charged efforts to document and standardize the language intensified, and in 1997, a dictionary of Nicaraguan Sign Language was published (ibid.:97). This kind of legitimation and subsequent standardization can only be accomplished given an internally differentiated social field, where deaf people view sign language as a means of taking up more powerful social positions. Once a full-fledged language emerged, these dynamics crystalized further, so that deaf people who could not use the language fluently were called NO-SABES or “know-nothings” and they were restricted to a limited set of social roles in institutional settings (R.J. Senghas 2003:270). One of the consequences of institutionalization has been the adoption of organizational paradigms with built-in asymmetries:
It is by and through the national Nicaraguan government that ANSNIC has its legal status as a recognized organization. ANSNIC must therefore follow the government’s guidelines that assume certain paradigms of organization. These include concepts of voting, accountability, and tax-exempt status. ANSNIC has adopted certain structures, roles, and offices, and these certainly have social implications within the Deaf community. As one example, the layout of the ANSNIC facilities and the differential access to these facilities ... makes certain individuals more influential...
These two observations together suggest that differential access to the social field aligns with local criteria for language competence, such that “better” signers are more likely to accrue authority. The establishment of an internally asymmetric field in which some deaf people had more authority than others was a prerequisite to the legitimation of the language. In the linguistic literature, there is a focus on the year in which groups of children entered the school and the age of individual children at the time. In addition to these factors, the establishment of an internally asymmetric social field and the legitimation of the semiotic system for position-taking in that field appear to be crucial conditions for language emergence.
2.2.2 Three Semiotic Systems: ISN, LSN, and Mímicas
Three distinct modes of semiosis emerged out of this history. From a socio-historical perspective, many factors are relevant. From the perspective of those interested in the innate capacities of the mind, only those factors that enable or constrain the ability of children to acquire a first language are relevant. Kegl et al. identify three distinct “cohorts,” each of which developed semiotic systems that were distinct from the others in fundamental ways (2001:187). Membership in a cohort is defined by two main factors: (1) the age at which the individual entered school and started interacting with other deaf people, and (2) the year in which they entered the school (ibid.). The students who entered the school at a younger age tended to acquire (or develop) more complex grammatical structures than those who entered the school later in life. This was due, in part, to the fact that a richer linguistic environment was available to students who entered the school in later years, since collective communication practices had had time to develop, and in part to the fact that younger children acquire language more quickly and more completely than older children (ibid.:197).
Three Spanish terms were appropriated by researchers and applied to the semiotic systems available to each cohort. All three terms, lengua, lenguaje, and idioma, can be translated into English as “language,” but in Spanish they have distinct meanings. A lenguaje can be any type of communication system, while an idioma is, more specifically, an official, national language (ibid.:181). The word lengua is a general term that can include lenguaje and idioma (ibid.). Kegl et al. distinguished between Lenguaje de Senas Nicaraguense (LSN) and Idioma de Senas Nicaraguense (ISN). The former, they argue, is a “peer-group pidgin or jargon between signers,” while the latter is a “full-blown sign language” (ibid.:181). Both of these systems are distinct from the idiosyncratic home-sign systems that individual deaf children develop within their families, which are called “mimicas” by Spanish speakers.10 At the time the research was conducted, there were no metalinguistic signs used by deaf Nicaraguans that mapped onto this set of terms; however, Polich’s interview data suggest retrospective metalinguistic awareness among some deaf people, and their reflections do not contradict these categories.
Several grammatical characteristics were examined across these three cohorts of signers in order to reconstruct the process of language emergence. Of all of these characteristics, “spatial modulations,” or the tendency for verbs to encode information by moving between points in space, became more central to arguments about language emergence than the others. This became the characteristic used as evidence for the linguistic status of ISN. The linguistic status of spatial modulations has been at the center of one of the most productive debates in the field of sign language linguistics more broadly. In order to understand what counts as language-like in the Nicaraguan case of language emergence, key moments in this debate are outlined in the following section.
10They also note a fourth “system,” which is a “pidgin” used between hearing and deaf signers--where “signers view themselves as speaking Spanish, and Spanish speakers view themselves as signing or using Mimicas” (ibid.:182). This phenomenon is recognizable given familiarity with the American Deaf community and is very interesting, but I take it to be on another level of communicative complexity in the sense that it combines the more basic systems. Therefore, I bracket discussions of it in my summary of this research.
2.2.3 What Counts as Language-Like in Nicaraguan Sign Language
“Spatial modulations” is a relatively neutral term that covers a range of phenomena that have been analyzed variously as linguistic, non-linguistic, or some combination of the two, depending on the theoretical approach taken and the subset of phenomena under investigation. The debate around spatial modulations in signed languages has been active since the inception of the field, and the issues raised by it are central to the question of what counts as language-like in the emergence of ISN.
In early work on Visual American Sign Language, three classes of verbs were identified,11 which differ from one another according to the types of affixes12 they take: “plain verbs,” “agreement verbs,” and “spatial verbs” (Padden 1990:119). Plain verbs are either uninflected or inflected for aspect (ibid.). An example of this kind of verb is the sign love (see Figure 2.1). In the sentences “I love you” and “you love me,” love is produced in the same way.

Figure 2.1: love in VASL

In contrast, “agreement verbs” inflect for person and number. An example is the sign give. For the sentence “I give you the book,” the sign begins near the signer’s body and ends near a point in space that is associated with the recipient of the book. If the receiver of the book were the signer, then the sign would move toward the signer’s body instead of away, as in Figure 2.2. Therefore, “the position of the beginning point of the sign varies depending on whether the person of the subject of the clause is 1person . . . or 2person . . . ” (Padden 1983:14). If there is more than one recipient, the sign will move from the body of the signer to a series of loci in space, thereby encoding number. So if “the number of the subject and object varies, the beginning and end points will likewise change in form” (ibid.).

11Since then, similar classes of verbs have been identified in almost every signed language that has been documented (Mathur and Rathmann 2012:137).

Figure 2.2: you-give-me in VASL

Finally, “spatial verbs” do not inflect for person and number; however, they have locative “affixes.”13 One example of a spatial verb is the VASL sign put (see Figure 2.3). The handshape is specified, as is a movement, but the direction of the movement varies depending on the spatial relations involved in the represented act of putting. Therefore, spatial verbs are said to encode locative relations.

Figure 2.3: put in VASL

Another kind of verb that has sometimes been included in this class is known as a “verb of location and motion,” which is considered a kind of “classifier.” A classifier that represents the path through which a vehicle moves is an example of a spatial verb. The 3-handshape in Figure 2.4 (listed as “CL:3”) is associated with arguments of the verb that belong to the semantic category ‘vehicle’. The movement of the represented vehicle, however, depends on the path the vehicle takes in the reported event. In an ASL dictionary,14 this classifier (“CL:3”) is described as follows: “Depending on the movement, you can use CL:3 to show the parking of a car, a row of cars, an accident, etc.” Notice the “etc.” at the end of the description. Unlike other dictionary entries, where a movement is specified (usually via arrows overlaid onto the image), no movement is specified here. This is because there is an open, rather than closed, set of possibilities for the movement parameter. This movement parameter is what Padden (1990) and Supalla (1982) call a “locative affix.”
Figure 2.4: Classifier as Spatial Verb

However, affixes are discrete units that come in finite sets. Therefore, the formal element with a locative function in spatial verbs like this one is not comparable to the locative affixes found in spoken languages. And yet, spatial modulations in the production of these verbs establish relations between the verb and its arguments. In this sense, they are a grammatical manifestation of “agreement.”15 This generates several analytic and theoretical problems, some of which will be familiar from the discussion of homesign.16 For example, are relations between the verb and its associated elements semantic relations? Syntactic relations? Or are they spatial relations, which are conceptualized by the signer as any other spatial relation would be, and therefore not linguistic at all? Interestingly, these semiotically complex forms have been treated as an indicator that a grammatical system is emerging.
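The analytic problem can be restated in terms of set size. The following toy sketch (in Python, with invented values; not a formalism from the sign language literature) contrasts a finite, listable affix inventory with the open-ended movement parameter of a classifier like CL:3:

    # Toy contrast between a finite affix inventory and the open-ended
    # movement parameter of a spatial verb/classifier. Values are invented.
    ASPECT_AFFIXES = {"HABITUAL", "DURATIVE", "ITERATIVE"}  # a closed, listable set

    def attach_affix(verb: str, affix: str) -> tuple:
        assert affix in ASPECT_AFFIXES  # membership in a finite set is checkable
        return (verb, affix)

    # The movement of CL:3 is whatever path the represented vehicle took:
    # an open set of trajectories in signing space, with no finite inventory
    # against which a "locative affix" could be checked.
    cl3_parking = {"handshape": "CL:3",
                   "movement": [(0.0, 0.0), (0.2, 0.1), (0.3, 0.1)]}  # arbitrary path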
2.2.4 Spatial Modulations in Nicaraguan Sign Language
While a range of linguistic features has been described in ISN, the literature is overwhelmingly focused on spatial modulations as a sure sign of emergent linguistic structure. Across researchers, and with time, different analyses have been proposed, with different implications for our understanding of how language emerges, where it comes from, and what sorts of principles govern its development. Kegl et al. treat spatial modulation as a kind of grammatical agreement between a verb and its arguments. They write, “the grammar of ASL allows a single verb to express subject and non-theme object agreement as well as person and number marking by spatial agreement of the verb with grammatically established referential index points in the signing space” (2001:190). They consider the structure underlying these relations to be an “abstract grammatical device” which the human mind is predisposed to develop. This device is not present in LSN, but is present in ISN, which suggests that LSN is not a full-fledged language, while ISN is. However, this device does not develop spontaneously, as there are similar structures in LSN that appear to be precursors. They explain:
LSN signers do not seem to use any abstract grammatical device to establish spatial indices, especially for people. [However] [t]hey do sometimes agree with real-world locations or paths that are in the shared knowledge-base of the signer and addressee (ibid.).
However, verbs cannot “agree” directly with real world locations. Although the terminology is not explicit, Kegl et al. indirectly recognize this distinction by assigning linguistic status to the former phenomenon, and precursor status to the latter phenomenon. One example of this shift involves the following transition: In LSN, a verb like speaking-to (a person) is linked to participants via a pointing gesture, and “the people referred to are generally present and available as the targets of these pointing gestures” (ibid.:190-1). The pointing gesture “sweeps” from one location to another to indicate who is speaking to whom. In ISN, the same verb is produced by moving from one location to another, and the pointing gesture drops out. This is a characteristic shift that took place in the transition from LSN to ISN (ibid.:191). This change is understood as evidence that an abstract grammatical device appears in ISN which was not present in LSN.
These conclusions follow from the idea that syntax is the most language-like of all linguistic phenomena. Senghas, for example, begins by stating that “one of the most central components of a language’s grammar is its means of expressing argument structure; that is, how subjects and objects are linked to their respective verbs” (2000 [1999]:679). Senghas notes that such relations are often established in signed languages via spatial modulations in the verb. She takes the directional movement within these modulations to be a “spatial morpheme”:
As in spoken languages, the concept of spatial morphological elements may be unfamiliar. As in spoken languages, developed sign languages append grammatical elements to words. Many signs are produced neutrally in a central location in front of the signer. By altering the direction of a sign’s movement to or from a non-neutral location, the signer adds a spatial morpheme. For example, in American Sign Language, nouns are marked as definite and specific by being indexed to a particular location in front of the signer; verbs then agree with their noun arguments by taking on these same locations. An agreeing verb will begin at the location assigned to its subject, and move to the location assigned to its object (ibid.:698).
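The mechanism described in this passage, which Senghas initially expected to find, can be made concrete with a toy sketch (in Python; glosses and locations are invented for illustration, and this is not a published formalism):

    # Toy model of locus-based verb agreement: nouns are indexed to locations
    # in signing space, and an agreeing verb moves from the subject's locus
    # to the object's locus. Glosses and locations are invented.
    loci = {}  # noun gloss -> assigned location

    def index_noun(noun: str, location: str) -> None:
        # A noun is marked definite/specific by being indexed to a location.
        loci[noun] = location

    def agreeing_verb(verb: str, subject: str, obj: str) -> dict:
        # The verb agrees with its arguments by taking on their locations.
        return {"verb": verb, "start": loci[subject], "end": loci[obj]}

    index_noun("WOMAN", "left")
    index_noun("MAN", "right")
    agreeing_verb("GIVE", "WOMAN", "MAN")
    # -> {'verb': 'GIVE', 'start': 'left', 'end': 'right'}: the woman gives to the man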
However, in initial attempts to describe structures like these, Senghas and colleagues found that the signers did not localize nouns in the ways they had expected. The verbs were produced with movements to the left and right of the signer, but no “loci” were established before or after the production of the verb. Therefore, Senghas reports, “We [ . . . ] asked whether these movements toward non-neutral locations were predicted by the semantic role associated with the nouns in the sentence” (2000 [1999]:698-9). In other words--they asked if the direction in the movement of the verb consistently mapped onto semantic relations such as “agent” and “patient.” In order to answer this question, research subjects were shown a video stimulus that included 22 signed sentences produced by the research subjects themselves (both cohorts). These sentences had been elicited during an earlier study, using a simple video stimulus that involved events like a woman tapping a man. Research subjects were asked to watch the sentence and then choose from a list of pictures on an answer sheet. After each sentence, the research subjects were asked if the direction in which the verb was produced made any difference for the interpretation of the sentence they had just watched (ibid.:701-3).
Senghas found that signers in the first cohort interpreted directional verbs as corresponding to a wider range of stimuli than the second cohort. A difference in the directionality of the verb did not correspond to a difference of direction in the stimulus. So if a woman tapped a man, or a man tapped a woman, the form of the verb, including its directional movement, was likely to remain the same. The second cohort, on the other hand, assigned a narrower interpretation to the directional movement of the verb, consistently associating it with the direction of the represented movement from the character’s perspective (as opposed to the signer’s perspective). These differences were also reflected in their metalinguistic judgements.
When asked ...whether the direction of movement in a verb made a difference in their responses, all four first-cohort subjects responded that a verb could be signed to the left or the right without changing the meaning of the sentence, and without affecting their responses. In contrast, all four of the second-cohort subjects responded that the direction in which the verb was produced did make a difference (Senghas 2000 [1999]:703).
Ultimately, it is this shift from a wider to a narrower interpretation (or an increase in “specificity”) that best describes the shift between the less and more elaborated semiotic systems. However, this is not exactly what Senghas was looking for at the outset, and it is not accounted for by any explicit theory of language. The goal in the beginning was to establish consistent relational patterns between the verb and its “subject” and “object”--all of which are syntactic categories. The explicit theoretical assumption was that this kind of syntactic relation is the most central component of a language’s grammar. However, in the absence of loci, which could be associated with the nominal elements, these relations could not be established formally. Instead of positing a zero morpheme, or a null argument, Senghas explored the possibility of assigning semantic roles to the lexical nouns in the signed sentence and establishing relations between those roles and the verb, much as Goldin-Meadow did (see section 2.1). However, no generalization emerged. Therefore, an even more basic notion of contrast (in the Saussurian sense) was appealed to. In a sentence where see and pay are both produced with directional movements to the left, signers in the first cohort would find two interpretations equally acceptable--either one person was seen and another was paid, or a single person was both seen and paid. Signers in the second cohort, however, only found the second interpretation acceptable. In the transition between LSN and ISN, a meaningless variation in signing became differentiated into two systematically contrastive forms with distinct meanings.
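Schematically, the change amounts to a narrowing of the set of acceptable readings. The following toy summary (in Python, with invented labels) restates the reported judgment pattern:

    # Toy summary of the see/pay judgment pattern: both verbs are produced
    # toward the same locus (e.g. to the left). Labels are invented.
    def acceptable_readings(cohort: int) -> set:
        if cohort == 1:
            # Direction is meaningless variation: co-reference is not forced.
            return {"one person seen, another paid", "one person seen and paid"}
        # Second cohort: a shared locus systematically signals a shared referent.
        return {"one person seen and paid"}

    assert acceptable_readings(2) < acceptable_readings(1)  # strictly narrower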
This constitutes the “emergence of a new grammatical structure,” which Senghas speculates may have originated in more “concrete” uses of space. Via metaphorical structuring of the kind found in Lakoff and Johnson (1980) and Taub (2001), these concrete uses of space were mapped onto more “abstract” uses of space for establishing relations between signs (R.J. Senghas 2003:527). For example:
the movement toward a location with the verb give indicates the recipient of a giving event. Perhaps child learners of NSL first developed conventions for physical, locative descriptions, and then used these to bootstrap into devices for grammatical relations (R.J. Senghas 2003:527).
Here we see that the theory of language that is in play has momentarily shifted away from syntax and toward a more fundamental, structuralist notion of contrastive opposition. Contrastive opposition is a relation between signs, and as in Saussure, this relation is considered “abstract” with respect to the undifferentiated conceptual and material substance it is differentiated against. Language emergence is associated here with this process of abstraction. However, unlike Saussure, Senghas speculates that the ground against which these distinctions emerge is itself differentiated. The two orders are linked via metaphorical mapping (17).
Following up on this idea that grammatical use of space derives from more concrete uses of space, Senghas identifies two main functions associated with spatial modulations: (1) expressing the participants of events, or as she says, indicating who, and (2) describing locations and orientations of referents, or as she says, indicating where. In order to determine if a who construction is in play, one must ask, “is the signing space used in a way that shows who did what to whom? For example, in a sentence that describes a man giving something to a woman, do signers use space to link the signs man and woman to the roles of giving and receiving?” (2010:292). Senghas answers these questions in the affirmative. In order to determine whether a where construction is in play or not, one must ask: do signers have a common system for representing objects and their locations? Do they have common signs for objects and common uses of signing space to locate referents relative to each other? For this, they must have consistent ways of “mapping between their spatial signs and physical locations in the world” (ibid.:296). The who construction is taken to be abstract, while the where construction is understood as “iconic,” and therefore more “concrete” and “closer to its gestural roots” (Senghas 2010:290). These are understood as distinct construction types; however, Senghas speculates that their origins are similar.
We do not doubt that both uses have their origins in the gestural reference to the locations of people and things. It is no surprise that we might describe something that is to someone’s right with a gesture to the right. Such a spatial reference was unquestionably adopted into the homesign systems that predated and fed into NSL [Coppola and Senghas, 2010]. It may even be the case that the argument structure constructions [the who constructions] initially adopted wholesale the forms used to describe spatial relations. That is, there may very well have been a time when she gave to him was expressed with a construction meaning she gave to the right (ibid.:299).
However, when the second cohort arrived, these two uses diverged and became two distinct types of construction (ibid.). Senghas asks, then, which came first, and concludes, counter to her initial intuitions (2003:527), that the abstract who construction came first. This suggests that the innate capacities of child language-learners have an important role in the process of language emergence.
The locative use of spatial modulation, however, is not expected to follow the same path of abstraction. That is because the locative forms are “iconic” and must remain that way in order to fulfill their function: “[M]uch of the form of such utterances is drawn from the structure of the world” (ibid.:291). What makes these constructions useful for communicating is that their interpretation is mediated not only by the relation of the sign to the world, but also by the relation of signs to other signs (ibid.). This relation between signs is accounted for by a “conventionalized device” that allows signers to determine “how space is being used in a particular utterance” (ibid.). Without such a device, “[a] single movement might be simultaneously to the north, toward the door, or to the right of the signer. The interlocutor must be able to identify which interpretation of the movement is intended” (ibid.:291). This “device” sounds like a grammatical structure, but appears to be identified only with the process of “conventionalization.” So here the linguistic (abstract) and iconic (concrete) dimensions of spatial modulation are linked via conventionalization--a fundamentally social process whereby arbitrary correspondences between form and meaning become stable over time.
The argument for the linguistic status of ISN, or “Nicaraguan Sign Language,” rests on the emergence of an abstract grammatical device; however, this device amounts to a conventionalized way of mapping signing space onto spatial relations in the “real world.” This involves relations between signs and referents as much as it does relations between signs and signs. The canonical example that is used in many works is the see and pay example, which is analyzed as a case of “co-reference” and “agreement.” The two signs co-refer to the locus by moving toward it in space, and in doing so, manifest agreement between both verbs and their shared nominal argument. This is presented as evidence that Nicaraguan Sign Language has achieved full-fledged linguistic status: “Signs produced in a common location now unambiguously indicated a common referent” (R.J. Senghas et al. 2005:301). R.J. Senghas and colleagues conclude that, “at this point, the construction could be used to link a verb to its arguments, a noun to its modifiers. Now a common spatial modulation could be used to mean that a single person was both seen and paid” (ibid.).
This argument raises problems that can also be found in the literature on spatial modulation more generally: Can a verb “refer” or “co-refer” to its argument(s)? How can the locus to which the verb refers be phonologically specified? If it cannot be phonologically specified, then it must be posited as a null argument paired with a deictic gesture as it is realized, which would require an interaction of syntax and the deictic system. If, on the other hand, there is a non-linguistic conceptualization of space underlying the grammatical structure, then what mechanism accounts for their relationship? Bootstrapping? Inference? Blending? Abstraction? Conventionalization? Lastly, what if the non-linguistic world which interacts with linguistic structures and devices cannot be adequately described via conceptualizations of the world outside of language, but rather must include additional elements and dynamics, which are governed not by strictly cognitive principles, but by social, historical, or interactional principles?
In a practice framework, an ambiguity between referents and arguments is a clear indication that a process of deictic integration is under way. In this case, the process leads to a narrowing of values that are retrievable from the deictic field of the language. Signers in the first cohort interpreted directional verbs as corresponding to a wider range of stimuli than the second cohort. Therefore, if the stimulus included a woman tapping a man, or a man tapping a woman, the form of the verb, including its directional movement, would remain the same. The second cohort, on the other hand, assigned a narrower interpretation to the directional movement of the verb, consistently associating it with the direction of the represented movement from the character’s perspective (as opposed to the signer’s perspective). Ultimately, it is this shift from a wider to a narrower interpretation (or an increase in “specificity”) that captures the shift between the less and more elaborated semiotic systems. In other words, a reciprocity of perspectives was established, which affected the organization of the deictic field. Directional verbs, or, in a practice framework, what we might call “deictic verbs,” retrieve values from that field. Over time, arbitrary restrictions on patterns of retrieval emerge. Ultimately, this process aligns the linguistic system with its contexts of use, including language-external modes of semiosis, which might otherwise be called “gesture.”
A Class of Verbs with a Gestural Component?
Scholars working in distinct theoretical frameworks have converged on two orders of phenomena that must be considered in any analysis of “agreement” verbs. Senghas calls these two orders “iconic” and “grammatical,” and also “concrete” and “abstract.” Following Jackendoff (2002), Mathur and Rathmann (2002, 2012) view these two orders as distinct modules, related via an interface between “spatio-temporal structure” and “the articulatory-phonetic system.” The first module is syntactic, the second is gestural, and they posit a pairing of the null non-first person forms with a deictic pointing gesture to account for the endpoint of the verb’s directional movement. Meier and Lillo-Martin (2012) address this semiotically complex aspect of agreeing verbs in terms of a tendency to “point.” Nearly all signed languages studied to date have a sub-class of verbs that work this way, and interestingly, as signed languages mature, both dimensions become more closely associated with certain functions and meanings, and these functions and meanings are coordinated with one another in increasingly restricted ways. Meir (2011) describes a process like the one recounted for Nicaraguan Sign Language, where static verbs plus pointing gestures are replaced by spatially modulating verbs. In a discussion of Meir’s results, Meier and Lillo-Martin write:
With historical change, the endpoints of directional verbs have ceased to be fixed--they have lost their lexical specification--and instead have become free to point to locations associated with arguments of those verbs [ . . . ]. The surprising conclusion is that, with time and with the emergence of morphosyntactic processes that are agreement-like on our view and on that of Irit Meir, ISL verbs (or at least the endpoints of those verbs) have in some sense become more gestural, not less. They point more (2012:154).
In the research on NSL, this pairing of “pointing gestures” with grammatical processes is, for some reason, associated with the systematization of “iconic” elements. However, pointing suggests an indexical, not an iconic, relation. More specifically, the functions of agreeing verbs that do not fit easily into a syntactic frame are canonically associated with deixis.
As will be discussed in section 2.3, the typical tripartite verbal system found in nearly all signed languages is not found in the second generation of a very new signed language called Al-Sayyid Bedouin Sign Language. Instead, there is a two-way split between spatial verbs and plain verbs. There are no verbs with a directional component, where that directional component serves either an anaphoric function or a syntactic function. This suggests that agreeing verbs derive, diachronically, from spatial verbs. If this is the case, then what we are seeing as signed languages mature is a tightening of linguistic and deictic relations. By tightening, I mean that the relations between sign-vehicle and referent are increasingly caught up in and coordinated with relations between signs and the categories to which they belong (i.e. Morris’s ‘universal signs’). What makes them more linguistic than spatial verbs is the relative density of the relations between the two orders of phenomena. This is what I am calling “deictic integration.”
In order to get some analytic purchase on this notion of deictic integration, two distinctions must be made at the outset. First, the deictic system must be distinguished from the deictic field (Bühler 2001 [1934], Hanks 1990). Prior to instantiation, deictic signs are highly schematic (Hanks 1990, 2005). When they are applied in the speech situation, they receive “field values” (Bühler 2001 [1934]:99). Field values are retrieved from distinct fields, including the symbolic field and the deictic field. The former inheres in the linguistic system, while the latter does not. Their symbolic meaning derives from oppositions in the language (Here is not there; I am not you), which accounts for definiteness of reference. Their indexical meaning derives from the deictic field, which accounts for directivity of reference. Bühler compares the deictic field to pathways, which extend out around the speaker, projecting a limited set of choices for activity. He compares deictic signs to signposts on those pathways. We use deictic signs to prevent wrong-turns, clarify potential ambiguities, or highlight one choice over a limited set of alternatives (ibid.). Therefore, the efficacy of deictic signs is primarily attributable to the deictic field, which restricts possibilities for interpretation prior to the instantiation of the deictic sign (also see Hanks 2005b:193-196). Second, processes and constraints that inhere in the deictic field must be analytically distinguished from the grammar of the language, more generally. Only then can principled relations be established.
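To fix ideas, the division of labor between system and field can be caricatured computationally (a minimal Python sketch under my own assumptions; the field contents are invented, and this is not Bühler’s or Hanks’s formalism):

    # A schematic deictic sign resolved against a deictic field at instantiation.
    SYMBOLIC_OPPOSITIONS = {              # system-internal: here vs. there
        "HERE": "proximal to speaker",
        "THERE": "distal to speaker",
    }

    def instantiate(sign: str, deictic_field: dict) -> str:
        # Definiteness comes from the opposition in the system; directivity
        # (which place, on which pathway) is retrieved from the field.
        assert sign in SYMBOLIC_OPPOSITIONS
        return deictic_field[sign]

    # One speech situation among many to which the same sign could be applied:
    field = {"HERE": "the kitchen we are standing in", "THERE": "the shed out back"}
    instantiate("THERE", field)  # -> 'the shed out back'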
In the cases of language emergence we have examined so far, phenomena organized by deictic principles have not been granted their own construct. They are backgrounded, and only called on as things that can fill in where needed to make the linguistic theory internally consistent. This is an effect of examining language emergence through a theory of language. In a broader semiotic frame, different kinds of semiosis can be distinguished from one another more easily. Once again, Morris (1971 [1938]) is useful in this respect because of the primacy he attributes to the “syntactical dimension” of the sign while also situating syntax in a broader semiotic frame. For agreement verbs in signed languages, the autonomy of syntax is at once the problem and the solution. For example, if syntax is autonomous, then every element in the sign must be phonologically specified; otherwise, it cannot be accounted for with the categories and relations that represent the linguistic system. Then again, because syntax is autonomous, the abstract relations can be peeled away, and the problem of phonologically unspecified forms is reduced to the insignificant difference between an argument and a null argument.
The Primacy of the Syntactical Dimension
Morris’s sign is composed of one triadic relation and three dyadic relations. The triadic relation consists of the designatum (D), the sign vehicle (S), and the interpreter (I). Each of these three aspects can be thought of as points that make up a triangle (see Figure 2.5); the lines that connect the points can be thought of as the dyadic relations (1971 [1938]:6).

Figure 2.5: The Triadic Relation of the Sign

The first dyadic relation is that of sign vehicle to object (S to D). This is the “semantical dimension.” The second dyadic relation is that of the sign vehicle to the interpreter (S to I). This is called the “pragmatical dimension.” The third dyadic relation does not complete the triangle, as one might expect. Instead, it represents the formal relation of sign vehicles to one another (S to S). This third relation is the “syntactical” dimension (see Figure 2.6).

Figure 2.6: The Dyadic Relations of the Sign

The reason there is no line connecting the designatum and the interpreter is that there is no unmediated experience. This appears as a problem to Morris. He states: “ ...It has become clear to many persons today that man--including scientific man--must free himself from the web of words which he has spun and that language--including scientific language--is greatly in need of purification, simplification, and systematization. The theory of signs is a useful instrument for such debabelization” (Morris 1971 [1938]:3). Morris wants out of the webs of words he is suspended in, but he knows that there is no such thing as immediacy, or pure sense-perception. Therefore, he goes in the other direction (abstraction). He wants to break the transparency of language by creating a technical descriptive language for those webs, and others like them. In order for this to work, however, the language of semiotic must apply universally to all language, and so Morris says, “Semiotic supplies a general language applicable to any special language or sign, and so applicable to the language of science and specific signs which are used in science” (ibid.).
Although Morris stresses the “three dimensional” character of his approach, and says that no one dimension should be emphasized over any other (1971 [1938]:10), he goes on to say that a sign (triadic entity) can still be a sign without a denotatum. It can also be a sign without an actual interpreter. Therefore, neither the relation of S to D, nor the relation of S to I are necessary. “It is not possible, however, to have a language if the set of signs have no syntactical dimension, for it is not customary to call a single sign a language” (ibid.). The line connecting the sign vehicle to the sign vehicle, addresses the question of whether or not you can have an isolated sign vehicle that is not a member of a system of sign vehicles. Morris says you cannot: “Certainly, potentially, if not actually, every sign has relations to other signs, for what it is that the sign prepares the interpreter to take account of can only be stated in terms of other signs” (ibid.:7). Therefore, in Figure 2.6, the meaning of “S” must be thought of not as “sign-vehicle,” but as a system of relations through which sign-vehicles are defined by their relation to other sign-vehicles, or “syntax”-- not the syntax of a specific language, but that of a more general language, which can only be discovered on the basis of its necessary consequences in specific languages.
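Read as a data structure, Morris’s claims about which relations are dispensable can be stated compactly. The following is a toy rendering in Python (my own, not Morris’s notation):

    # Toy rendering of Morris's sign: an actual denotatum and an actual
    # interpreter are dispensable, but the syntactical dimension (relations
    # among sign vehicles) cannot be empty if the set is to be a language.
    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class MorrisSign:
        vehicle: str                       # S: the sign vehicle
        designatum: str                    # D: the kind of thing designated
        denotatum: Optional[str] = None    # an actual object; may be absent
        interpreter: Optional[str] = None  # an actual interpreter; may be absent
        related_vehicles: Set[str] = field(default_factory=set)  # S-to-S relations

    def could_be_a_language(signs: list) -> bool:
        # "It is not possible ... to have a language if the set of signs
        # have no syntactical dimension" (Morris 1971 [1938]).
        return len(signs) > 1 and all(s.related_vehicles for s in signs)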
In established languages where syntax is the object of analysis, the analytic loop might run more smoothly from theory-internal logics of a general language to necessary consequences of that theory in specific languages. However, any argument for the emergence of a new language must necessarily posit a relationship between a system-internal logic like this and something else which is both prior to and semiotically distinct from that system (such as gesture, homesign, or a pidgin). If the two systems are taken to be of the same semiotic type, then the phenomenon becomes language change, not language emergence. This requires an explicit position on the relationship of syntax to phenomena that are, in some proportion, gestural, iconic, deictic, or otherwise semiotically distinct.
2.2.5 Deictic Integration in Nicaraguan Sign Language
So far, the tendency has been to posit a certain kind of abstraction or disassociation of syntax from the gestural phenomena it interacts with.18 In this section I have shown that spatial modulations, which have been used as the primary evidence for language emergence in Nicaragua, simultaneously express syntactic and deictic relations by integrating deictic elements and relations into the linguistic system in tighter and more restricted ways over time. This observation contributes to the overarching argument of this dissertation--that theories of language emergence should include an explicit theory of context, which does not skip over everything in between grammar and demographics. In particular, I argue for the importance of the deictic field, which is organized by shared modes of access and orientation, as opposed to strictly linguistic principles. The deictic field is not part of the linguistic system; however, in this section, I have shown that understanding the process through which linguistic and deictic elements are coordinated in tighter and more restricted ways is crucial to understanding processes of language emergence.
2.2.6 The Emergence of the Social Field of Nicaraguan Sign Language
In this section, I have also argued for a principled way of understanding the relationship between nascent signed languages and the social fields they grow up in. In the Nicaraguan case, there is a clear divide between linguistic and social analyses. From the psychologists’ perspective, the role of socio-historical phenomena is primarily limited to demographic data, including the age and year of entry into the school. However, Polich describes the emergence of an asymmetrical social structure within the Nicaraguan deaf community. Authority and legitimacy accrued to certain social positions and not others, and these asymmetries were institutionalized in the structure of national Deaf organizations, eventually influencing the schools as well. These are precisely the kinds of transformations that can be accounted for using the anthropological notion of a “social field,” which derives from Bourdieu’s practice theory and has since been applied to the analysis of language in social context (Hanks 2005a, 2005b). In this section, I have argued that in order for Nicaraguan Sign Language to emerge and become a full-fledged language, it had to become a legitimate means of position-taking in a specific, historically emergent social field. Close attention to naturalistic interaction among signers in that community would provide insight into the cumulative effects of position-taking on the disposition of language users in that community and the structure of their language.
2.3 Al-Sayyid Bedouin Sign Language
2.3.1 The Social Field of ABSL
Al-Sayyid Bedouin Sign Language (ABSL) emerged under a different set of social pressures than either homesign or Nicaraguan Sign Language. The incidence of deafness among the Al-Sayyid Bedouin is high, and many families have both hearing and deaf members (Kisch 2012:87). In a population of about 4500 people, approximately 130 are deaf (ibid.:90). In this context, hearing and deaf children are often exposed to the local signed language from birth. Therefore, Kisch calls ABSL and other signed languages like it “shared sign languages,” highlighting the fact that signing is not something that deaf people do exclusively amongst themselves. Rather, signing enables communication between hearing and deaf people.
Over the past 30 years, however, the sociolinguistic landscape has undergone many significant changes that have exerted pressures on how ABSL is used. First, separate schools have been set up for deaf and hearing children. The schools differ in the quality and focus of the education provided, and they are leading to a divergence in social networks. One of the effects of these changes is that the space shared by deaf and hearing people has been consistently shrinking (ibid.:110). Another effect is that Deaf Al-Sayyid women are increasingly marrying Deaf men from elsewhere, and Israeli Sign Language (ISL) is becoming the language used in the home (ibid.:111). When Deaf Al-Sayyid women marry Al-Sayyid men, their husbands are, with rare exception, hearing (ibid.). These patterns together lead to an increasingly significant split between the sign language that is used among deaf people (ISL) and the sign language that is used for deaf and hearing people to communicate (ABSL). The former is associated with an emerging deaf identity, or sense of “deafhood” (Kisch 2008), which is necessary for accessing broader, deaf social networks. When schools for deaf and hearing children were separated, non-kin networks became more central in mediating employment opportunities for deaf men, and when deaf women were employed, it was often in the schools themselves (ibid.:114). Kin-based networks tended to strengthen ties between deaf and hearing people, and the local sign language grew. Within the newer non-kin networks, these ties are becoming weaker, and the use of ABSL is becoming less frequent (ibid.).
Linguists interested in the emergence of ABSL have focused on the earliest available generation of signers, who grew up before formal education was made available to deaf children, and when it was still rare for hearing children (ibid.). The first generation of signers included 6 deaf individuals (Kisch 2012:101). This generation developed home sign systems within their families, and were only exposed to external signed languages in very limited contexts.19 The younger siblings, however, were exposed to the more elaborated home sign systems of their older siblings, since there was as much as 16 years’ separation between siblings (ibid.). In addition, the hearing people who acquire the language are bilingual in the local signed and spoken languages. Therefore, Kisch argues, ABSL cannot “be considered to develop without exposure to a language model” (ibid.:88).
Nevertheless, the structures described by linguists are distinct from the structures of surrounding spoken and signed languages. Therefore, despite the unspecified diachronic relation between the spoken language and the emergent signed language, a significant degree of autonomy appears to obtain. The second generation of signers is composed of 11 deaf signers (Kisch 2012:102). These signers did not grow up with older deaf and/or hearing signers in their homes. Kisch speculates, drawing on interview data, genealogical data, and social network analysis, that the parents of these children picked up some sign language from the first generation signers and relatives who learned to communicate with them, but for the most part, new homesign systems evolved independently in each family (ibid.:102-3). In addition, these homesigners came in contact with external signed languages, again in limited contexts.20 The third generation is increasingly bilingual, using both ABSL and ISL to communicate in their daily lives (Kisch 2012:104). In general, though, among the third generation, ABSL has become the language used for communicating with hearing family members and within extended kin-networks, while ISL is the language of school and work, and the language most closely associated with an emergent Deaf identity movement.
Within two generations, then, homesign systems became integrated with the social field that organizes marriage patterns, labor patterns, socialization, and, more broadly, the circulation of knowledge. However, like other signed languages that have arisen in similar circumstances (e.g. Zeshan and de Vos 2012, Nonaka 2007), this field is now shifting, and knowledge of ABSL is becoming less useful for taking up desirable social positions. This is leading to more restricted usage of the language, and could eventually lead to its attrition or death (ibid.). This suggests that a crucial element in the emergence and maintenance of a language is an institutional structure, or stable social field, which can be occupied via the use of the signed language. In the homesign case, no full-fledged language developed because homesign cannot be used to occupy a complex, internally asymmetrical social field. In the case of Nicaraguan Sign Language, a full-fledged language did emerge, and this hinged on the emergence of an internally differentiated social field, where institutional authority accrued to positions taken up via legitimate use of the signed language.
2.3.2 What Counts as Language-like in ABSL
When linguists began studying the structure of ABSL, there was almost no evidence available from the first generation of signers. Therefore, they focused on the second generation (Sandler et al. 2005:2662).21 In circumscribing a language-like object of analysis, many of the same problems that arose in the first two cases also apply to ABSL. As in the homesign case, the first evidence presented to support the language-emergence case was a robust word order, which, importantly, was distinct from the surrounding spoken and signed languages (Sandler et al. 2005). As in the Nicaraguan case, this pattern emerged fairly quickly, in the second generation of signers. As in both previous cases, the phenomenon is treated as language-like because it provides a way of “relating actions and events to the entities that perform and are affected by them, a pattern rooted in the basic syntactic notions of subject, object, and verb or predicate” (ibid.:2664). Unlike non-linguistic means of making such connections, syntactic systems have the “effect of liberating the language from its context or from relying on the semantic relations between a verb and its arguments” (ibid.:2665). In other words, the ability of the syntactic system to dissociate from the semantic and pragmatic dimensions of determining who did what to whom, what happened to what, and what got changed, is the hallmark of language.
Recall that in spatial modulations of verbs in signed languages, the autonomy of syntax caused problems for the phonological representation of certain elements of the sign, since some of those elements were gestural. In the homesign case, the representation of a nominal argument of the verb took the form of a deictic gesture directed at an actual object in the room. This causes no problems for the analysis, because the syntax has abstracted away from the sign-vehicle; the NPs do not need to be phonologically specified. This all points to a demotion of phonology in the range of phenomena that can count as language-like, since phonological specification appears optional.
The work on ABSL pushes further in this direction. These scholars find that despite the generally accepted assumption that duality of patterning is one of the basic design features of language (Hockett 1960), ABSL, in its second generation, has no duality of patterning (Aronoff et al. 2008). Instead of claiming that ABSL is, therefore, not quite a full-fledged language, they claim that the basic design features of language should be reconsidered. Their evidence for this claim is, interestingly, not linguistic:
In the absence of a structural definition of what constitutes a completely developed human language, ABSL’s functional versatility and the absence of any apparent difficulty in communication, combined with its acceptance as a second language in the community, lead us to conclude that it is a bona fide but very new human language (Aronoff et al. 2008:134).
This harkens back to Sapir’s claim that language is a “complete system of reference,” which is to say that language will do everything that users of that language need it to do (Sapir 1949[1934]:153). There is a certain seamlessness in the fit between the linguistic system and the world in which it is instantiated, so that no trouble in communicating can be detected. This is presumably not the case for home signers, or others who do not use a full-fledged language. In place of phonology, both “holistic” and “compositional” expressions are found (ibid.:135). They explain:
Although we do not dwell on it here, we find (especially in the narratives of older signers) frequent occurrences of depictions of entire propositions in a single unanalyzable unit. For example, in describing an animated cartoon in which a cat peeks around a corner, one signer used his entire body to depict the cat’s action. These holistic pantomimes are interspersed with individual signs. The individual signs contrast with pantomimic expressions in several ways: they are conventionalized, much shorter, confined largely to the hands (rather than involving the entire body) and express concepts that are members of individual lexical categories (e.g. noun, verb, modifier) and distributed accordingly in the syntax. This mixing of pantomime and words suggests that the rudiments of language may encode events holistically to some extent, but that compositionality is available as a fundamental organizing principle at a very early point in the life of a language (ibid.).
Because their explicit definition of language is based on a goodness of fit between the communicative activity of signers (or what they call “linguistic events” (ibid.)) and the world in which those activities unfold, both pantomime and compositional elements count as “linguistic expressions” (ibid.). This is consistent with their finding that ABSL had no duality of patterning until recently, so that a more direct connection between the sign-vehicle and the object to which it refers is permitted, while not compromising the linguistic status of ABSL.
2.3.3 Deictic Integration in ABSL
The earliest morphological process described for ABSL is compounding, and as in homesign, the compounds are composed of one characterizing sign and one deictic sign. For example, place names tend to be generated by compounding a sign that represents a typical piece of clothing worn in the area, or some other typical characteristic of the place, with a pointing sign that corresponds to the location of the place. The authors explain one case that involves a head scarf, typically worn in the place referred to, and a pointing gesture, which is glossed as the sign there:
The sign head-scarf is used as a single sign elsewhere in the language to refer to the kafiyeh commonly worn by Arab men throughout the region, but the compound form head scarf [plus] there, refers specifically to the Palestinian Authority (the West Bank and Gaza), and to cities located in those areas, such as Hebron. The sign long-beard describes facial hair, but in the compound long-beard-there, the form loses this specific reference and comes to mean Lebanon (Aronoff et al. 2008:146).
The order of the compounded elements is fixed--the deictic component is always word-final (ibid.). This consistent ordering of characterizing and deictic elements is an indication that deictic elements and relations are becoming increasingly caught up in and organized by the grammar of the language. In other words, deictic integration is contributing to the emergence of the morphological system of ABSL. Deictic integration can also contribute to our understanding of its emergent phonological system.
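The fixed ordering can be stated as a simple template. The following sketch (in Python, with illustrative glosses) encodes the deictic-final constraint:

    # ABSL place-name compounds as a fixed template: characterizing sign
    # first, deictic sign word-final. Glosses are illustrative only.
    def place_name_compound(characterizing: str, deictic: str = "THERE") -> list:
        # The deictic component is always word-final (Aronoff et al. 2008:146).
        return [characterizing, deictic]

    place_name_compound("HEAD-SCARF")  # -> ['HEAD-SCARF', 'THERE']: the West Bank and Gaza
    place_name_compound("LONG-BEARD")  # -> ['LONG-BEARD', 'THERE']: Lebanon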
Sandler et al. argue that unlike established signed languages, ABSL is only beginning to develop phonological structure.22 By phonological structure, they mean a system of meaningless elements, which combine according to particular constraints to form meaningful units in the language (Sandler et al. 2011:508). Evidence for the existence of such units in established signed languages has included minimal pairs, the predictable absence of logically and motorically possible signs, and predictable assimilation patterns that do not follow from mere coarticulation effects (ibid.:508-15). In earlier stages of research, the authors had administered three picture-naming tasks to 23 subjects in an effort to compile an ABSL dictionary. However, they found a wide range of lexical and formational variation (ibid.:517). Therefore, they returned to their data, this time in order to determine whether ABSL had any of the tell-tale signs of phonological structure present in established signed languages. They found very little evidence to support such a claim.
First of all, we have encountered no minimal pairs in our study of the language to date. While we can’t deny the logical possibility that minimal pairs are there but evading us, we find it striking that none have surfaced so far, in over 150 words of elicited vocabulary [ . . . ] hundreds of elicited sentences, and numerous narratives and conversations. Second, while constraints on the form of a sign are not absent, they are not strictly enforced. We interpret this as an indication that these constraints, shared as they are by established sign languages that have been studied, are articulatorily grounded, and become more strictly enforced as phonological organization emerges.
So, they say, it is “as if the signers are aiming for an iconic and holistic prototype, with details of formation taking a back seat” (ibid.). For example, the sign for lemon was produced by different signers using different handshapes, orientations, and movements. However, the variations are themselves meaningful in that they correspond to different ways of squeezing a lemon (ibid.:518). Another example is the sign for dog:
Of eleven signers, ten used the same lexical item, representing the barking mouth of a dog with the hand or hands. One signer represented a dog’s ears and paws, this exception proving the rule that dog was the same lexical item for the other subjects. Ten out of eleven is unusually high consensus on a lexical item and dog therefore gives us a good opportunity to observe phonetic variation. While the sign is iconically motivated, it is still lexicalized, in the sense that it conventionally selects a particular aspect of dogginess to represent: barking. [ . . . ]. Across the exemplars of dog in ABSL, there was a great deal of variation (ibid.:519).
Variation was distributed across high-level feature categories in established signed languages, such as handshape, selected fingers, location, and movement23 (ibid.:519-20). So, for example, in one instance, the sign dog was produced in the area of the torso, while in another, it was produced near the mouth of the signer. In established signed languages, these major body areas (head and torso) are contrastive. The authors argue that
[o]n the face of things, one might be tempted to suggest that it just so happens that these particular features are not contrastive in this language while other heretofore unattested features are contrastive. But we stress that this is unlikely, because differences in pronunciation such as those we exemplify here involve major feature categories . . . If the language does not exploit these broader categories to make distinctions, it seems unlikely that it will exploit finer distinctions. By looking for contrasts at higher levels of the hierarchy--comparable for example, to a contrast between voiced and voiceless states of the glottis or nasal and oral sounds rather than finer distinctions such as between coronal and palatal places of articulation--we are giving ABSL, a newly developing language, the benefit of the doubt, assuming that early contrasts would be at broader rather than finer levels of articulation . . . Even at the broader levels, we find non-contrastive variation and no minimal pairs (ibid.:520).
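The diagnostic at issue can be made explicit with a toy minimal-pair check over major feature categories (an invented Python sketch, not Sandler et al.’s actual procedure):

    # Toy diagnostic for phonological contrast: two signs form a minimal pair
    # if they differ in exactly one major feature category but differ in meaning.
    FEATURES = ("handshape", "location", "movement")

    def is_minimal_pair(a: dict, b: dict) -> bool:
        differing = [f for f in FEATURES if a[f] != b[f]]
        return len(differing) == 1 and a["meaning"] != b["meaning"]

    dog_at_mouth = {"handshape": "flat", "location": "mouth",
                    "movement": "open-close", "meaning": "dog"}
    dog_at_torso = {"handshape": "flat", "location": "torso",
                    "movement": "open-close", "meaning": "dog"}
    # Same meaning despite a major-category difference in location:
    # non-contrastive variation, as Sandler et al. report for ABSL.
    assert not is_minimal_pair(dog_at_mouth, dog_at_torso)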
Where signs in spoken languages can be broken down into meaningless elements, ABSL contains signs which, as a whole, correspond to an “iconic prototype.” The conceptual prototype is not systematized in the language, but it does represent regularities in experience, some of which become foregrounded and expectable. In a footnote, the authors explain: “Dogs are not beloved pets in the Al-Sayyid village. Rather, they are feared, and are chained near livestock to fend off intruders. It is no wonder, then, that the most salient feature of a dog there is its barking mouth” (ibid.:519). While iconicity can account for the relation of resemblance between the sign and the referent from the perspective of ABSL signers, iconicity does not explain why the barking mouth, as opposed to other aspects of the dog, would be selected as the relevant aspect of dogginess (why not the running paws, as in Israeli Sign Language, for example?).
In order to explain the selection of the mouth, the indexical relation between the sign, the object, and the conceptual representation of the object must be considered. According to Peirce, an index “is a sign which refers to the Object that it denotes by virtue of being really affected by that Object” (1955/1940 [1893-1910]:102). An index is not related by similarity or analogy like an icon is, but rather by association, either in space or in “the senses or memory of the person for whom it serves as a sign” (ibid.:107). For example, a weather vane is an index because it shifts according to the direction of the wind. In this same way, patterns on the surface of water can be an index of wind.24 In both cases, the “sign” is “really affected” by the object.
In any social world, things are next to other things. We are differentially affected by the things we live among, and these differential affections (or dynamical contiguities) cohere into patterns in everyday life.25 Therefore, as people in a particular place move through space, they have certain expectations about what they will encounter and how they will be affected. Insofar as these patterns of expectation are shared, they will tend to produce a convergence in the patterns of association and expectation that signers have, and this kind of convergence will influence the selection of certain aspects of the referent over others in the conventionalized lexical representation. The relation of resemblance (iconicity) that obtains between this aspect and the sign-vehicle is secondary. If Sandler et al. are right, and this convergence on a conventionalized lexical representation is a precursor to duality of patterning, then indexicality should be given a key role in processes of language emergence, and more specifically, deixis. It is not important that the new sign for dog resembles the dog, but rather that the process of creating a sign for dog is influenced by patterns in how people routinely encounter (or “access”) dogs in the course of an ordinary day.
These kinds of patterns give rise to “pathways” in Bühler’s sense, which accrue to the indexical ground of utterance, and in some cases, are incorporated into the deictic field, which supplies values to the deictic system of the language. Here again, deictic integration, or the coordination of deictic and linguistic elements in tighter and more restricted configurations, takes on a crucial role in the process of language emergence. Iconicity cannot explain why one aspect of the referent would be incorporated into the representation, over and against others. In contrast, deictic integration makes the selection of one aspect an ethnographically predictable choice, hinging on shared modes of access and orientation to the immediate environment, which cohere in local patterns of activity and exchange.
2.4 Deictic Integration and Language Emergence
In all three cases, the emergence of a language-like system corresponds to a tightening of relations between linguistic and deictic phenomena into more restricted configurations. In the homesign case, deictic and characterizing signs combined in increasingly predictable orders as the system matured. In addition to the role played by the innate capacities of the mind, the assignment of semantic elements to a given order relied on certain modes of access, such as shared knowledge, perceptual access, and shared patterns of use (e.g. both communicators are familiar with the routine use of an object, the location of the object is expectable for both, both can see the object, etc.). If no distinction is made between semantic content and these modes of access, then knowledge about where the shovel is usually stored in a particular house would need to be stored in the semantics of the language and associated with a pointing gesture. It seems more advantageous to assign a schematic meaning to the gesture (e.g. locative) and attribute the rest of the meaning to the modes of access available to both speaker and addressee in the deictic field. From there, one can ask how semantic and deictic elements are integrated into tighter and more restricted configurations, to yield more elaborated and more predictable communicative effects.
In Nicaragua, language emergence has been associated with the emergence of spatially modulated verbs. I recounted the finding that for a verb like speaking-to (a person), signers used to point to a person in the immediate environment, produce the verb, and then sweep the finger from one person to another to indicate who was speaking to whom. Later on, signers moved the verb from one location to another, incorporating the sweeping pointing gesture into a single, verbal sign. This is like agreement in the sense that relations are being established between a verb and entities that can be represented by nominal signs. However, the referents are not represented by nominal signs. Instead, they are linked directly to the verb via a deictic gesture. Positing a null argument is one way of addressing this problem. Another way, which I have put forth here, is to posit a process that draws linguistic and deictic elements into tighter and more restricted configurations as the language develops. Under this analysis, certain classes of verbs develop receptors, set to receive a limited range of values from the deictic field. Like a pointing sign, they cannot be interpreted until the sign has been applied to the speech situation and field values have been retrieved.
Deictic integration has also been important in the emergence of ABSL. For example, ABSL
has recently developed a productive morphological process whereby one deictic sign and one characterizing sign are compounded to produce place names. As these connections have become increasingly conventionalized, the order of the compounded elements has become fixed; the deictic component is word-final. Therefore, in the terms being developed in this dissertation, the consistent ordering of elements (in addition to changes and reductions in the movements of the signs) enacts the same kind of tightening of relations between deictic and linguistic phenomena that was noted in the NSL and homesign cases. In NSL, linguistic and deictic elements combined to yield a subset of verbs with a directional component. Agreeing verbs are generally assumed to be more linguistic than spatial verbs, because the deictic component has an anaphoric, rather than a strictly referential, function. It indexes a relation between linguistic elements, rather than a relation between a linguistic element and an element outside of language. In ABSL, only spatial verbs have been identified. This suggests that in the second generation of ABSL signers, the deictic components of spatial verbs are not as tightly integrated into the relations between signs as they are in more established signed languages, such as Visual American Sign Language.
Recall Fillmore’s claim that relations between a verb and its semantic elements are undergirded by “a set of universal, presumably innate, concepts which identify certain types of judgements human beings are capable of making about the events that are going on around them, judgements about such matters as who did it, who it happened to, and what got changed” (1968:24). In ABSL, such capacities no doubt were in play, but equally important are the kinds of access that participants have to objects and to other people in the routine patterns of their daily lives (see also Kisch 2012). These forms of access contribute to processes of conventionalization, which Sandler and colleagues note is far more central in language emergence than they had previously assumed (2011:536). Ultimately, in fact, they argue that
conventionalization among signers, and the automaticity and redundancy that go with it, underlie the emergence of a meaningless formal level of structure in the language of a community. As a particular sign becomes conventionalized, attention to the form-meaning correspondence is reduced, and the formational elements themselves self-organize, under cognitive and motoric pressures for ease of articulation, formal symmetry, and the like. An element that is automatically and conventionally part of some sign may become redundant in the sense that the meaning of the sign does not directly rely on it, and it can then become vulnerable to permutation under formal organization pressures such as ease of articulation (ibid.:537).
What I am suggesting is that an important part of conventionalization--including the automaticity and redundancy characteristic of form-meaning correspondences in language--derives from the patterns that organize the deictic field, or the modes of access and orientation through which speaker and addressee access objects in the immediate environment. These patterns are further embedded in a social field, which has taken shape around work, family, marriage, and school-related activities. This field has become internally complex and asymmetrical, such that ABSL can be used to access some social positions and not others (Kisch 2012). Therefore, in order for ABSL to emerge as a full-fledged language, linguistic elements have to be aligned with the deictic and social fields where the language is used. As these relations become more stable, and the language is more thoroughly embedded, it becomes more linguistic in nature. This means that a language is not strictly linguistic. Rather, a language coheres in the relations of embedding between linguistic, deictic, and social phenomena. Nevertheless, each category of phenomena requires a different analytic approach, since each is governed by distinct principles of organization. Therefore, they are distinguished initially, in order to draw principled connections between them, simplifying the linguistic analysis and preventing the misapplication of linguistic models to nonlinguistic phenomena.
Chapter 3
The History of the Social Field of TASL
In this chapter, I sketch the history of the social field of Tactile American Sign Language (TASL)(1). I show that sensory change is only one element in a complex set of factors that contributed to this process. A tactile language did not emerge simply because a group of people who were deaf and blind came together in the same geographic location. However, it was also not the case that DeafBlind people decided to invent a language. Rather, they set out to solve practical problems via political and social means. One of the many effects of those efforts was the eventual emergence of a new language. This chapter examines shifts in sensory orientation, communication, and language among DeafBlind people in Seattle as part of broader social and political dynamics, in order to understand the social foundations of TASL.
The Seattle DeafBlind community was established by the late 1980s, and yet, TASL did not diverge significantly from VASL until the mid-2000s. Therefore, the first question that must be asked is not why a new language emerged, but why it didn’t happen sooner. Much of this chapter aims to address this question by looking at the institutionally embedded social roles available to DeafBlind people, how they came to occupy those roles, and eventually, how social roles and relations were reconsidered by DeafBlind leaders, leading to the initiation of a social movement, which took root between the years of 2006 and 2010.
This movement, known as the “pro-tactile” movement, triggered a fundamental shift in what was imaginable for DeafBlind people. Instead of working toward improved resources for compensating for or coping with vision loss, DeafBlind people began to imagine a world that could be inhabited without compensation--a world that felt natural, concrete, and effortless. The pro-tactile movement started as a critique of the overwhelming dominance of sighted people in DeafBlind spaces. Almost immediately, though, critique gave way to the morass of what it would mean to establish a DeafBlind space. No one really knew. What kinds of practices would make a room “inviting” for a DeafBlind person? What would a meeting run for and by DeafBlind people look like? How could groups of DeafBlind people communicate without relying on sighted people to mediate? If sighted people were not so ubiquitous, what decisions might DeafBlind people make for the future of their community? Therefore, from the start, the scope of the movement was necessarily broad, incorporating everything from co-presence and reference to legitimacy, authority, and power. It was never a set of fixed “techniques” for communication.
Pro-tactile practices(2) are guided by what the movement’s leaders call a philosophy, which begins with the following axiom: legitimate knowledge can be produced from a tactile perspective without first passing through visuality. In a visual world shaped by sighted people, vision loss leads inevitably to alienation and subordination. Sighted people will always know more about the world and their perspective on it will always be more legitimate. However, given a tactile world shaped by tactile people, it becomes possible to understand visual worlds in tactile terms, and alienation is no longer inevitable.
Therefore, for leaders of the pro-tactile movement, the first move was not to create a bridge between DeafBlind individuals and the broader society, but to find a place away from sighted people where DeafBlind people could cultivate tactile sensibilities and modes of communication.

(2) On the topic of myths, taboos, and stereotypes about blind people, Frances A. Koestler (1976) describes the dual figuration of blind people in the popular imagination. On the one hand, they are figured as tragic and dependent, worthy of pity and charity. On the other, they are imbued with magical or extra-sensory powers (ibid.:7). She cites many examples, including a young woman who, it was claimed, could distinguish colors by smell (ibid.:5), and another who could distinguish them by touch (ibid.:6). Another woman could purportedly read the Bible, thanks to her “eyeless sight” (ibid.). These and many more cases were shown, in the end, to be hoaxes or misunderstandings, and, Koestler implies, had more to do with entertaining the public than with the lives of blind people. Koestler points out that “what most people continue to misunderstand, is that both acuteness of hearing and sensitivity to touch in blind people are not compensatory gifts of nature but the products of long, hard concentration and training” (ibid.:4). In other words, the sensory orientations of blind people are the outcome of practices which incorporate sensory dimensions. They are not reducible to a natural outcome of sensory capacity or change. Recognition of this fact is the starting point of this chapter. However, I am not only interested in showing that this is the case, but also in how particular practices were shaped by social and historical forces, and how these developments set the stage for the pro-tactile movement.
Prior to the pro-tactile movement, DeafBlind people rarely communicated directly with one another. Instead, they communicated via sighted interpreters. This meant that the field of engagement was organized along visual lines and accessed via compensatory strategies. Interaction was fundamentally non-reciprocal. People stood at visual distances from one another. They used visual attention-getting strategies (waving a hand in the direction of a person, for example). They used visual back-channeling cues, such as head-nods and eyebrow signals. They attended to the visual qualities of objects and the visual dimensions of encounters and represented those qualities and dimensions using a visual language. Although some DeafBlind people received visual signs tactually, the language, and the fields to which it articulated, remained visual. This was possible because DeafBlind people worked with interpreters to find ways of approximating visual ways of listening, interacting, and thinking. However, as vision was lost and visual memories faded, approximation became less and less effective. Therefore, greater vision loss meant greater exclusion from social life.
DeafBlind individuals did everything they could to avoid exclusion, and as part of this, powerful stigmas were established around everything related to touch. The pro-tactile movement works against these stigmas, insisting that tactility is not the problem, but the solution. However, simply changing the modality through which signs are produced and received would not have been enough. From early on, the leaders of the movement were calling for a broader shift in the way people oriented to their environment, their language, their bodies, and the institutionally embedded social roles they inhabited.
In order for these changes to take place, boundaries around what counted as appropriate and inappropriate touching had to be revised, and the norms that felt intuitive to sighted people had to be left behind. Once this was accomplished, tactile alternatives to head-nodding, attention-getting, and turn-taking could be established. Tactile communication in groups could be worked out. DeafBlind people could learn to discern qualities such as politeness, impatience, and attractiveness by evaluating tactile cues against new frames of social value. All of these developments were prerequisites for language emergence. In other words, the emergence of TASL as a distinct linguistic system followed from a reconfiguration of power relations, new frames of social value against which communicative behaviors could be evaluated, new structures of interaction, and a new tactile habitus(4).
While some of these changes happened slowly, there were key events that acted as catalysts.
In 2010, a series of 20 pro-tactile workshops was organized by DeafBlind leaders for 11 DeafBlind participants. Counter to custom, no interpreters were provided, and no sighted people were invited(5). Since these workshops, new communication practices have proliferated, along with discourses about their social significance.
The idea that DeafBlind people could develop their own communication practices and learn from one another, rather than from sighted people, was a major shift in thinking. Prior to these workshops, most communication-related instruction was provided to DeafBlind people by sighted people. Indeed, in a visual field of engagement, sighted people were the experts. In the pro-tactile workshops, DeafBlind instructors had to work hard to convince their students that in a tactile field of engagement, they were, in fact, the experts. Adrijana, one of the leaders of the movement and an instructor in the pro-tactile workshops, explained it to her student in the following way:
We need to teach sighted people our tactile ways. All this time, it has seemed like we’re slow to catch onto things. Sighted people are always thinking so hard about how to explain things to us. It makes so much sense for us to figure it out ourselves. We learn from each other really quickly. We don’t talk to each other as though things will be difficult to understand--saying things slowly and in perfectly broken down steps. The problem--the reason why they’ve done that all this time, is because they don’t know how tactility works.
They have no intuitive understanding of touch. They’re just more tuned in to auditory and visual aspects of things--all of their habits are based on sound and sight. So they aren’t the right people to try to figure out how tactile practices work. It really doesn’t make any sense for them to try to teach us how to communicate and how to relate to things. We’ve been working so hard to do it their way, but we can do better than that. We can meet half way by inviting them into our tactile world and showing them how touch works.
Adrijana is not saying that sighted people should be excluded from the DeafBlind community, or that they have nothing to contribute. There is nothing in pro-tactile discourses that suggests an attachment to separatism, authenticity, or identity politics. The focus is, instead, on the possibility of immediacy and the social and political futures riding on that possibility. In order for immediacy to be achieved, DeafBlind people have to have time and space to figure out how tactile communication works and what it means to be a tactile person. In Giddens’ terms, a process of “social integration” was required (1979:76-7).
In the passage above, Adrijana raises two problems. First, she points to the dominance of sighted people in the shaping of DeafBlind communication practices and argues that DeafBlind people are in a much better position to develop these practices, since it is easier for them to become attuned to the tactile dimensions of language and their environment. Second, she argues for direct communication between DeafBlind people, which had previously been rare. In this chapter I argue that both the concentration of communication expertise among sighted people and the absence of direct communication between DeafBlind people have historical explanations, and that understanding this history is crucial to understanding the emergence of TASL (6).
3.1 The Seattle Lighthouse for the Blind
In the Seattle DeafBlind community today, there are two main institutions around which the community has been built: the Seattle Lighthouse for the Blind and the DeafBlind Service Center (DBSC). DeafBlind people have moved to Seattle in waves since the mid-1980s. Most were able to do so because they were offered employment at the Lighthouse. Therefore, the Lighthouse has played a foundational role in the establishment of the Seattle DeafBlind community. However, this role is not reducible to the provision of jobs. The Lighthouse is a manufacturing company, but its mission has always included employment support and a variety of social services, in addition to employment opportunities. On its webpage(7), its mission is stated as follows:
[...] to create and enhance opportunities for independence and self-sufficiency of people who are blind, DeafBlind, and blind with other disabilities
This combination of manufacturing and social service is a distinctive characteristic of organizations like the Lighthouse, most of which began as “sheltered workshops for the blind.”
Sheltered workshops have played an important and contentious role in the lives of hearing blind Americans since the 19th century and are at the center of political discourses that have intensified since the beginning of the 20th century. In what follows, I draw on some of this history in order to sketch the scene that pre-existed the DeafBlind program at the Lighthouse. I argue that the inception of the DeafBlind program at the Lighthouse was a site for the convergence of Deaf and blind histories, social roles, and political dynamics. It was this complex and specific social field that eventually gave rise to the pro-tactile movement and to Tactile American Sign Language. Therefore, understanding these historical convergences is important for understanding this case of language emergence. The more general blind history recounted below is not meant to stand in for the history of the Lighthouse or the DeafBlind community, but rather, to give a sense of the broader social field that shaped both.
3.1.1 Sheltered Workshops for the Blind
The first sheltered workshop was established as part of the Perkins School (then called the Perkins Institution and Massachusetts Asylum for the Blind) (Koestler 1976:209). The sheltered workshop was a solution to a widespread problem: when graduates of the Perkins Institution sought jobs, despite their training and capabilities, they faced many obstacles. So in 1840, a separate work department was established in the school and was soon replicated in schools for the blind across the country (ibid.). Later, the work departments were transferred from the schools to voluntary organizations, and later still, to state agencies (ibid.). By the 1950s, they had been entirely transferred out of blind schools. However, they retained certain elements of their history. A school would be much more inclined to take responsibility for the moral and emotional well-being of children than to view them as laborers who could help turn a profit. This was also the case for the workshops.
The goal of these organizations was not to turn a profit, but to give blind people a sense of purpose and independence (ibid.). This view of blind labor also appeared in the 1930s, when blind people argued for a work program that would serve the same purpose for blind Americans that the Public Works Administration (PWA) served for sighted Americans. However, there was a parallel discourse that viewed the provision of such jobs as an act of charity. As the country stabilized and the PWA was shut down, the latter discourse prevailed. Blind labor was not primarily seen as something done in exchange for monetary compensation. Rather, it could be exchanged for “dignity” and “self-esteem” and was presented as an alternative to isolation. Monetary compensation (often minimal) took a secondary role in the arrangement (Koestler 1976:195).
By the 1950s, the sheltered workshops were well-established, but transportation was very limited, so blind people had to live nearby in boarding homes. Eventually, people who had not grown up blind, but had become blind later in life, came to live in these homes and be trained in “personal adjustment” and “work skills” (Koestler 1976:209). In this way, the workshops became vocational training centers as well. Several ambiguities were endemic to the institution from early on. Chief among them: it was not clear whether the workshops were intended to be temporary interventions that would help blind people find gainful employment elsewhere, or a refuge for people who lacked alternatives.
In 1908, there were 16 workshops nationwide, all of which produced a limited range of handmade objects including brooms, caned chairs, and woven goods. They employed a total of 583 blind people. These workers were paid
an average of just over $3.00 per week per person. It was hardly a living wage, even in those days. But then, workshops were not expected to yield a living wage; they were subsidized by their sponsoring agencies, and the blind person whose family could not supply the difference between his earning and his needs usually received a small cash supplement from the agency (Koestler 1976:210).
However, during World War I, several hundred of these workers were employed in war factories and paid significantly better wages. Their posts in the workshops were filled by “multi-handicapped people,” so when the war was over, there were two problems. First, it was no longer clear who should have priority in the workshops: many blind people had shown that they could work in industry, but it was not clear that blind people with other physical or cognitive disabilities were capable of the same, so perhaps the workshops should be reserved primarily for them. Second, blind people who had been working in industry were no longer willing to accept the low wages and poor working conditions that were common in the workshops (Koestler 1976:210). The same problems would arise during World War II, and answers to these questions would require further clarification as to the primary purpose of the workshops.
What should be the basic function of the workshop? Should it be primarily a training school to fit people for employment in open industry? Should it be a self-supporting production unit, able to compete in the open market with commercial firms? Should it be an outright social service, a work therapy setting for those blind people who could never realistically be expected to pull their economic weight? Should it combine all three functions? (Koestler 1976:210-11).
In the past, these questions were answered in contradictory ways, contributing to tensions between blind laborers and those making decisions that affected them. Answers to these questions also change depending on how they are interpreted and the historical context in which they are considered. For example, if people who were once considered incapable of working were suddenly able to pull their own weight in wartime, then a designation of incapacity can be understood as a way of removing competitors from a saturated labor market, not as a descriptive fact about blind people. However, some argue (though not in these terms) that the unwillingness of sighted people to hire blind workers is itself a social fact, which renders blind people unemployable. In this view, a distinction between social and physical reasons for unemployability is irrelevant.
Limited employment opportunity has been a central concern for blind people since at least the 1920s (Koestler 1976:9). In the 1930s, the situation became even more pressing, and three pieces of legislation were introduced to mitigate it: the Randolph-Sheppard Act of 1936, the Wagner-O’Day Act of 1938, and the Vocational Rehabilitation Act amendments of 1943 (ibid.:193). The push behind the Randolph-Sheppard Act began prior to the 1930s and gained force with the observation that the PWA provided work opportunities for millions of people, but much of that work could not be done by blind people. A supplementary national program was therefore needed through which blind people could be employed (ibid.:197).
Previously, in 1920, a law was passed ensuring that blind people were among the groups given priority in operating news stands in Federal buildings. This was a lucrative alternative to the limited range of “blind trades” that would otherwise have been available. The New York Association for the Blind soon implemented a program helping people access this new opportunity through interest-free loans and other forms of support (ibid.:193). According to Koestler, this was an important development leading up to the Randolph-Sheppard Act because blind people moved into the public eye, where they were “showcased” as examples of competent business operators and not merely tragic dependents. This led to additional opportunities for blind people in manufacturing and production as well as Federal civil service (ibid.:198).
Blind leaders focused their efforts on continuing to improve the public image of blind people, in an attempt to broaden employment opportunities. In 1937, Joseph Clunk was appointed to administer the Randolph-Sheppard Act, thereby becoming the first blind Federal civil servant (Koestler 1976:198). The Act required that at least 50% of those hired to administer it at the Federal level be blind as well, so Clunk was responsible for hiring the first blind Federal civil servants in the history of the United States. Clunk’s aim was to seize on the opportunities that the Randolph-Sheppard Act created, while not acquiescing to the presuppositions that made the passage of the act possible. Rather than appealing to the sympathies of employers, or asking for “concessions,” he argued that the limitations of blind workers could easily be overcome with a little imagination on the part of employers. Once employers could be convinced that particular jobs could be done by blind workers just as well as by sighted workers, blind people would be free to enter the labor market with no need to ask for charity. Furthermore, their labor could be exchanged primarily for money, rather than dignity.
3.1.2 From Sheltered Workshops to Big Business
The history of blind labor suggests that the possibility of work for blind people has more to do with ideological and economic conditions in a particular period than the physical capacities of blind people. Since the 1920s, the situation has fluctuated--improving and deteriorating as circumstances change in the labor market, in manufacturing in the United States, and elsewhere. However, in the late 1930s, a special place was carved out for blind labor in the “state-use” market to prevent blind people from being pushed out of their jobs every time one of these fluctuations occurred.
In the late 1920s, prison labor had started flooding markets, including broom manufacturing. Labor unions, manufacturers’ associations, and citizen groups all banded together to try to eliminate the unfair competition by restricting the sale of prison-made products to “state-use,” thereby removing them from the open market. One of the manufacturing associations suggested that the workshops for the blind be given priority in the production of state-use brooms. The workshops followed up on this. Though they weren’t given first priority, once the entire inventory of prison-made brooms had been purchased by Federal departments, workshops for the blind were allowed to bid for the remaining contracts (Koestler 1976:212). Workshops began competing with one another for work and, in doing so, started undercutting each other’s prices (ibid.). This led to worsening conditions for blind workers. It became clear that in order to address the problem, the workshops would need to secure federal broom business that did not require such fierce competition (ibid.:213). To this end, the Wagner-O’Day Act was passed in 1938. This act mandated that brooms, mops, and “other suitable commodities” be purchased from blind agencies at market price (ibid.:214). Two months later, the National Industries for the Blind was established to implement the Act.
In 1939, the first federal order was filled, and the 36 participating workshops sold $220,000 worth of brooms and mops (Koestler 1976:219). This was a positive outcome of the Wagner-O’Day Act as it had been conceived. However, with World War II, blind workers were one of many groups needed to meet production needs for the Federal government, and the Wagner-O’Day Act suddenly placed blind workers in a privileged position. State-use markets, which had once been marginal, were now booming, and the workshops had more work than they could follow through on (ibid.:220).
In 1940, only one year after the National Industries for the Blind was established, workshop sales rose from $220,000 for 36 workshops to $1 million for 44 workshops (ibid.:220), and average sales for the duration of the war were $8 million annually. In response, workshops expanded and, well in advance, began to plan for post-war changes in demand. By the time the war ended, the rapid decline in Federal sales was already being offset by a rapid rise in commercial sales. By 1960, 62 NIB-affiliated workshops had reached $24 million in sales, $8.7 million of which was earned through sales to Federal departments. From 1971 on, the military would be included among the many Federal departments required to give preference to organizations that employed blind laborers. Nevertheless, military cutbacks and a more general recession began in 1969, and the early 1970s were fraught. Koestler writes:
What happened to NIB during this troubled period constituted more than operational and financial reorganization. There was a change in direction, away from the toe-to-toe competition with profit-making industry which had been the main thrust during the Sixties and back to the basic purpose of services aimed at giving blind men and women maximum opportunity for self-support through constructive use of workshop facilities for vocational training and employment (1976:226).
However, over the previous several decades, vocational rehabilitation services had grown, and blind workers had been placed in jobs in open industry. Those who were still employed by workshops were mostly those with multiple disabilities (1976:226). While employing blind people had always required equipment modifications, the new demographic required many more services. Koestler writes:
“Brought into play were medical, psychiatric, and psychological testing; individual and group counseling; assistance with mobility and with skills of daily living; recreational services; social work help with family relationships, housing, and other problems” (1976:226).
These changes coincided with a nation-wide emphasis on standards in training methods, required qualifications of staff, construction of facilities, and operating practices and procedures in the human services (ibid.:227). One of the ambiguities about the function of sheltered workshops and the status of those employed by them emerged again as a problem around this time.
To the sponsoring agencies and the taxpaying or contributing public which financed the workshops, the people who worked in them were subsidized clients of a non-profit social service. Many of the people, however, thought of themselves as employees who earned by means of their labor and were therefore entitled to the
same rights and benefits as all other workers: minimum wages, unemployment insurance, paid vacations and various other fringe benefits. While many of the more enlightened workshops did, in fact, provide such benefits, others were guilty of substandard work practices if not outright exploitation. Even these, it should be said, were not necessarily acting callously but out of differences in viewpoint as to what workshops were designed to accomplish. Those who believed workshops should operate as self-supporting entities, neither making a profit nor requiring subsidy, attempted to hold on to their best and most productive workers, making little or no effort to move them out into open industry. In such shops the less capable workers who could not earn their keep were left to fend for themselves (Koestler 1976:227).
If the people who worked in the workshops were employees, they had certain rights. If they were clients of a non-profit social service organization receiving training, therapy, and support, these rights did not necessarily apply. For example, “[s]ome were paying low trainee wages to persons employed under a vocational rehabilitation plan and kept such persons in trainee status for unduly long periods” (ibid.). It was also claimed that Vocational Rehabilitation (VR) counselors contributed to the problem by using the workshops as an easy solution for people they thought would be difficult to place (ibid.). Once they referred them to the workshops, they no longer attempted to place them elsewhere, so the workshops became a kind of dead end (ibid.).
In 1966, against opposition from sheltered workshops, amendments to the Fair Labor Standards Act were passed, mandating that employees of sheltered workshops be paid 50% of the minimum wage (ibid.:228). There were, however, classes of workshop clients who were exempted from this requirement: those who held trainee status, those who were “so severely handicapped that their earning capacity was severely impaired” (ibid.), and those employed in “work activities centers” (ibid.). The latter were intended for people who were deemed incapable of productive labor, and provided therapy, support, and activity, as opposed to work (ibid.). Although a minimum wage had been established, many other standards and benefits were denied, including unemployment insurance and collective bargaining rights (ibid.).
In 1971, with the amendment of the Wagner-O’Day Act, workshops for the blind were no longer strictly for the blind. Their privileged position in production for the federal government was opened up to workshops that served people with any kind of disability, not only blindness (Koestler 1976:229). This created an important opening for DeafBlind people. On the one hand, there were more jobs available for them, since hearing blind people had moved increasingly into open industry; on the other hand, there were fewer internal barriers to broadening the range of accommodations and services that could be provided, such as interpreting services.
Together with various state agencies, the Seattle Lighthouse for the Blind would become central to the lives of many DeafBlind people. Their housing, medical, personal, and employment-related needs were often addressed via the Lighthouse. In order to receive these services, they had to take on roles given by the organizations that provided the services, and in doing so, they were shaped by those organizations. DeafBlind subjectivity in Seattle has emerged, since the 1970s, as something unique that is irreducible to either of its constituent terms. In order to understand this process, I begin with an account of how blindness organizations, including those like the Lighthouse, have shaped hearing blind subjectivities.
3.1.3 The Making of Blind Men
In The Making of Blind Men, Robert A. Scott examines the socialization of blind adults through their interaction with the “large, intricate, multimillion-dollar national network of organizations, professional specialities, and programs for blind people” (1969:1). Many of these organizations, including state agencies, have their roots in charity organizations like sheltered workshops, where the boundaries between givers and receivers are firm. Scott describes a similar dynamic in the support apparatus available to blind people in the 1960s. Boundaries between professionals providing services and those receiving them were clear, and the dynamic between them, as Scott described it, was one of conversion and domination that left blind people with a very limited repertoire of potential social roles (ibid.:71-89).
According to Scott, when blind people first seek help from an organization for the blind, they often have a clear idea of what their problems are and what kinds of help they are looking for. Some are experiencing difficulty reading, and would like to learn how to access texts in large print. Some would like help with household chores that have become difficult with deteriorating vision. Some would like to learn how to use a cane. However, the “workers for the blind,” as Scott calls them, have a very different idea of what their clients need. He explains that the professionals
regard blindness as one of the most severe of all handicaps, the effects of which are long-lasting, pervasive, and extremely difficult to ameliorate. They believe that if these problems are to be solved, blind persons must understand them and all their manifestations and willingly submit themselves to a prolonged, intensive, and comprehensive program of psychological and restorative services. Effective socialization of the client largely depends upon changing his views about his problem. In order to do this, the client’s views about the problems of blindness must be discredited.
What appears at first to the client to be a need for practical guidance is seen by the professionals as a small manifestation of a much larger problem. An attempt to learn large print becomes a battery of psychological tests. An attempt to learn to use a cane becomes a long-term program of “testing, evaluation, and training” (Scott 1969:78). What promised to be a resource for learning seemingly simple skills becomes a slow and complex process of socialization. According to Scott, there are various rewards and punishments for adhering or not adhering to these programs, which seek, first and most fundamentally, to disabuse the client of their misguided impressions regarding their condition.
Scott distinguishes between two general approaches to “blindness work.” The first he calls the “restorative approach” (1969:80-84), and the second he calls the “accommodative approach” (ibid.:84-89). The restorative approach assumes that most people who become blind can return to a life much like the one they had prior to becoming blind. However, in order to succeed in doing so, the blind person must come to terms with a “life crisis” and be trained in various modes of “adjustment” and “rehabilitation” (ibid.:83). This process includes “training the other senses to take over the role of sight; training in basic skills and the use of various mechanical devices; restoring the sense of psychological security; and assisting the individual to meet the prevailing attitudes of the society toward him” (ibid.:82). Scott points out that the approaches imposed by the experts often do not coincide with those of the client. Ideas the client might have had for improving their own prospects are not taken into consideration. Therefore, the knowledge acquired by the client can, in addition to being useful, also act as a limit. Or in Scott’s words, “the choice of compensatory skills around which the theory revolves means the exclusion of a spectrum of other possibilities” (ibid.:84).
The restorative approach seeks maximal integration in the sighted world. However, proponents of the accommodative approach point out that the feasibility of integration changes depending on many large-scale historical, economic, and social factors. Therefore, obstacles to gainful employment and social integration in other domains can be significant. To address this problem, accommodative organizations establish special environments that accommodate blindness. They install special auditory signals in the elevators, braille displays on computers, and so on. Some arrange special transportation, and provide foods in the cafeteria that are not awkward for blind people to eat. Social activities, such as “bingo games,” are organized, and sighted people are available to monitor the game and do anything the blind person is not able to do for themselves (ibid.:84-5).
In manufacturing companies that take an accommodative approach, the production method will often be engineered with the disability in mind, so that “there is little resemblance between an average commercial industrial setting and a sheltered workshop. Indeed, the blind person who has been taught to do industrial work in a training facility of an agency for the blind will acquire skills and methods of production that may be unknown in most commercial industries” (ibid.:85).
In accommodative settings, the aim is not to prepare blind people for work outside of the agency, but to help clients organize their lives around the agency or organization as a permanent solution to a completely disabling set of circumstances (ibid.:85). These circumstances include the physical fact of blindness, but also other factors, such as the widespread unwillingness of hearing sighted people to hire disabled workers. After many years in such an organization, the blind person is likely to be maladjusted to the outside world, and therefore, “has little choice but to remain a part of the environment that has been designed and engineered to accommodate him” (ibid.:85-6).
These two perspectives shape the field that blind people must occupy when seeking services, and a finite set of social roles emerges: the “expedient blind person,” the “true believer,” and the “professional blind person” (Scott 1969:86-7). The expedient blind person makes a conscious effort to perform the role expected of him in the presence of sighted experts in order to gain access to resources, but sees it as a performance that can be abandoned. The true believer is a blind person who actually experiences the emotions that the experts require of them (ibid.:87). They express emphatic gratitude to the organization, and they genuinely believe that they would not be able to live without it (ibid.). The professional blind person lives almost entirely within the network of organizations and agencies through which they have been socialized, and has very little contact with anyone outside of it (ibid.). The professional is often an employee of a blindness organization, and their employment is understood as an act of goodwill or charity on the part of the organization.
3.1.4 “Integration” from a Deaf Perspective
The split that Scott identifies between agencies oriented toward full integration of blind people into society and those aiming to accommodate them has been highly politicized among blind Americans. However, many members of the Seattle DeafBlind community had never come into contact with blind agencies or blind people before moving to Seattle. In the Deaf worlds they had come from, nothing was valued more than access to a community where sign language was used. For this reason, one of the main thrusts of political discourse among Deaf Americans has been to argue against so-called “integration” in deaf education.
Precisely counter to blind politics, Deaf political discourse has focused on the detrimental effects of deinstitutionalization, integration, and mainstreaming, since these moves often mean isolating deaf children in schools full of hearing children, cutting them off from any perceptible language, and therefore from normal patterns of socialization (e.g. Cleve 2007, Keating and Mirus 2003, Lane et al. 1996). As I describe in section 3.3.2, the Lighthouse was often apprehended by DeafBlind people as a place where the effects of blindness could be held at bay, and visual communication and ways of life could be recovered, if temporarily. Work was a means to that end, and the labor itself was not politicized in the way that it is among blind people.
However, 20 years later, the pro-tactile critique points to an asymmetric distribution of expertise that sounds strikingly similar to Scott’s critique. Adrijana and Lee, two of the central leaders of the movement, have consistently argued that the dominance of sighted people in matters of DeafBlind communication has undermined tactile modes of knowledge production. This asymmetry in knowledge production is comparable to asymmetries Scott describes, which lead to direct conflicts between the forms of knowledge produced by blind people on the one hand, and by the people providing services to them on the other. In the next section, I look at how the institutional structure of the Lighthouse may have affected the distribution of expertise in the DeafBlind program, and how these and other factors shaped communication practices in the DeafBlind community.
3.2 The DeafBlind Program at the Seattle Lighthouse for the Blind
The Seattle Lighthouse for the Blind, like other organizations of its kind, was once a sheltered workshop, and over the years it has grown and diversified in terms of products and workforce/clients (Rochester 2004). However, unlike the others, in 1976, the Seattle Lighthouse established an employment program specifically for DeafBlind people(8). In order to understand the pro-tactile movement and its effect on language and communication, I focus on two achievements in the early history of the DeafBlind program. First, Visual American Sign Language was established as the primary language of the community. This was not an obvious or inevitable development. In many other places where DeafBlind people are socially and politically organized, spoken English, paired with amplification systems, is the primary mode of communication. Second, conventions for mediated group communication began to be established, making it possible for DeafBlind people to meet in groups, as opposed to being limited to one-on-one communication. These important changes happened within the institutional structure of the Lighthouse, with influence from Deaf and sighted people who had not previously been involved with blind people or the organizations and agencies that serve them. Many of those people were affiliated with or trained in the Interpreter Training Program at Seattle Central Community College, and/or were members of the Deaf community.
3.2.1 Interpreter Training Programs
Seattle Central Community College established a program for Deaf students in the 1960s and an Interpreter Training Program (ITP) in the 1970s. According to Laura, a Deaf student who was there in the late 1970s, there were about 100 Deaf students enrolled at the time. Some took two years of general requirements and then transferred to a four-year university, such as Gallaudet. Some learned technical skills such as boat-building or mechanics. The Deaf program and the ITP were housed in the same building, so there was a lot of interaction between hearing and Deaf students. Laura said that
[l]ater, it became really common for people to get together in the cafeteria. And people didn’t care if you were Deaf or hearing, as long as you were signing. It was a really thriving social scene. That’s what it was like back then. And interpreting services was in the same building, too.
Early on, when DeafBlind people moved to Seattle to work at the Lighthouse, they were among a very small group. Given the diversity in their language backgrounds, it was likely that they would either be unable to communicate with other DeafBlind people, or that they would have nothing at all in common with them and would not feel compelled to communicate with them.
Seattle Central Community College was an important resource for those people, broadening the pool from which potential interlocutors, friends, and communication supports could be found. Early on, ties between the two organizations were informal, but over time they became stronger. First, a small number of specialists with Deaf-related expertise who were affiliated with Seattle Central in some capacity were hired at the Lighthouse in permanent positions. From the very beginning, this included Deaf and hearing people.
Next, the ITP at Seattle Central started encouraging (and later requiring) their students to volunteer in the DeafBlind community at events that were part of the DeafBlind program at the Lighthouse. This mutually beneficial relationship, which was forged in the 1970s, has been very important throughout the history of the DeafBlind community for maintaining the pool of interpreters available to work with DeafBlind people. In the late 1990s and early 2000s, the relationship became weaker, and students were not being asked or required to volunteer in the same ways. This trend continued further when a private ITP in Seattle and the Seattle Central ITP both closed, one after the other, due to changing standards in the national certifying organization for interpreters, and other factors.
In 2010 and 2011 when I was conducting my fieldwork, it was clear that there would soon be no ITP at all in Seattle proper. These changes contributed to a severe interpreter shortage in the DeafBlind community, which was only expected to worsen. Already, DeafBlind people were having to cancel or postpone events due to a lack of qualified interpreters. When given a choice between waiting and communicating without an interpreter, some chose the latter, and in doing so, were forced to develop new communication practices.
3.3 Why Didn’t a Tactile Field of Engagement Emerge Sooner?
When communication specialists and interpreters came to work at the Lighthouse in the 1980s, they did so in a variety of capacities. Although their training focused on the history, culture, and language of Deaf people, they had to learn how to extend their expertise to include things that would be relevant for Deaf people who were going blind. Some things required improvisation, while others fit fairly neatly into the structures, categories, and practices that were already in place. For example, one graduate of the Seattle Central ITP was hired to teach “independent living skills,” which is a recognizable category among blind people. Some of the things that would normally be included in such a class would be instruction in how to cook without vision and instruction in reading and writing in Braille. The Department of Services for the Blind (DSB) provided these services, but only in spoken English, since most of their clients were hearing. When the number of DeafBlind people in Seattle started growing, it was cheaper and more effective to train an ASL user to provide the training directly than to hire interpreters, and DSB provided the funds.
These techniques or strategies were taught, for the most part, by sighted experts to adults who had become blind. Given this institutional structure, tactile reception of ASL fit in easily as an additional technique or strategy that could be used to compensate for vision loss. Just as Braille is a tool that helps people access written English, tactile reception was treated as a tool that could help people access Visual ASL. This alignment of tactile reception with services provided to blind people may have contributed to the sense that Visual ASL could be detached from the visual channel in which it was produced and received, as well as from the visual worlds and practices that had shaped it. On the one hand, there was a language. On the other, there was a means of adapting that language using compensatory strategies. In combination with the lack of direct communication between DeafBlind people, this distribution of expertise may have been one factor that contributed to the maintenance of a visual field of engagement, rather than the establishment of a tactile one.
3.3.1 Moving to Seattle from Elsewhere
Another factor preventing the emergence of a tactile field of engagement was the fundamentally visual orientation of DeafBlind people prior to their arrival in Seattle. People with Usher Syndrome, for example, were used to communicating in visual modalities while strategically compensating for their loss of vision. While living elsewhere, they had learned to linger in the back of the room, where their tunnel vision would capture a wider swath of activity. In conversations, they stood far away from the person they were talking to so they could see both the hands and the face. When more than one person was involved in a conversation, they looked for cues to know when and in what direction to turn their heads. When this became impossible, they honed their skills of inference and tried at least to keep up the appearance of participation. When neither approach worked and even appearances couldn’t be maintained, they limited themselves to one-on-one conversation.
Slowly, entire categories of experience were deemed inaccessible: staying out past dark, going to parties, meeting friends in restaurants or bars with low lighting, and so on. If this process went too far, people became withdrawn and isolated. Once a person has become withdrawn and isolated, it becomes harder and harder to re-establish contact with the outside world. People forget how to behave in socially recognizable ways fairly quickly, their strange behavior drives people away, and isolation becomes self-perpetuating. People who move to Seattle do so, at least in part, to avoid such cycles.
Upon arriving in Seattle, DeafBlind people encounter a hopeful situation. They find others who are familiar with their experiences and who want to be part of a better future. They also find an army of interpreters trained to provide visual information and otherwise facilitate communication. With interpreters, they enjoy renewed access to some of the categories of experience that had previously grown inaccessible. If they had stopped joining group conversations, now they could do so with an interpreter. If they had stopped going out past dark, now they could do so with an interpreter. In addition, the strategies they had for maintaining visual communication practices became legible. In Seattle, in addition to making up part of an elaborate compensatory apparatus, these strategies also constitute ways of taking up recognizable social positions such as “tunnel vision person.” Outside of the DeafBlind community, they are more likely to be interpreted as idiosyncratic behaviors that mark a person as deviant or different.
Sighted and DeafBlind people together take part in building the compensatory apparatus. In Seattle it has been part of the common sense shared by sighted and DeafBlind people alike that if you are talking to a tunnel vision person, you have to back up. Everyone wears clothing that contrasts maximally with the color of their skin. People with light skin wear black, navy blue, or dark grey. People with dark skin wear white, or pink, or teal. That way the signs stand out against their clothes and tunnel vision people can go on longer using visual reception. Sighted people with ties to the DeafBlind community often carry contrastive clothing with them in case they run into a DeafBlind person, and DeafBlind people almost always wear contrastive clothes (so much so that they occasionally wax nostalgic about a time when they could wear red or polka dots).
There are also interactional conventions for turn-taking so people cue one another when collective focus shifts. Everything is geared toward maintaining visual communication practices as long as possible, which is a relief to people who had previously been out there in Deaf communities trying to fill in the blanks, bridge the gaps, and keep up appearances with less and less success. In Seattle it is possible for familiar, visual sensory orientations to be kept intact a little longer. Therefore, for many of the people I interviewed, moving to Seattle was not a move toward tactility, but a way of postponing blindness. For those in the earlier generations especially, a great deal of negativity and fear had accrued to blindness. The promise of postponing it and the isolation it threatened was better than most could have hoped for--even if one day they would have to give up on vision entirely and “go tactile,” thereby becoming a “tactile person.”
3.3.2 Growing up with Usher Syndrome in the ‘60s and ‘70s
When people in the older generations were told they would go blind, they couldn’t imagine how life could go on at all. No one explained to them what they could expect or how they might cope. When people did suggest ways of coping with blindness, the suggestions were often very unappealing. For example, two sisters with Ushers, who had been living in Seattle since before a community formed there, reportedly sought out advice from a prominent Deaf teacher in the Seattle Deaf community about what to do when they lost their vision. They were told that once they were blind, they couldn’t sign anymore. They would have to sign smaller and smaller as their tunnel of vision narrowed, and in the end they would have to switch to fingerspelling. Whether they were given this or other scenarios, blindness, it seemed, would be even worse than what they had already experienced.
In many cases, growing up with Ushers meant being picked on by other kids, being called clumsy, being treated as not smart or not capable because of misunderstandings surrounding vision, and so on. Blindness was what made you not a good athlete, not a graceful person, not smart--but it was not clear, in a positive sense, what life might be like as a “blind Deaf person.” Against this background, Seattle appeared as a place with hope for a collective future and energy for building it. Blindness was not stigmatized the same way that it was in the broader Deaf community. There were recognizable social roles to be inhabited and people to hang out with. Particularly in a time when access to information was limited, the phenomenon of the DeafBlind community came out of nowhere as a viable alternative to many of the effects of blindness--though not exactly as a place where blindness could be embraced. Counterintuitively, cultivating a “DeafBlind” identity led not to a shared world suited to a tactile mode of experience, but rather to services and social roles that would keep impending blindness at bay. Daniel’s story illustrates much of this.
Daniel grew up in a residential school for the Deaf in the 1970s. After graduation, he went to see an eye doctor because he suspected something was wrong with his vision. There were no interpreters present at the appointment, though, so the results of the exam weren’t clear to him. The doctor referred him to the Department of Services for the Blind (DSB). When he arrived at DSB for his first appointment, he thought he would be fitted for glasses. Instead, he had his first experience being thrust into the social role of a blind person.
[A woman who worked there, named Lisa] came out and met me, and pulled me by my forearm into her office. I thought, ‘What is this lady doing?’ But she just went right on, smiling, and pulling me by the arm into her office. Finally, we sat down. She pulled out a Braille book and some math cards. I had no idea what was going on. I couldn’t imagine why she was pulling out all of this stuff for blind people. I wrote on a piece of paper that she must have misunderstood or something, that I only came to get glasses. I told her I had perfectly good vision. So she wrote back:
You’re going to be blind in 15-20 years.
I couldn’t believe it. I was in shock. I felt terrible. “Blind!” I thought. I told her I had to go to work, and she asked if I would be coming back in two weeks. I told her I would--you know--whatever she wanted to hear. I didn’t understand if in 15 years I would wake up one day and suddenly be blind, or if I would be slowly going blind or what. I had very little actual information. When the time came to meet with Lisa, I didn’t go [ . . . ]. The stigma associated with blindness was so great that I assumed there was nothing but an empty existence for blind people. I was terrified of that [ . . . ]. This was in the ‘70s, and it was different then. [ . . . ] So the years went by, and I wasn’t sure what to do about it.
Later, Daniel met a blind Deaf person who had Ushers. That person told him about the American Association of the DeafBlind (AADB) and also explained crucial facts about what he could expect in terms of his vision--for example, that it would slowly deteriorate from the periphery in. Only after meeting several people with Ushers at AADB, all of whom told him the same thing, did he accept that this was what would happen to his vision over time. In 1984, Daniel attended AADB again, this time in Seattle. He liked what he found so much that he decided to move there.
I liked the people here in Seattle a lot. There seemed to be no stigma at all associated with being blind here. People were willing to help out when needed. I was really impressed. In [the state I had come from], if they found out you were blind that was the last you would see of them. It was really hard to find anyone willing to be your friend, let alone people to help you. In Seattle, not only were people willing to help, everyone saw each other as equals. I felt like I would have a better life in Seattle. [ . . . ] So that is how I came to be a member of the DeafBlind community, and how I came to identify as DeafBlind.
Daniel was not the only one. According to a record compiled by a former director of the DeafBlind Service Center, 48 DeafBlind people moved to Seattle between 1984 and 1987. In interviews I conducted with several of these people, they told stories similar to Daniel’s. After attending the 1984 AADB meeting, they were so taken with Seattle--the people, the energy, the possibility of once again being part of a community, job opportunities at the Lighthouse for the Blind, and so on--that they decided to move there.
3.3.3 Fear of Going Blind
In the early ‘80s, there was great resistance on the part of many DeafBlind people to tactile modes of communication, since these were associated with blindness, and blindness was feared. Communicating with other DeafBlind people sometimes required tactile communication, so this was avoided. Joey, a Deaf communication specialist working at the Lighthouse in the early 1980s, recalled:
Some DeafBlind people were very resistant to the idea that they were blind. They were always saying that they were only “a little bit blind,” and they insisted that they were Deaf. They wanted to keep communicating the way they did when they were sighted, which was fine, but as soon as they were put in a position to communicate directly with another DeafBlind person, they didn’t want anything to do with it. They just really had a lot of resistance to changing the way they communicated.
This is consistent with what many DeafBlind people told me about their experiences. In the pre-ADA era, people were often informed of their inevitable blindness in a crude way, which was followed by a lack of information about their condition. These experiences led some to develop strong aversions to everything they associated with blindness, including tactile communication. They came from Deaf sighted environments where visuality was highly valued and blindness was highly stigmatized. Kathryn explains that DeafBlind people in her Deaf school were picked on and, in her case, even beaten up.
When I was a senior at the Deaf school I was on the volleyball team. I was a star player. I was chosen by the school to join the team. I was very involved, and things were going along OK. Then one game, we were playing against another Deaf school, and it was a really close game. We were neck and neck--they would gain the lead, then we would come back, and toward the end of the game, it was a tie. The ball came over the net, and somehow, my mind couldn’t understand what I was seeing and it went right over my head. Their team won. So I was disappointed, but I had to accept that we had lost. Then, once we were off the court, a player from our team came up to me and said she didn’t like to lose, and then she beat me up. She did it because I couldn’t see the ball, and so I contributed to our team losing. That was a terrible day that I will never forget.
Events like this continued happening until Kathryn’s parents decided she should see an eye doctor. She describes, like Daniel, the crude way in which she was informed of her impending blindness by the doctor, and the effect it had on her:
I went in for all day testing. I didn’t like it at all. No interpreter was provided. The ADA hadn’t been established yet at that time, in 1977. [ ...] There was no law that said you had to provide an interpreter.
So I spent the whole time tapping people on the shoulder and asking them, “What did you say? What did you say?” My parents and the doctors were all standing there discussing the situation. My parents said they would tell me later. I had very limited knowledge about Usher Syndrome. The doctor said, “You. One day you will be blind.” I was shocked. I didn’t understand why he thought I would become blind when I was older. I thought to myself, “I can’t accept blindness.” I had already grown up sighted for 19 years, experiencing the world that way. So when I found out I had Ushers, I just couldn’t accept it. And the way the doctor told me in no uncertain terms, “You will be blind one day.” [ . . . ] If only that doctor had described these things to me properly. If only he had had a good attitude, brought in an interpreter, and explained in a reasonable way that I should go to Braille school. Maybe I could have accepted it if that had been how I found out. But that doctor had a really bad attitude. He was cocky and he thought he knew everything. That hurt me a lot. It changed my life. Before I met with that doctor, I was talkative, social, but after that, I became very reserved.
The shock of finding out that she would be blind was compounded by the fact that Kathryn had already overcome other major obstacles to make her way into the visual world of Deaf, sighted people. Kathryn had no Deaf siblings and was subjected to years of oral “education” where ASL and even gesture were not allowed.
If a child gestured, they would be punished. The teacher would smack their hand. You really weren’t allowed to use your hands for any kind of communication. I rebelled in that environment, because I really couldn’t understand speech. I can’t hear at all [ . . . ]. Later, my family moved out of that neighborhood, north, and I was transferred to a different school. Unfortunately, it was the same situation. ASL and gesture were both forbidden. The only improvement was that they policed the use of gesture a little less, and they didn’t really hit our hands if we did try to gesture to one another. Nevertheless, it was an oral program run by people who believed strongly in teaching deaf children to speak.
Eventually, Kathryn met a girl who attended the residential school for the Deaf and she decided to visit her there. Shortly thereafter she transferred into the Deaf school and found her life greatly improved.
At that school you could be involved in drama, in sports, in all sorts of activities. There didn’t seem to be any limitation. With hearing students, what you could do was very limited. There were no ways to provide those kinds of opportunities because of the communication barriers. Hearing students didn’t understand me, and I didn’t understand them.
Kathryn had finally found a social setting where she could communicate and therefore participate, only to find out that she would become blind. Like Daniel, she couldn’t imagine what being DeafBlind would be like.
[The idea of blindness] scared me to death. I thought, ‘I’ll be blind and deaf. That means I won’t be able to see or hear’. I thought that meant I would be utterly helpless, unable to function. I had no idea how a person could live like that. There were no services, no support [ . . . ]. I wondered what my life would be like in 20-30 years. I didn’t think about technology. I didn’t think about computers. They came later. I couldn’t imagine at all how DeafBlind people could communicate. I just asked myself how?? I had so many questions, and no answers. It felt like no one was helping me.
Kathryn went on to attend Gallaudet University, where she occasionally encountered DeafBlind people. By that time, so much stigma and fear had been bound up with the idea of blindness that she saw DeafBlind people as a threat.
One day I saw some fully blind DeafBlind people communicating tactually, and I was taken aback. I felt like if I touched someone like that, I would suddenly lose all of my vision. I didn’t want that, so how was I supposed to communicate with them? So I avoided DeafBlind people.
It wasn’t until she was living in Seattle that collective norms required her to face her fear of tactile communication. Nevertheless, there was an important line that she still would not cross. Although she learned to communicate with people who used tactile reception, going tactile herself remained unimaginable.
I had to accept touch. I had to learn how to interact with and communicate with tactile people. But it was all one-way. They would use tactile reception, but I wouldn’t. I hadn’t practiced, so I didn’t know how. Really that doctor [ . . . ] ruined it for me. That experience was so traumatic, that even after 33 years, it’s still hard to get over it.
Kathryn summed up her fear of going tactile as a symptom of her “denial.” She found the thought of going blind so terrifying that she never accepted the fact that it was happening. Moving to Seattle was a sort of compromise. On the one hand, the supports that were in place in Seattle forced her to accept a DeafBlind “identity”; receiving services required this. On the other hand, these supports allowed her to continue compensating for vision loss, thereby maintaining a fundamentally visual orientation to the world, as opposed to transitioning to a more tactile way of life.
After I moved here, I wouldn’t say I made wonderful progress. You really have to understand yourself. I knew I needed to know who I really was as a DeafBlind person. I had to accept that. So between then and now, I’ve been doing better, but there are still some things that I haven’t faced. For instance, I should be using a cane all the time, every day, but I don’t. When I look outside, and notice that it is a bright day, I think, ‘I don’t need a cane! I’ll be fine!’ Tactile reception is another example. I don’t need tactile reception. I can still see what people are saying when they sign through my tunnel of vision. So that’s what I mean by ‘denial.’ Really denial means that I haven’t gone for it, and learned tactile reception. I feel that I don’t need it. Therefore, I’m in denial. I mean, I understand the concept of tactile reception, but I don’t practice, and I’m not skilled at it.
This combination of claiming one’s need for tactile communication while simultaneously recognizing one’s denial about that need is a common theme. For many people in the earlier groups, this discourse makes perfect sense. Going blind is terrifying, and there really isn’t any way to change that. When the time comes, at best you can “go for it” and at worst, you can “give up,” but there is nothing appealing about going tactile, whether you are in Seattle or not.
3.4 Visual American Sign Language Is Established as the Primary Language of the Community
Diversity in language and communication backgrounds coupled with the effects of stigmas around tactility led to a complicated sociolinguistic situation at the Lighthouse in the early stages of the DeafBlind program. Even before the post-AADB influx in the mid-1980s, there was already an effort to improve communication between DeafBlind employees. However, as the numbers grew, the problems became urgent. For members of the Deaf community, and those who studied their history and their language, these problems were familiar. DeafBlind people who ended up at the Lighthouse had, after all, grown up as deaf children. Deaf education systems (and lack thereof) have produced a wide variety of communication styles and capacities in the broader American Deaf community as various fads and trends have come and gone.
Some Deaf people have Deaf parents, but most have hearing parents. Of those who have hearing parents, some parents learn ASL, some learn cued speech, some develop “home sign” systems, some learn Signed Exact English (an invented code which haphazardly attempts to represent the morphology of English visually). Finally, some Deaf people have been educated orally, which often amounts to a denial of access to visual language and natural social environments, as it did in Kathryn’s case. Given no access to visual language, most fail to develop a native command of either ASL or English. While they are still able to communicate, opportunities for higher education are often very limited.
As an effect of this history, most Deaf people who are members of an established Deaf community in the United States will be familiar with a wide range of types of d/Deaf people. Some members of the Deaf community, due to their particular biography, their skills, and/or their training, act as translators within the community. For example, a person who grew up with parents who had acquired ASL late in life might develop skills for mediating between their parents and the more fluent Deaf users of ASL in their community. In recent years, this role of the Deaf Interpreter (DI) has become professionalized. Today, DIs often act as a second relay in official situations where accurate communication is both very difficult and very important. For example, if a deaf person who doesn’t have a standard language is arrested for a serious crime, the court proceedings need to be clear to that person. A standard hearing interpreter is trained to interpret between two languages--ASL and English--not between English and gestural communications that are shared by a very small group of users (such as the person’s family or the person and their sibling). In a case like this, a DI would be hired to mediate between the hearing interpreter and the deaf person on trial. Although the role of the Deaf interpreter is not new within Deaf communities, the professionalization and recognition of its importance in official settings is.
In the 1980s, when DeafBlind people started moving to Seattle to take positions at the Lighthouse for the Blind, this process of professionalization was just beginning. As the sociolinguistic situation grew more complex, it became clear that a “communication specialist” would be needed. Joey was one of the first people to be hired in this capacity. In an interview, he described his own communication background and explained how it qualified him for the job.
It was the height of oralism in the ‘70s, and signing was banned in most schools. I have a Deaf brother. I’m the youngest, and he is the third, of five. Also, our oldest brother is deaf, but not culturally. He’s kind of ... hard of hearing, I guess you could say. But not really. The other two kids in the family are my sisters. The younger of the two signs now, but in our family growing up, no one signed. My brother and I sort of “talked” to each other, doing the oral thing, but we really communicated using our own home-made signs. But at the school I went to, there were always Deaf students who signed. Maybe they were kicked out of the Deaf school, or their families were Deaf. Or their families moved from other places--there were military kids in the school, because [the school] is near an air force base, so there were a lot of kids from families who signed. In the classroom, everyone sat on their hands and acted like good oral kids, but as soon as we were out of the classroom, we couldn’t get enough of signing--that was where the real social stuff happened, and where we all learned ASL. For me, it started out as a kind of combination between the home signs me and my brother developed and then the exposure that I got from kids on the playground. Being deaf, I had a natural inclination for learning ASL, so it happened fast.
These experiences, in addition to his general curiosity about and openness to communicating with a wide variety of people, led him to cultivate the skill of mediation. In 1980 he was hired as a communication specialist at the Lighthouse for the Blind in Seattle. According to his memory, there were about 10 DeafBlind people working there at the time. I asked him why he was chosen for the position and he said:
I was qualified for that job because of my skills with language. I can communicate with a wide range of people with a wide range of communication backgrounds. I can do everything from real big ASL to snobby small signing, to Pidgin Signed English, to Signed Exact English. I can do it all. I have a lot of experience with communication, and I have a certain ability with it, too.
Although Joey didn’t have any experience working specifically with DeafBlind people, what he found when he started working at the Lighthouse was familiar to him. The expertise he brought with him from the Deaf community seemed perfectly applicable. In the Deaf community, as Joey noted in our interview, the solution to communication problems is, very simply, American Sign Language. With DeafBlind people, he said, there was an additional issue of “communication technique.” Nevertheless, Joey and the others he worked with figured that ASL as a common language would be a first step in the right direction, so Joey started teaching ASL classes to DeafBlind employees at the Lighthouse.
[They] had really mixed backgrounds. Some of them had limited exposure to language in general, or they used a different sign system. It was just like deaf people who were not blind. Many of them came from hearing families, so they had really weak foundations in their language development. So when they met me, and I could communicate with them clearly, they wanted to learn how I did that. I did that by using ASL, so that’s how teaching ASL classes came about. It wasn’t “about” ASL. It was about improving communication skills. The means was ASL. That’s how we put it. I remember one hot issue at the time, and maybe it still is now: direct communication between two DeafBlind people. Oftentimes, if DeafBlind people communicated directly with one another there would be all kinds of misunderstandings that would lead to accusations and fighting. So as a communication specialist, I would often intervene in situations like this. I would ask each person, one at a time, what happened, and then I would explain to them what had gone wrong.
When I asked him why ASL didn’t solve the problem the way it would have in a Deaf, sighted environment, he explained:
Communication really was limited at that time. There were a lot of misunderstandings when DeafBlind people communicated directly. Now, I think that’s still the case, but back then it was even more the case. It wasn’t only that some people used ASL and some people used Pidgin Signed English and so forth--sometimes that was the problem, but also people had different degrees of blindness. Some people used tactile reception, and some didn’t. So they were incompatible in that way too. It was hard to find a common language and mode of communication that two DeafBlind people could use. So just like with hearing people, when they start to get involved with the community, you have to explain the different kinds of vision loss that people have and how it affects communication: Ushers, tunnel vision, people who need to stand far apart from each other, people who need tactile manual communication, the people who have unclear vision, so you have to sign up close with them . . . DeafBlind people had to learn that stuff, too. When the conversation would start to be frustrating for them, you would have to intervene and explain--“that person can’t see you.” They have to use tactile reception, so you have to sign tactually to them. Or maybe one person doesn’t really have much exposure to English and the other one is throwing big English words at them, and they start calling each other names. So there was language background and then there was also communication technique.
Deaf people like Joey and students of ASL and interpreting were the ones in an institutional position to affect communication conventions. Given their knowledge of Deaf history, Visual American Sign Language was offered as a solution to communication barriers. As I will discuss in Chapter 4, the pro-tactile movement is scaffolded in many ways on Deaf understandings of community, power relations, and the relationship of both to language and communication. It is unlikely that the pro-tactile movement would have emerged in a DeafBlind community where spoken English was the primary language; this development was therefore an important first step toward a pro-tactile future.
However, when Visual ASL was introduced among DeafBlind people, there were problems that did not arise among Deaf sighted people. As Joey explained, people have differential perceptual access to the sign vehicle and to one another. DeafBlind people came to Seattle already frustrated by communication barriers, so when they encountered other DeafBlind people who were even more difficult to communicate with than sighted people, this was a level of frustration most could not endure. In the beginning, there were too few DeafBlind people to break off into smaller groups with similar language backgrounds. Therefore, sighted people had to intervene. However, mediated communication was limited, in the beginning, to one-on-one configurations, and DeafBlind people had no way of meeting in groups at all. Therefore, one of the next goals was to find a way of making group communication feasible.
3.4.1 Toward Group Communication
Prior to the large influx of DeafBlind employees at the Lighthouse, there was less of a problem, simply because there was less interaction between DeafBlind people. An annual picnic, hosted by a DeafBlind employee, was one of the only social events that was recalled in interviews. As time went on, DeafBlind people started organizing social gatherings more often. Several people I interviewed remembered a Halloween party, held in the apartments owned by the Lighthouse. A small group attended, including DeafBlind people, Deaf people, and sighted people. Visual ASL was the common language, and people “did what came naturally” to communicate. There were no official interpreters working, and at least some of the people present thought about guiding and relaying information as part of “hosting.” One person explained, “If someone looked lost, someone else would help them find what or who they were looking for.” Since blindness was so stigmatized elsewhere, a willingness to do simple things like this was unusual. It was also very different from what some of the DeafBlind people had anticipated for their futures, for example, those who had been told that they would have to switch to fingerspelling when they went blind. So, as one sighted participant explained, “There was a lot of excitement. What had been impossible was suddenly possible, and everyone was really excited about it.” In these early gatherings, people communicated one-on-one, adjusting to one another as needed.
Around this time, a class was held as part of a research project being done by a graduate student in psychology. This provided an opportunity to experiment with interpreting strategies for group communication. Although it was awkward and difficult, group communication was popular and people were optimistic that strategies could be improved. Over the next several decades, interpreting practices in Seattle became increasingly sophisticated, streamlined, and effective. These practices made social and political organization possible via meetings of DeafBlind advocacy organizations, like Washington State DeafBlind Citizens (WSDBC), “task force” meetings, which were organized periodically to address economic, social, and political problems, and a bi-weekly meeting that has become a mainstay of the DeafBlind community, known as “DeafBlind class.” DeafBlind class is, to this day, a highly valued venue for DeafBlind people to come together and exchange news, learn about legal, medical, and social developments in society that affect them, and socialize. It is also an important opportunity for interpreting students to improve their skills and to be mentored by more advanced interpreters. By the time I came into the community as an interpreting student in the 1990s, they had mediated communication down to a science. I was part of a small army of volunteers who would go to Seattle Central every two weeks to interpret at DeafBlind class. It often took me the first part of class to figure out where I fit into the overall network of relays (they are exceedingly complex), and yet it all seemed to work and was surprisingly efficient. In the early days of group communication, this was not the case. An interpreter who was new at the time told me that
one of the most memorable problems was turn-taking--DeafBlind people didn’t understand how to do it, and interpreters too. Interpreters were there for short periods of time [as students], then they moved away, or whatever, so people would learn, but then there were new people who didn’t know yet, and there were so many confusions. Someone would say something, and the person would be confused about why THAT person (the interpreter) would be saying that thing. And the interpreter would try to explain--“It’s not ME. It’s [Robert] saying that. I’m just interpreting what he’s saying,” and it was really a challenge.
This was a common problem. People would mistake the interpreter for the signer, and communication would go in circles:
Ronald stood up in front of everyone, and signed READY? to his interpreter, and [Rose] voiced it. Then his interpreter signed what Rose said back to Ronald, instead of YES, and it just went on like that in a potentially endless loop. Until finally Rose said, “DO NOT SIGN READY! SIGN YES!” [Laughs]. We could still be there if Rose hadn’t said something.
As was discussed in the previous section, in order for DeafBlind people to communicate with one another at all, and especially in groups, sighted interpreters were necessary. However, the use of sighted interpreters prevented a tactile field of engagement from emerging. Instead, a visual field of engagement was maintained, as were the structures of Visual ASL. Expertise regarding communication accrued to sighted social and professional roles, and this distribution of expertise was reinforced by the institutional structure of the Lighthouse and other organizations serving blind people. While these asymmetries were established, mediated group communication was essential, and it led to political recognition of the Seattle DeafBlind community and the establishment of the DeafBlind Service Center.
3.5 Political Organization in the 1980s and the Inception of DBSC
By the mid-1980s, Seattle had drawn national attention as a place where something hopeful was happening for DeafBlind people. Jobs and communication resources were almost impossible to find elsewhere. Each year, there was a new influx of DeafBlind people who had come to work at the Lighthouse, and the community grew rapidly. Over time, communication appeared as only one of many problems. The Lighthouse worked with other organizations to provide services to DeafBlind people, but the coordination and provision of services was extremely complicated and therefore, largely inaccessible. Leah, the manager of the DeafBlind program at the time, said that when DeafBlind people actually did figure out where to go for services, something was almost always lacking. Either the organization in question knew how to address vision loss, but didn’t understand about ASL and interpreters, or it was the other way around. In order to address these problems, a task force was established, which included representatives from several of these agencies, including the Department of Services for the Blind (DSB), the Department of Vocational Rehabilitation (DVR), the Helen Keller National Center (HKNC), and the Division of Developmental Disabilities (DDD). The director of DSB at the time, whom I will call Al, suggested that some research needed to be done about the gaps in services. Leah explained:
[O]ne of the key things we did was put together a matrix. It was done by hand, because it was before computers(9). It was a grid sheet--we had services and organizations--one on each axis, and put an X where there were services, and no X where there were no services. That became a tool for us to make our case.
Some services were not only inaccessible, they were nonexistent. The problems that DeafBlind people faced, and the services needed to address them, were often not a product of adding Deaf issues to blind issues. They were unique. One of these things was the use of visual interpreters for running errands such as grocery shopping. DVR paid for these services for a while, since, according to Leah, “You need to buy groceries to eat, so you can go to work, but,” she said, “that was kind of a stretch.” So the term “Support Service Provider” (SSP) was introduced(10) to describe this specialized service that could not be provided elsewhere.
SSP services were beyond the scope of what any of the existing organizations could take on, including the Lighthouse. A separate organization, with separate funding, was needed. The heads of the state agencies all recognized the problem. Al, the director of DSB at the time, said in an interview, “By the time I saw the needs assessment, [Seattle] was a place of choice for DeafBlind people. Large numbers, proportionately, so it created a real challenge for Metro, DDD, VR, DSB. We had a real problem.” The solution that was agreed on was to establish a separate non-profit organization that would provide the services other agencies couldn’t. This organization would become the DeafBlind Service Center (DBSC). Early on, Al said, the idea was to create an “embassy” for the state agencies.
This metaphor hasn’t stuck, but that was how I characterized it at the time. Think about immigrant communities. It was like that. We had a community within our state that was a linguistic and cultural minority, and there were real barriers to finding them, to communicating with them, and to serving them. For that, we (the state agencies) needed an embassy. That way we could sort out the confusion of where people should go for the services they needed. That way we would be able to more effectively serve them, and make it less confusing for us, while also making it easier for them. So that was the pitch. If it’s just for them, that’s not the most convincing argument. It needs to also benefit the system--it needs to help us do our job, too. I don’t remember any big difficulty or battles about that. Also, it was a way to show we were responding to that list of needs that [Leah] had presented us with.
In addition to referring DeafBlind people to the right place within state agencies, DBSC was supposed to provide any services that were not offered elsewhere. Support service providers and accessible advocacy services were two of the glaring needs at the time (and remain so today). The task force participants all agreed that something like DBSC was needed, so they arranged for ancillary services to be provided through a “joint operating agreement.” However, something more permanent still needed to be established, which, as Al said, “was not subject to the whim of whoever happened to be directing the three agencies(11).”
According to Leah, everyone thought it was a great idea, but no one was jumping out of their seat to pay for it. So the aim at this point was to convince the governor’s office to back a bill that would secure funds for DBSC. This was one of the earliest organized political efforts in the DeafBlind community. In order to achieve this goal, DeafBlind people had to make political representatives aware of their need for SSPs, and for that, the representatives would first need to become aware of the growing DeafBlind community.
Toward this end, groups of DeafBlind people and sighted advocates and interpreters started making regular trips to talk to individual senators and representatives at the Capitol. One sighted interpreter had a VW bus that everyone would pile into and ride down to the Capitol for the day. They planned their appearances strategically, showing up, for example, during the lunch hour on days when important meetings were happening. There were sleepovers the night before, where people would practice their speeches repeatedly, until they were concise and flawless. Real relationships were growing and sighted volunteers, according to both DeafBlind and sighted people, were abundant(12).
These efforts resulted in the legislature forcing the relevant state agencies to put a proviso in the budget, which meant that funds would be secured regardless of who happened to be the director of the obligated agency. There are several stories that people have told me about the specific moment when DeafBlind people achieved political recognition at the Capitol. One is about Dan Mansfield, who was one of the first DeafBlind leaders in Seattle. He was one of three DeafBlind siblings, all of whom had Ushers. Dan grew up at the residential school for the Deaf. Although I have never met him, by the time I came into the community in the 1990s, Dan had become a legend. He was known for his charm, his good looks, and his political competence. Many people credit him with the moment of political recognition. The following version was relayed by Adrijana, a current DeafBlind leader in Seattle:
[H]e went to the Capitol, and you know he was charming. He walked up to the congressional committee, who were all seated at their raised table, and he told the interpreter he brought not to say anything. He stood in front of them, and pulled out a stack of cards. On each card was one letter. He proceeded to show them one letter at a time: I, A, M, etc. And then he slipped, and all the cards fell on the floor. Everyone scurried around trying to pick them up. It was embarrassing and uncomfortable for everyone, not to mention a frustrating communication experience. He got up and tried, with many mistakes, to spell something (the cards were now out of order). Then he calmly told the interpreter to start interpreting for him. All he said was, “We need interpreters.” And we have had funding for interpreters ever since...
During this time, DeafBlind people were a persistent presence on the Capitol campus, and it is likely that many moments like this had a cumulative effect. For example, Al told me a similar story about a moment of political recognition:
Jim McDermott was chairman of the Ways and Means Committee. It was really hard to get a meeting with him, and I remember the DeafBlind folks were down that day. We had come to his building--his office was in a suite. There was a waiting room and a conference room. And he had an office in the back on the ground floor of this building. Dan Mansfield and 4 or 5 people were standing ... in the hallway outside of his door. He was leaving his office, about to go out to the Capitol. He was so hard to meet with that typically people would ambush him--“Senator, can I walk with you?” Every once in a while, he would see someone he wanted to talk to, and he would walk with them, but most of the time, [he would bolt]. So he stepped out and he glanced down the hall, and he saw several DeafBlind people talking to each other and the interpreters. And he stopped and stared for about a minute watching their communication. I observed this, and I thought, ‘Holy hell. He never exposes himself to everyone like that.’ And I thought, ‘They got him. He is seeing what the challenge of communication is--in one respect anyway--and they’ve got his attention.’
According to Al, this kind of fascination played a role in the success of the activists. In a representative democracy, no one should care about this tiny group of people and what they are asking for, at least in theory. But Al said that for this senator, and for others, there was a “lost tribe” aspect to it. He said:
Here’s this thing that you didn’t know exists, and it exists. And DeafBlind people were saying they wanted to come into the fold. They weren’t trying to impress upon us their particularity or their specialness. They just wanted what everyone else wanted.
Seattle became even more appealing to DeafBlind people elsewhere once DBSC had been established. Their work and personal lives could be separated to a greater extent, and they had a standard number of hours with a visual interpreter or “SSP” each month that they could count on. In addition, they had somewhere to go to sort out services in the larger system of state agencies. The community continued to attract new members.
A “DeafBlind identity” emerged during this time as something distinct from a Deaf or hearing identity. Many DeafBlind people told me during interviews that they had struggled for many years to accept it, but eventually came around to accepting themselves as “DeafBlind” after moving to Seattle. However, many of these same people were still using visual reception, despite very limited vision. They were still going to great lengths to avoid tactile communication. To them, being DeafBlind did not mean cultivating tactile sensibilities, using tactile communication, or becoming a tactile person. Stigmas around tactility among the earlier groups of DeafBlind people remained powerful.
3.5.1 New DeafBlind Perspectives in the 1990s and 2000s
For some DeafBlind people who moved to Seattle later, in the 1990s and 2000s, the negativity associated with tactility was surprising. Aversion to tactility seemed to come from attitudes and norms in Seattle at least as much as it did from their prior experiences outside of Seattle. For example, when Lee moved to Seattle in 2001, she noted that going tactile was very clearly
something negative that people gave in to. Something that would draw sympathy and looks of consoling understanding. Not something people went into with positive aspirations or enthusiasm.
In many of the interviews I conducted, narratives about going tactile were as Lee describes. For example, Susan said that one day she was at a staff meeting at the Lighthouse and she was watching an interpreter visually, as she usually did. At some point, someone said, “Susan? Are you going to answer?” And she realized that she had been missing what the person was saying. Before that, she thought she had been catching everything. Hoping to clarify, someone tried to communicate tactually with her, and she pulled away, asking what the person was doing. By this time, she was certain everyone was watching, and she was deeply embarrassed. Tactile communication wasn’t helpful for her, because she hadn’t developed the skill. Eventually, she did learn how to receive Visual ASL signs tactually, but this only led to more difficult encounters. She explained that often, DeafBlind people would say, “Susan? That’s you? Communicating tactually with me? Your eyes have gotten worse!” which was really upsetting. Susan said that going tactile was a necessary change, but overall, it was depressing. She said once she went tactile, she couldn’t participate in groups the same way.
For example, at the Lighthouse, there are two separate lunch groups. If you are still using tunnel vision to communicate, you can eat with the other tunnel vision people. Once you go tactile, though, you have to either switch to the tactile group, or be left out of conversations. Susan’s friends were all still in the tunnel vision group, but that was no longer a feasible communication situation for her, so she saw less and less of them. She also described a process of increasing dependence on interpreters, where the quality of her day, or a meeting she attended, or her level of interest in a person she was communicating with always depended to some extent on whether her interpreter was tired, whether they knew her preferences or not, and so on. She said, all in all, going tactile had been a negative experience for her. But, at a certain point, it became necessary, and she had to do it. This kind of story, about giving in and going tactile despite the many negative consequences associated with it, was a common theme among the DeafBlind people I interviewed.
3.5.2 Mainstreaming, Inclusion, and Mediation
For Lee, Adrijana, and others who moved to Seattle in the late 1990s and early 2000s, these stories were alarming. They were not attributable to vision loss, but rather, to the aversion so many DeafBlind people had to tactility. After hearing so much about the DeafBlind community in Seattle, the negativity toward tactility that they encountered upon arrival was both surprising and disappointing. The Deaf world that they came from was very different from the one Daniel and Kathryn came from. By the time they moved to Seattle, Lee and Adrijana had spent years linked in to constant streams of information via the internet, email, text messaging, text relay, video relay, and captioned TV. Seattle had become an established phenomenon, and they knew that it was a viable option long before social isolation would have become a problem. “Deaf culture” was something they took for granted, and it was part of their common sense that ASL was a full-fledged language. If Deaf people had a world of their own organized along visual lines, complete with everything any human could want, why couldn’t the same be true for DeafBlind people?
For example, Adrijana describes her impressions in the late ‘90s, just after moving to Seattle. She said that she and others who moved there around the same time wanted to get away from being so dependent on interpreters.
I started feeling that way not long after moving here in 1997. I had a lot more vision at that time, but it didn’t matter. I didn’t like the environment. For example, at Seabeck. There was no one to talk to! Everyone was busy chatting with their SSPs. I started to feel like, ‘Who am I? Why did I even move here to Seattle? I’m from a Deaf world where communication is direct and unmediated. Now everything seems wrong.’ Like I took a step backwards into a hearing environment. Later, though, new people were moving here who were more my age [ ...] and Seabeck started to change a little. People in our group, with our communication system, in our world--we started communicating with one another, rather than always going through an SSP.
Adrijana, like Kathryn, had spent many years in hearing environments where she had limited opportunities to engage with her peers and otherwise participate in collective life. It wasn’t until she went to college at the Rochester Institute of Technology and the National Technical Institute for the Deaf that she could fully participate. Not long after, though, her vision got worse, and she no longer found Deaf environments welcoming. She was having difficulty with her job working as a biologist in a lab. She started looking for jobs that did not require vision, and found one at the Seattle DeafBlind Service Center. She expected to find a place where she could communicate tactually with other DeafBlind people in the unrestricted, unmediated way that had previously characterized Deaf environments for her.
Instead, she found that communication was perpetually mediated by sighted people. In this sense, it was like being a Deaf person in a hearing environment, participating through the use of an interpreter. Adrijana had had enough of that. She wanted a place where interaction felt natural and unmediated. She didn’t think there was anything inherent about being DeafBlind that would prevent that, but in Seattle there was too much resistance to tactility to make it a reality. She found that people would rather use an interpreter and go on using visual communication practices than go tactile and have unmediated exchanges. In some ways, this appeared to Adrijana like a deaf oralist stance--deaf people who would rather appear to be speaking and hearing (meanwhile working hard to compensate for what they miss) than have a genuine, easy interaction in a visual language.
3.5.3 The Crystallization of Anti-Tactile Forces
By the early 2000s, anti-tactile forces had become reified in the organization of the social field. One of the most obvious manifestations of this was a hard separation between sighted and blind social roles. In order to occupy a sighted role, you had to be able to communicate (or appear to be communicating) in a visual modality. If you were no longer able to do this, you were forced to occupy a blind social role. Therefore, DeafBlind people sharpened their skills of inference and performance, trying to appear sighted for as long as possible. Going tactile meant going blind and going blind meant extreme marginalization, even and especially in the community that was once a refuge and source of hope.
Susan, for example, could no longer convincingly occupy sighted social roles, and was therefore alienated from her friends, was more dependent on interpreters, and was less able to access stable and reliable sources of information. She was more isolated and experienced a significant decrease in the quality of her life. In these ways, the occupation of sighted social roles was restricted to those who could pass for sighted, given the necessary accommodations. When no amount of accommodation would suffice, there was no choice but to become blind.
On the other hand, the occupation of blind social roles was also restricted. Lee moved to Seattle in 2001, thinking that she would go tactile upon arrival as a first step in a series of changes that would lead her into a more tactile way of orienting to the world. However, because she still had quite a bit of vision, she encountered a lot of resistance from other DeafBlind people. From one perspective, individuals were resistant to going tactile because of their fear of going blind, which was a response to historical and personal circumstances outside and prior to the Seattle DeafBlind community. However, within the community, these dynamics took on a life of their own, generating increasingly rigid boundaries. One could not just declare that they were DeafBlind and be considered DeafBlind. There were practices through which this position had to be taken up--some related to language and communication and some not. Lee explained:
I moved here and immediately started calling myself DeafBlind, but people said I couldn’t do that because first, I was still driving. Second, I didn’t use tactile reception, and third, I didn’t use a cane. It was firmly established that until my status changed regarding these three things, I had to wait.
Lee is gay. She thought of these things as being like “coming out” and saw no reason to put them off. The faster you come out, the faster you are integrated into a world that will support you, rather than remaining in a world that seeks to limit and exclude you. It was the same thing for her from a Deaf perspective--being a part of the Deaf community means embracing a visual way of life, which includes using and valuing Visual ASL and visual communication practices. The sooner you stop trying to approximate hearing ways of doing things, the sooner you find a way of being with others that feels natural and easy. When DeafBlind people stated the requirements for establishing a DeafBlind identity, Lee understood them in these terms. She took their claims seriously, learned to use a cane, learned to use tactile reception, and stopped driving. But to her surprise, she caught a lot of flak every step of the way.
One DeafBlind person really picked on me early on, right after I moved here, saying I was over-eager “like a puppy” and so on--taking any opportunity to insult me. I went ahead in any case--first with the cane. That same person was really dismissive of my decision to start using a cane. Second, I quit driving, and people sort of patronizingly congratulated me on “finally” quitting. Third, I started using tactile reception. People were really discouraging about that one, like, ‘Why are you going to do that? You should wait. I haven’t gone tactile yet.’
Lee went ahead, though, because like Adrijana, she saw how people who didn’t go tactile missed more and more of what was going on around them, and saw that it was more and more difficult for them to learn to communicate tactually. Her decision seemed like the right one on many occasions. She said she often ended up interpreting for people who were still “tunnel vision” people because via tactile communication, she could follow what was going on and they couldn’t. Lee said that tunnel vision people relied more and more on idiosyncratic rules and became very demanding of the people around them. She explained that on one occasion, a tunnel vision person she was with was complaining that people weren’t following all of the many ridiculous rules that you have to follow to make visual communication with her possible. She put it in terms of “respect.” She said people weren’t respecting her. They shouldn’t walk quickly by--it’s confusing. They should stand at the right distance, they should sign slowly ... As Lee put it, “It’s not reasonable to expect people to do that, and they don’t. So the result is that she’s left out, and is getting more and more frustrated as time goes by. I knew that by going tactile early, I would never have that problem.”
Lee experienced resistance to going tactile primarily in her interactions with other DeafBlind people, but she saw their perspectives as being shaped both by history and by the current configuration of social roles in the community, which included sighted people. She said the middle group came into the community as “hip, cool 30- and 40-somethings.” In contrast to the people who were already older when they moved to Seattle, they, as a group, had more education (most had attended college if not graduated), they had more leadership experience, they had been part of Deaf organizations like Deaf fraternities and sororities, and they were used to “being in the public eye.” The older group, she said,
was more used to a world made up of Deaf people. They almost exclusively went to residential schools for the Deaf. They were not college educated. They had worked in manufacturing or other working class jobs for many years, and when they moved to Seattle and got jobs at the Lighthouse, they went on doing the kind of work they had been doing all along. And it was a large group, so they supported one another a lot. [ . . . ] The younger group is more used to a mainstream kind of experience. Not just in school, but in life. They’ve already had the experience of working in a hearing company before. They’ve had romantic relationships with hearing people, they have hearing friends, they live in a hearing area, they participate in hearing events and the hearing world in general. They still value Deaf and DeafBlind people, but they have a range of experience. So the two groups are really different. The younger group is more concerned with current mainstream trends, so they’re more likely to resist tactile communication practices, or the use of a cane, that would mark them as different from the mainstream [ . . . ]. Maybe if mainstreaming never happened, then we wouldn’t have this problem, and people would embrace tactile signing. I don’t really know, but that’s my guess.
Lee speculated that when the more “mainstreamed” people arrived in the community, they were given the impression that they weren’t the same as the older group, but that
[t]hey were somehow better--had more potential, and they would be leaders. So they had a stake in distinguishing themselves from that older group, and even though they themselves were getting older, they didn’t adjust, because adjusting would have meant becoming the thing they were valued in opposition to. [ . . . ]
The evaluative perspective that gave rise to these hesitations was, according to Lee, primarily a normative, sighted one, but the boundary it created between tactile people and tunnel vision people was adopted and policed by DeafBlind people. It was then reproduced in many domains of social activity. For example, the way interpreters, as a resource, were distributed perpetuated the asymmetry between tactile and tunnel vision people. Samantha, a sighted interpreter who is also an interpreter coordinator, explained:
There’s not a lot of support for people who are going through vision change. And I think because of that power dynamic that’s set up. If I have vision I get to watch Harli Johnson(13). He has amazing language. If I don’t have vision, [...] I’m going to get sometimes a student and sometimes an interpreter who’s OK--unless I say that I really don’t like to work with that person. But how many times can I [as a DeafBlind person] say that before somebody says, ‘Well they’re really hard to work with’. And then what I really want to do is participate and be involved in this community that functions because we have interpreters in this setting [laughs exasperatedly].
There are conventions for organizing meetings like the one Samantha is talking about. A person who is presenting will be on the stage. There is also a platform interpreter who copies the questions and comments coming from the audience and provides some visual information. This person is often one of several Deaf interpreters with years of experience, skill, and appeal. If you are a tunnel vision person (a blind person occupying a sighted social role), you are more likely to work with the platform interpreters. However, if you are a tactile person (a blind person occupying a blind social role), you are more likely to work with someone who is not experienced, and is hearing, and therefore does not have a fluent, let alone native, command of ASL. This is a further incentive for remaining a member of the tunnel vision crowd for as long as possible.
In addition, if a person is part of a group that is using one platform interpreter, this is less expensive (either in terms of volunteer resources or money) than providing two tactile interpreters for every individual. Although sighted people do not actively discourage requests for tactile interpreters, DeafBlind people are careful about asking. They feel the pressure of the interpreter shortages, and until they are really incapable of using visual accommodations, they feel that they should continue trying. When encouraged to start working with tactile interpreters, they reportedly say things like, “I can’t ask for that,” “I don’t want to rock the boat,” or “I don’t know if I want to be tactile.”
In these and other domains of social activity, blind and sighted social roles have become increasingly contrastive and asymmetrical. The former has accrued less authority, potential, and value. Until recently, using VASL meant taking up a sighted social role; therefore, greater legitimacy and worth accrued to VASL and visual communication practices. Distinguishing oneself from the tactile people became more important than the actual communication practices from which the social categories derive. At a more fundamental level, the field reproduced by these position-takings was primarily organized visually. This meant that DeafBlind people were either modifying visual communication practices to access visual fields of engagement or using tactile communication practices to access visual fields of engagement. The further the mode of reception drifted from visual modes of orientation and representation, the further the person drifted from direct access to what was going on. They relied more and more on descriptions of the visual details of ordinary life that interpreters might or might not be able to capture.
There was no tactile field of engagement. There were only tactile forms of compensation that would allow access to visual fields of engagement. Therefore, the bridge linking individual experience to collective experience grew longer and more difficult to cross as one adopted tactile modes of communication. The asymmetry in the social field was self-perpetuating. The benefit of living in Seattle was that there were people there who understood what Usher Syndrome was and who were actively trying to help DeafBlind people go on occupying familiar sighted social positions as long as possible. But eventually, the same problems people came to Seattle with happened all over again--group interaction was avoided, inference capacities were pushed to the breaking point, dark restaurants and bars became uninhabitable. In short, social isolation threatened to encroach again. The new life that seemed so promising upon arrival in Seattle became less and less so with time. It was against this background that the pro-tactile movement emerged.
Chapter 4
The Pro-Tactile Movement
Since the 1990s, communication practices have become conventionalized, social and professional roles have become clearly defined, and bridges between the community and the larger society in which it exists have continued to be established. For the first time in Seattle’s history, a DeafBlind woman was hired as the director of the DeafBlind Service Center. The local transit authority, the airport, the public library, and other organizations have begun to work with agencies that serve DeafBlind people to make the city more accessible. The American Association of the DeafBlind, a national advocacy organization, has made progress toward the incorporation of specialized, DeafBlind interpreting services into the Americans with Disabilities Act. All of this is evidence that “DeafBlind” as a political category has continued to gain crucial recognition at the local and national levels--not as a combination of “Deaf” and “blind,” but as its own political position from which DeafBlind individuals and organizations can make specific and relevant claims for access to resources. Meanwhile, the community has grown larger and more diverse, and significant internal divisions have begun to form. These changes together have opened up more space for critical reflection, and attention has turned inward.
Between 2006 and 2010, DeafBlind people started to express dissatisfaction with what had become the status quo. The problems were numerous. There was a lack of DeafBlind leadership. There was an inexplicable separation between tactile people and tunnel vision people that was keeping the community from cohering as a whole. There was too much dependence on interpreters. Those who could pass as sighted had more access to power, and those who actually were sighted were still, largely, the ones making decisions. These concerns signaled a shift in focus. Political recognition from outside the community, although a precondition for the community’s existence, was no longer enough. DeafBlind people wanted to have more influence in decision-making processes that affected them within their community.
Beneath political struggle there were also problems and desires of a different nature. DeafBlind people started communicating with one another, and in doing so, they discovered shared longings. They wanted a world of their own, dense with particularity and potential. Momentary and sporadic access to the worlds of others would no longer suffice. There was a shared sense that somehow, over the years, particularities had been subsumed by types and examples. Three-dimensional scenes had been replaced by two-dimensional characterizations of scenes, and these scenes grew more and more difficult to inhabit. Co-presence had been replaced by representations of co-presence, causing loneliness and isolation to encroach, no matter how many people were around. It had been years since sensory experiences actually accrued to the shared networks of association that come with a living language. Now the language itself seemed better suited to faded visual recollections than to the world at hand. Soon, it would lose its capacity to refer to anything at all--even memories.
The situation was urgent, and this urgency pushed DeafBlind leaders into brand new territory; no one knew quite how to proceed. The pro-tactile movement began as a kind of exploration, looking for ways to solve the many problems that had been identified, and reinstate categories of experience that had grown inaccessible. Direct communication between DeafBlind people seemed like a good place to start, though the practices through which this aim would be realized were yet to be found. In what follows, I sketch a narrative line through some of the events and themes that defined the social field in which pro-tactile practices would be cultivated. Although there are broader historical frames that must be taken into account (see chapter 3), the inception of the pro-tactile movement as such can be located between 2006 and 2008 among the staff of the DeafBlind Service Center (DBSC), and in particular, among three DeafBlind staff members--Adrijana, Lee, and Jodi.
4.1 “The Family Was Almost Dead”: Degradation of the visual habitus
Prior to Adrijana’s tenure as director, institutional positions of power were not occupied by tactile people. From the novel perspective of a tactile director, there were fundamental problems with DBSC as an organization that needed to be addressed. First, although DBSC provided crucial services, there was a sense that the organization was uninviting to the people and the community it served. As Adrijana put it:
The family was almost dead. It was like the Addams Family. No character, no spirit, no nothing. It was just a vacant, bureaucratic feeling.
This problem was operating on several levels. From a visual perspective, Adrijana’s sense that DBSC was inhabited by the living dead might have seemed odd or unexpected. However, when sensory orientation shifts slowly, as it does for people with Usher Syndrome, what counts as self-evident shifts with it. Eventually, a gap opens up between DeafBlind perspectives and dominant perspectives, sometimes causing serious problems (as was the case for DBSC and its relation to the people it aimed to serve). These problems were caused by the degradation of the visual habitus (see section 1.2.1 in chapter 1).
For example, in 2006, I conducted two months of fieldwork, during which time I made a habit of people-watching with a DeafBlind woman named Helen. We went out in Seattle to places we might have gone anyway--a farmer’s market, a restaurant, the dog park--and I would describe what I saw, adjusting the focus of description as instructed. On one such outing, we were wandering around in Seattle’s Capitol Hill neighborhood, and we happened upon an art opening. The following is taken from my field notes written afterward.
I started with the hammers. Helen said not to bother, she wanted the feet. So we found a corner and started with the feet, which required attention to the legs. “The toe is planted and the heel is swiveling right to left and back again,” I say. “I don’t understand, show me,” Helen says.
So I plant my toes and swivel my right foot. Helen pats down my leg, while I continue. She makes it down to the toes and back up again, and then says she gets it. She imitates me and asks if that’s it. I confirm.
“Woman or man?” she asks.
“Woman.”
“Is she talking to a woman or a man?”
“Man.”
“Next.”
It turns out that that woman was not the only woman talking to a man and swiveling one of her feet back and forth, pivoting on the toes. There were others. Helen notes that when a woman flirts, she is likely to engage in this particular movement of the foot. I move to the right. Two men are next to a very large sculpture of gears. They are facing each other, feet anchored.
“They’re not moving their feet at all?” Helen asks.
“Nope.”
“Men or women?”
“Men.”
“What about the rest of their bodies? What are they doing?”
“Their hands are in their pockets, their heads are nodding, almost imperceptibly, and they’re looking at the floor. Every once in a while, they look at each other and then quickly back to the floor,” I say.
“They’re looking at the floor and their hands are in their pockets?” Helen asks.
“Yep.”
As we made our way around the room, it became clear that these men were not the only ones with their hands in their pockets. There were others. In fact this was almost an entirely generalizable feature of the room. It was a room in which hands were pocketed.
“Feet anchored, eyes averted, hands in pockets.”
“Left foot anchored, right foot swiveling, hands in pockets.”
And it goes on like this, until Helen becomes concerned. She says, “What are they doing with their hands in their pockets? Isn’t this a party?” She hadn’t remembered that hearing people stand around with their hands in their pockets, since they’ve got their mouths and their eyes for talking and seeing and such. She said she must have known that before she was blind. We went over the room again, scouring for hands caught mid-activity, and there were almost no cases to report. She accused them of being devoid of feeling. She accused them of being cold. But after thinking about it longer, she said, “Those poor people! They have too many limbs! They don’t know what to do with them!”
For me, the pocketed hands, the averted eyes, and the swiveling feet all faded into the background as expectable features of an awkward social event. Helen, on the other hand, had been relying on interpreters to read social scenes for years, and this led to a deterioration of the visual habitus(1).
When interpreters used words like “party” and “art opening,” as I did, they prepared Helen for a place with particular characteristics. She expected to find certain types of people, dressed in a particular way, engaging in a certain type of interaction. Meanwhile, Helen’s perceptual schemes were shifting. While interpreters went on describing objects, scenes, and encounters in a visual field, she was filling in the details in ways they couldn’t have imagined. Interpreters were working within the limits of the language they were using, and that language contained forms with associated meanings. Meanings in any language are schematic and are only made definite as they are instantiated in use. Without the particularities of the visible environment, a distance grows between the categories and the phenomena they characterize and point to.
For example, in an interview, Lee explained that sighted people living in Seattle are familiar with downtown hotels. They expect to find automatic, sliding glass doors at the entrance. They anticipate the slightly squishy floor mat as they pass through the threshold. If they are holding a paper coffee cup, only a half-glance will be necessary to confirm the existence of a cylindrical silver trash can into which they can dispose of their cup. “It’s always the same!” Lee said.
However, she explained that DeafBlind people have, until recently, relied on sighted interpreters to navigate public spaces, preventing them from cultivating tactile sensibilities. As a result, Lee says, scenes like the following are likely to unfold:
A DeafBlind person walks into a [hotel], and runs into the garbage can turning the corner. They look shocked and tell the person they’re with that the placement of the trash can is not safe!
Outbursts like this strike others as unwarranted, since from a sighted perspective, the placement of the trash can is expectable. Lee pointed out that if the DeafBlind person were using a cane, and paying attention to their surroundings without passing through someone else’s visual perspective on it, they would notice regularities like this as well. It is not a matter of sensory capacity. It is a matter of orientation, the grasp that social actors have of being a body in space, and how their split-second evaluative responses to stimuli align (or not) with shared frames of social value. The further those responses drift from shared frames of social value, the more “odd” or “eccentric” the DeafBlind person appears.
I lived in Seattle and was involved in the DeafBlind community as an interpreter and in other capacities for 7 years before I went to graduate school. During that time, these events in which DeafBlind people responded to expectable stimuli in non-normative ways seemed quirky to me. However, as the pro-tactile movement took root, and discourses began to circulate, I began to see that they were symptoms of a serious and alarming problem: the visual habitus was degenerating.
I encountered this problem often in my interactions with DeafBlind people. For example, one day, I entered a coffee shop with a DeafBlind man. I told him there were several people in line ahead of us. He responded by repeatedly adjusting his footing, saying “Sorry. Sorry.” He clenched his fists and cringed, as if bracing for a collision. This kind of response to information was not uncommon. I would give a DeafBlind person a piece of information, and they would yell, “I’m sorry!” “I didn’t know!” or “I’m blind!”
When the habitus is intact, we respond to immediate triggers to act in expectable, appropriate, and otherwise normative ways. However, this process depends on access to the immediate environment and a process of socialization that helps us distinguish between relevant and irrelevant stimuli. DeafBlind people become jumpy and over-responsive because they receive triggers to act without the particularities in the environment needed to guide specific action. For example, if you are told that a sighted person is approaching and would like to start a conversation with you, you may feel the urge to turn your torso and face toward them, assume a particular posture, or express a particular emotion with your face. However, after many years of limited access to the bodies of others, you forget how to carry these actions out in ways that feel appropriate or natural. Over time, these failures accrue to the individual as the habitus degenerates.
A person without a habitus has no common sense. They run into ordinary objects and then act surprised that they are there. They stare past people, talk into walls, offer strange and unnatural smiles, and respond to routine questions by yelling, “I’m blind!” These events thrust DeafBlind people into devalued social positions. They come to be viewed as “developmentally delayed” or are talked about as “slow learners.” They become less appealing to be around, which leads to increased social isolation, and increased social isolation contributes to further degradation of the visual habitus. Over several decades, the DeafBlind person drifts away from any legible position in the social order.
Leaders of the pro-tactile movement saw these problems as rooted not in the failures of the individual, but in naturalized interactional structures. Their hypothesis was that DeafBlind people behave in non-normative ways because they don’t have enough direct, tactile access to their environment. Representations only make sense if they conjure experience, and too much reliance on interpreters had opened up a chasm between the two. In the terms employed here, they saw that habitus must articulate with field in order to be maintained, and rather than attempting to prop up the visual habitus, they opted to change the coordinates of the field.
The degree to which sighted people would be invited into this emergent social field had to do with assessments of their “attitude.” According to Adrijana, when she took over, DBSC was mostly staffed by people who privileged visual (and even auditory) communication practices, took them for granted, and were not particularly concerned with the exclusions those practices engendered. Although it wasn’t clear exactly what needed to be done, improving attitudes toward, and competence with, tactility and tactile communication practices was an intuitive first move. The sign glossed as “attitude” in this context diverges from the English meaning. It is treated as an almost inherent part of the person, and it has to do with the capacity to see things from a DeafBlind perspective. People are either capable of learning or they are not. There is no use trying to teach a person with a bad attitude to communicate, thinking they might one day contribute to the community in some way, because they probably won’t(2). People who have bad attitudes (or rather, bad-attitude people) can be surrounded by American Sign Language for 20 years and fail to learn it. They are inert at best and intentionally perpetuating power asymmetries at worst. Therefore, for Adrijana, solving the attitudinal problem, thereby enabling the emergence of a pro-tactile social field, meant replacing almost all of DBSC’s staff members(3).
For about two years, there was a lot of instability in the organization. I really wanted to have the right people in there doing a good job because DBSC is an organization that is there for DeafBlind people, and they had to feel comfortable coming in and getting what they needed.
However, it was not self-evident how to make DBSC a comfortable and appealing place for DeafBlind people. First, there was work to be done on the public image of DBSC as compared with other agencies and organizations in Seattle. Adrijana explained:
We compared ourselves to ADWAS [The Abused Deaf Women’s Advocacy Service]. They’re such a popular organization because they’re attractive to people. They have the auction. They’re an organization of Deaf women, and it is truly a Deaf environment. They don’t have phones, they have TTYs (or they did when they started up). Their board is required to know ASL, etc. The Lighthouse was attractive to people because of DeafBlind community class and Seabeck camp. But where did DBSC fit in? What was so great about DBSC? That was when the notion of pro-tactile came up. It started out really vague and narrow. It didn’t mean ‘tactility’. It meant ‘manual tactile reception’. The point was just to change people’s attitudes about tactile communication, as a modality, to say there’s nothing wrong with it.
ADWAS is known for being a very welcoming organization. Anyone who is willing to contribute to their mission of providing direct counseling and advocacy services to Deaf victims of sexual assault and domestic violence will be invited to participate in some aspect of the organization. However, if hearing people were to volunteer and/or work for ADWAS in an effort to contribute, but used spoken language to communicate, the services would no longer be direct and the mission would be undermined. Therefore, ADWAS has gone to great lengths to make Visual American Sign Language the primary language in which business is conducted. For example, as Adrijana mentions, there are no voice telephones in use. This means that there is no receptionist speaking English at the front desk, so when Deaf people enter the building, they are not immediately alienated.
At the same time, ADWAS actively encourages hearing people to participate as volunteers, staff members, donors, board members, etc. The only condition is that they adhere to Deaf norms of communication and interaction. ADWAS has been wildly successful and as Adrijana explained, this is not in spite of, but rather, because of the fact that they are a Deaf organization that serves Deaf people according to Deaf norms. Not unrelatedly, their fundraising events, such as their auction, have taken on a life of their own as vibrant sites of Deaf sociality in Seattle. Talk of a more inviting environment for DeafBlind people came about with a model like this in mind--but what would the DeafBlind version be?
4.2 “Everything We Touched Froze”
Adrijana called a meeting of staff members and some community members to talk about priorities for DBSC’s future. In this meeting, “pro-tactile” started out as a slogan that was used to sell DBSC, but at the same time, the more substantive idea of a “DeafBlind Friendly Zone” was raised. Adrijana explains:
We started using the words, but we didn’t really know what it meant. What does it mean to have a DeafBlind friendly zone? Well, tactile signing was important, and we just started thinking about things like that, which led to more and more discussion, and over time, it kept changing. For example, we started talking about why it was that if two people were talking to each other, and you walked up and put your hands on one of their hands, they would stop talking. Why not continue, so we can listen for a while? We wanted people to get rid of those habits that made it hard for DeafBlind people to move around a room, observing what was going on tactually.
Although it wasn’t clear yet what practices might be considered DeafBlind friendly, there were some things that clearly weren’t, such as this habit people had of pausing, or “freezing” when a DeafBlind person touched them. The freezing phenomenon had an eerie effect. Conference rooms, offices, and hallways seemed perpetually occupied by people who were suspended in mid-air. Adrijana said when she was with another person, for example, eating lunch and conversing, she would take a bite, and then feel the other person’s hand or arms to see if they were still eating or not. If they weren’t, she might say something to them. If they were, she might want to feel their hands take the food to their mouths, or maybe their jaw chewing, but every time she put her hands on theirs, they would pause, awkwardly, until she removed her hand. Or if people were standing around talking in the conference room before a meeting, she would approach them, put her hands on one of them, and hope that they would continue signing, so she could tell what they were talking about. Invariably, though, the conversation would stop. Either the people would stop moving, as if they didn’t know what to do, or they would ask her what she wanted. How was she to know what she wanted if she didn’t know what possibilities for wanting there were? How was she supposed to know what possibilities there were, if she couldn’t observe activity in her environment?
Usually, this kind of observation would be done with a visual interpreter, but interpreters were in short supply, and Adrijana often went without one. Furthermore, she didn’t think tactile observation was implausible in such situations, but in the larger community there weren’t any tactile frameworks for observation, so when it was done, it was confusing, irritating, or on occasion, even interpreted as inappropriately sexual. For Adrijana and several of her friends and colleagues, however, a disconnect had opened up between these community norms and their own emerging practices.
In 2006, I conducted two months of fieldwork, and during that time, I lived with Adrijana and her Deaf, sighted husband. They and several of their friends (both sighted and DeafBlind) had intuitively started developing tactile frameworks for observation. In 2008, I again lived with Adrijana and her husband, and worked at DBSC, and was integrated into a group of friends and colleagues who continued to develop tactile communication practices. Those of us who were routinely exposed to these practices no longer froze on contact, and without necessarily noticing, our boundaries around touch had been revised.
For example, when Adrijana and I would go out together, she would often start the encounter by touching my feet, feeling the type and texture of shoes I was wearing. She would feel for the style of pants at the ankle and then trace the fabric up the shin to the knee. From there, she would skip to the belt and feel for the thickness and the texture, pausing for a moment at the belt buckle--Small and discreet? Thick and clanking? Then she would move to the neckline of the shirt and do a quick scan of the sleeves before feeling the style and state of the hair--Still wet? Ponytail? Clean? Dirty? Straightened? Curly? All the while, she would be pulling in gulps of air through her nose, clearly gathering olfactory details as well. Finally, I would add any information that she wasn’t likely to discover--for example, if we were wearing the same color, I might mention that.
We usually disagreed about something. Adrijana thought our shoes were the same, and I didn’t. Or she would (in good humor) accuse me of stealing her style, and I would try to defend myself. These arguments often ended with her telling me to feel rather than look at the item under dispute and once I had done that, I would often concede. Visually there were differences, but from a tactile perspective, the similarities stood out instead.
Although we were close friends and roommates, this kind of thing felt no more intimate than a friend commenting on your clothes when they see you: I like your shirt. Or: Look! We’re matching! Outside of our small group of friends, however, it was clearly counter to the norm. In the broader community, people were still suspended in mid-air and lacking particularity. Attempts to fill in the details were continually thwarted. When Adrijana became the director of DBSC, the staff there was no exception:
Everyone was like that. Especially Deaf employees. If you came up and put your hands on them, they would either freeze or say ‘Hold on, I’m talking to someone.’ Or, ‘I’ll be done in a sec.’
In the past, Adrijana couldn’t always prevent this sort of response, but now that she was the director, changes like this were within the scope of her job responsibilities. It wasn’t just for her. It was part of making DBSC a DeafBlind friendly zone. Adrijana said that she reminded sighted staff members continually, and eventually, they continued signing or going about their work when she put her hands on them.
4.3 DeafBlind to DeafBlind Communication
The new staff included three tactile DeafBlind people: Adrijana, Jodi, and Lee. There were no communication conventions in place for three-way tactile communication. If there were more than two DeafBlind people present, interpreters would be hired to mediate. In an interview in 2010, Adrijana explained:
If Jodi and I were talking and Lee wanted to join, we had to figure that out. It wasn’t obvious to us at first, but we tried to follow our intuitions and find a way to communicate between the three of us. [...] We weren’t really reflective about it. We just kind of did what worked, which was signing with two [dominant] hands. Then when sighted people would join us, they would look confused--like how am I supposed to communicate with both of you at once? And we would tell them to sign with two [dominant] hands. We didn’t do that if we had to have a meeting for an hour. We did that for short meetings--10 minutes here, 10 minutes there. I didn’t want to explain things to one staff person, and then repeat myself with the second person. That would eat up too much time. So it was a good way of efficiently conveying a short message.
These practices quickly became naturalized among the staff at DBSC--so much so that they were surprised when others found them novel.
It became so normal for me in such a short period of time that I didn’t think about it. But when people saw it, they would respond--like ‘Wow! That’s so cool!’ And I remember saying, ‘Well, they do that at the Lighthouse, too,’ and being told that they didn’t do anything like that there. That was a big insight for me [...]. I didn’t even realize that that was the case until about a year later. I didn’t come to the realization that there was a discrepancy in how communication was happening inside DBSC and outside(4). It had all happened so naturally that we didn’t think about each little thing we did. No one really talked about it much. It was just an ongoing negotiation and people were expected to do what it took to make themselves understood and understand other people.
From 2006 to 2007, communication within DBSC was already moving away from reliance on interpreters, and toward direct communication between DeafBlind people. Conventions for communicating with sighted people that included more tactile practices were also developing. This shift eased financial and scheduling strains. DBSC had very limited funds, and interpreters are expensive. It also takes time to schedule interpreters, and in order to get the ones you want, they must be booked far in advance. These problems intensified as interpreter shortages became more severe (see section 3.2.1 on page 78 for more on this).
As Adrijana explained above, there were often situations in which an impromptu meeting required the presence of more than one DeafBlind staff member, and using interpreters was not feasible. In addition, Adrijana noted that people didn’t want to include DeafBlind people in their organizations or events because paying for interpreters for them was so expensive. Therefore, she said, “changing our communication practices could help solve that problem in addition to the day-to-day logistical problem of wanting to have short, spontaneous meetings.”
The process was kick-started because as soon as internal dynamics started changing for the better, there began to be friction with people from outside the organization who came to DBSC regularly and hadn’t been privy to the changes. That friction, Adrijana said, “made [the staff] more insistent and gave [them] the inspiration to get serious about establishing a DeafBlind friendly zone.” A certain repertoire of DeafBlind friendly communicative practices had become naturalized within DBSC, and their naturalization made it difficult to describe them explicitly. As Adrijana says below, even if outsiders wanted to learn (which was not often the case in the beginning), naturalization was a barrier to teaching them.
At first, I thought that communicating in a DeafBlind friendly way was commonsensical, or at least easy to learn. But I realized that people don’t like change. These were all big insights for me and I realized that I had to be more patient, take things in baby steps, approach people more gently. We had to ask people nicely. We didn’t want to post big threatening signs [...], so I decided we would just have to go with the flow more, and be patient about change. That process took about two years--from 2006 to 2008.
By the end of 2008, the internal dynamics of DBSC were greatly improved and efforts turned to increasing the relevance and quality of services. DBSC contracts with state agencies, such as the Department of Services for the Blind to provide specialized, direct services to DeafBlind people. Therefore, what counts as a legitimate service is shaped as much by the structures and categories of the state agencies as it is by the needs and desires of the community. Adrijana had to find ways of addressing the discrepancies.
We noticed, as staff at DBSC, that [ . . . ] senior citizens [were] coming in droves to discuss problems they were having. When we looked at what was going on, there usually wasn’t a problem. It seemed like they were home alone, socially isolated, going crazy, and had to invent a reason to come in and talk to someone. And then they would have to get caught up in some kind of imaginary problem as their only form of socializing. The advocate would get overwhelmed with all of this work that wasn’t really legitimate. [They needed to] have some kind of positive interaction. The goal was to relieve some of the problems that seemed to come from being isolated--paranoia, stress, etc.-- and it worked.
Given that severe social isolation was a real problem for older DeafBlind people, one might expect that they would have gotten together more often on their own. There were two main reasons they didn’t. First, even if they had, they wouldn’t have been able to communicate with one another in groups, since no conventions had been established for this. Second, there was what Adrijana called a leadership problem:
A lot of people were retiring, so what were they going to do? [ ... ] That problem became a first priority. [ . . . ] We asked the senior citizens to bring their own SSPs rather than DBSC being responsible for coordinating SSPs, and each month they would be responsible for planning an event themselves. We called that “leadership,” and we expected it to go alright. But then we found out that they weren’t doing anything. They weren’t finding their own SSPs, they weren’t planning their own events. It was really surprising. They had just gotten so used to someone else doing everything for them. They’ll find me an SSP, they’ll plan the events, and so on. Conversations often went like this:
DeafBlind senior citizen: I need a ride.
DBSC staff person: You find your own ride! Use the bus! Or call a cab!
And then nothing happened.
So that was an indication of what had been going on all this time--people had become [ ... ] complacent and unable to do things for themselves, or at least not used to doing things for themselves. So I got really frustrated, and they got irritated, being asked to do things they didn’t want to do and weren’t accustomed to doing. So my great idea didn’t work, because people didn’t just snap into the role that I had in mind. I had to try to do what they expected, rather than trying to make them the kind of DeafBlind people I thought they should be. So I hired a coordinator for the DeafBlind Senior Citizen program. The goal then, was for that person to figure out how to work with DeafBlind people to build leadership potential without making the mistakes I had made, moving too fast and expecting things to change too quickly.
Essentially, Adrijana was asking people who had spent many years in the role of “the served” to step into the role of the service provider. Theresa Smith, a long time ethnographer in the Seattle DeafBlind community, writes about the problems this division between those who provide and those who receive services has caused:
Agencies naturally take their direction from the people who establish, fund and run them. Agencies serving DeafBlind people are typically funded and run by people outside the community. [...] [Therefore] the people in positions of power and authority come from a different world than the people for whom the agency is established. This is a problem. Hearing/Sighted administrators and staff do not share the life experience (deafness, blindness) or socio-economic class (income and life style) of their clients. They do not even share a primary language and culture. Few professionals on staff and fewer administrators have native-like fluency in American Sign Language and Deaf culture [...] This creates an almost insurmountable gap in world view and in access to power. This difference in power has been institutionalized. [...] We want to move beyond the limits of the present to a future in which DeafBlind people have not only power but authority and control within these agencies established in their name.
Although there is a great deal of variation among DeafBlind people in terms of socio-economic class, life experience, access to education, etc., the roles of those providing and receiving services have historically been opposed and mutually exclusive. Therefore, if someone was receiving services, they were, by definition, not making decisions about how those services were administered(5). This led to problems like those that the senior citizens were experiencing. There was no agency contracting with DBSC to pay for social events as a way of alleviating social isolation. DeafBlind people knew this, so they had to make their attempts at socializing into a problem suitable for the services that were provided. One of the unfortunate side effects was that DeafBlind senior citizens were shaped by the negative and irrelevant role they were often left playing. They had to put on a performance of distress sufficient to justify a meeting with the advocate. Although they were experiencing distress, the nature and cause of the distress had to be disguised in order to alleviate it.
For DBSC’s staff, redirecting some funds and organizing social events was much preferable to sifting through the details of intentionally confusing stories and being overwhelmed by the number of clients who came in to tell them. Furthermore, Adrijana thought DeafBlind people shouldn’t have to be in crisis in order to have human contact. The order of operations should be just the opposite. They should have human contact in order to avoid crisis.
Therefore, she decided to use part of the advocacy budget to pay for minimal support to a DeafBlind Senior Citizen’s program. However, one meeting of the group required many volunteer interpreters (about two per participant), so soon after its inception, interpreters became a problem. Louise, the first volunteer coordinator of the DeafBlind seniors program, explained in an interview that the program had to be temporarily suspended.
Now we have a new director at DBSC, Adrijana, who asked me to work with the senior citizen’s program, trying to get it back on its feet, which I agreed to do. I have found volunteer SSPs who are ASL students. The students who have been helping have been absolutely wonderful. Right now we have 10 senior citizens in the program who are very happy to have the program back. But it is uncertain what will happen in the fall because many of our volunteers have to go to school. Some will find jobs. We need funding to pay for SSPs and interpreters. We want to get out of the house and learn more about the world. Many of us stay home for long periods of time, and are very lonely. Just yesterday I got a call from one senior citizen, who was crying because she was so lonely. She just wanted to get out of her house, but there were no SSPs available. It’s really bad.
The shortage of interpreters was the problem on the surface of things, but if interpreters weren’t used, there would no longer be a problem. This, however, would require a major transition where DeafBlind people learned to communicate directly with one another. If this could be accomplished, social isolation could be addressed without appealing to sighted people for support, and further taxing the already depleted interpreting resources.
4.4 A Vision for a Pro-Tactile Future
Once Adrijana took up her post as director of DBSC and replaced much of the old staff, she and her new staff found that many of the problems they hoped to address, when thought through, could be traced to the absence of a tactile field of engagement. Although they didn’t know how they would bring such a thing into existence, they thought that direct communication between DeafBlind people was a good place to start. However, many DeafBlind people didn’t possess the technical skill of tactile reception, so they wanted to find a way to make learning tactile reception appealing. They thought it was strange that in the past, sighted people had often been the ones to teach tactile skills to DeafBlind people, even though they didn’t use tactile reception to communicate. They thought that DeafBlind people should be the ones to teach it--not only because it was more practical, as they were the ones who really knew how it worked, but also because DeafBlind people should be able to turn this practical knowledge into expertise as such, which they cannot do without opportunities to teach. All of this went into the planning of a series of classes, which would be offered by DBSC to DeafBlind people, and which would be taught by DeafBlind people without the use of interpreters. The problem was that if they advertised the classes as having anything to do with going tactile, no one would sign up--especially not the ones who, in Adrijana and Lee’s view, really needed to sign up. Adrijana explained that “we knew the word ‘tactile’ would turn them off, so we changed it to ‘DeafBlind to DeafBlind class.’ That piqued people’s curiosity, because they didn’t already know what it was.” Most of the classes did not thematize tactility. They were about finance, cooking, wood-working, and other topics. The instructors, though, were all DeafBlind, as were the students, and no interpreters were provided. Tunnel vision and tactile people were thrown together and expected to communicate directly with one another.
People who had not yet gone tactile were encouraged to wear blindfolds, but not required to do so. Lee taught the classes, and one of her main strategies was to have discussion groups. She organized people into pairs sitting opposite one another, and then gave them a question to discuss. After 5-7 minutes, she had them rotate so that every person in the room discussed the question with every other person in the room. It seemed time-consuming, but she naturalized the process for the participants by saying “this is our culture” and “this is how we do things.” This way of doing things had benefits, which she didn’t state explicitly in the classes, but which shaped her approach.
It meant that there was more equality in access to information. When a group of sighted people are in a room together, they can all be looking at one another. Everyone knows what everyone thinks, what everyone feels, and what everyone says [ . . . ]. It doesn’t work to get everything through one person [an interpreter]. Then you’re totally disconnected from your environment and the people in it. I was interested in finding a way to make group engagement possible--such that you would feel actually connected to the people you were with and the place you were in.
At the time, the classes didn’t feel like an extraordinary success. People were resistant to the idea of having events without interpreters present. In an interview, Adrijana and I discussed reasons for this:
Adrijana: People already have their ways of doing things. Senior Citizens love to go to the monthly meetings [at DBSC] in order to talk to their SSPs! They love it because they get information from them. They don’t see DeafBlind people as a source of information since they’re behind on news all the time anyway.
Terra: But do you think that’s true that DeafBlind people don’t have any information to share?
Adrijana: I think DeafBlind people have a disconnect between information that they have and ways of expressing it. I think when SSPs share information, it gets their minds working again--connections start happening, and then they can share with other DeafBlind people. It’s like their brains come alive again, but they need a kick start.
Adrijana was talking specifically about senior citizens here. Most members of this group are fully or almost fully blind and, as was described previously, are suffering from some degree of social isolation. Social isolation is self-perpetuating. When you don’t talk to people, you don’t have anything to say.
For blind DeafBlind people, the situation is worse still; whatever information they generate in their daily lives is generated via primarily tactile means. However, there is no system of representation available to them for expressing knowledge produced tactually. Visual ASL does not always lend itself to the tactile dimensions of objects, encounters, and people. There is, in Adrijana’s terms, a disconnect between information that they have and ways of expressing it. This disconnect leads to a “liveliness” deficit, which makes social exchange difficult. Two people who both have a deficit of liveliness cannot help one another. It takes a person tapped into something--anything--to kick-start their brains so they can come alive again. Giving that up would have dire consequences. For fear of such a situation, several people dropped out of the DeafBlind to DeafBlind classes once they realized that no SSPs would be provided.
Among the people who did stay, there were further problems. One of the classes involved going to a coffee shop and using tactile communication in public. While many of the participants were willing to communicate tactually in a private class, they were unwilling to do it in public. Several people dropped the class at this point. Then there was the question of safety. Adrijana and Lee didn’t have a set of practices that they were teaching people for direct, tactile communication. It was more experimental than that. They wanted to see what would happen if they threw everyone together and didn’t invite any sighted people. This was OK for the first several classes, which were taught by tunnel vision people about topics that did not require hands-on activities (e.g. “finance”). But eventually, there was a class taught by Robert, a tactile person.
Adrijana said that “Everyone assumed since he was a blind DeafBlind person, that he would be with an SSP. But just like all the other classes, no one had an SSP. Several students dropped the class when they found that out. Robert felt demoralized.” I asked Adrijana if people gave a reason when they dropped the class and she said they had: “There are no SSPs and Robert is blind.” It turned out that, when pressed further, they didn’t feel safe. Robert was teaching wood-working and he was using a large, electric saw and a drill. Adrijana explains:
Before Robert even plugged in the machine, they were scared to death. Robert just wanted to show them the machine and they freaked out. They thought there would be SSPs there, and they would have more of an observational role, but that isn’t what we had in mind.
I asked Adrijana if she thought their fears were warranted, and she said that at first she didn’t think so. But then a while later, she was helping make a bunch of cloth napkins for a DeafBlind event with friends--both DeafBlind and sighted--all of whom had significantly more vision than she did. She fearlessly ventured forth with the sewing machine and ended up putting the needle through her index finger. “I laughed,” she said, “but it hurt like hell.” After that, she changed her perspective on the issue.
Part of the problem was that people didn’t trust their tactile experiences, and they didn’t trust that people would be able to reliably explain to them how to use this dangerous machine. They were right. Not only were their sensory orientations always shifting; there was also a definite disconnect between tactile experience and Visual ASL. In addition, there was a great deal of variation among the group in terms of sensory orientation, and there were no conventionalized practices that equalized these differences. All of this made learning how to use new, potentially dangerous equipment without the use of interpreters a bad idea.
In addition to the safety issue, the fact that group communication among DeafBlind people was not conventionalized yet meant that every little thing took effort. For three DeafBlind people to communicate with one another, one person has to know how to sign with two dominant hands and receive with one hand (not two). It can be annoying and/or frustrating to focus on such tasks while also trying to express a thought, or learn something, and many people felt that it was too much to ask. Two more DeafBlind people dropped out of the classes for these reasons.
I asked Adrijana if she thought that there had been an effect on language and communication practices, despite the initial lack of enthusiasm about the classes. She said, “What I think has been happening is that there is more overlap. Before there was a crystal clear separation between [tunnel vision people] and [tactile people]. Now they are mixing a little.” She went on to explain that homogenization of communication practices seemed like a big challenge.
There’s so much variation. Now we’re just trying to slowly close the gap between the two sides. That will help people to transition to our side--the tactile side-- and it will keep people from being able to reject us. They can’t do that any more. So my experience of the changes since 2007 really includes this narrowing of the gap and a recognition of the importance of it [ . . . ]. All this time I thought that it really hadn’t gotten any better and that was that. But deep down, I knew we had gotten off to a great start. It’s just that I had no idea how it would grow or if it would. That’s why I say it’s all very new, and things are changing very slowly. As far as how it will all end up, I think we have to wait five years or something to find out.
As for changes in actual communication practices, Adrijana wasn’t sure. She said that she knew that some things were new--like describing relative spatial relations by pointing to locations on the palm of the addressee rather than in space--but, she said, “In DeafBlind to DeafBlind class we never talked about it. We just did what we did. I don’t even know what we did [...]. Really, you’re asking me if things have changed and I don’t really know.” She said she thought things had changed, but it wasn’t clear when certain practices had come into use and how widely. She was certain that they didn’t teach any new communication practices in these first classes. People “just started picking things up from other people and incorporating what [they] liked. And then some of it stuck and was history.”
As I started my dissertation research, Adrijana and Lee were looking for another opportunity to teach classes like the ones they had taught before, but funding had been scarce, and they had been busy with other projects. I was looking for ways to systematically observe the changes in communication and language that had been occurring. I contributed part of my dissertation funding for a second round of classes, and we started having planning meetings in the Fall of 2010 and the classes started in January of 2011.
Adrijana and Lee prepared the content of the courses and selected and recruited participants. I helped coordinate logistics and took care of tasks specific to research, such as organizing the collection of video data and obtaining consent from participants. I and two other sighted people videorecorded the classes, but did not otherwise take part in them. There were two groups: Group A and Group B--each composed of five or six students and two teachers. Ten two-hour classes were offered to Group A over the course of five weeks, and ten two-hour classes were offered to Group B, also over the course of five weeks. In chapters 5-7, I show how the pro-tactile movement effected changes in sensory orientation and structures of interaction, and how, in turn, these changes began to influence the internal organization of the linguistic system.
Chapter 5
The Deictic Field prior to the Pro-Tactile Movement
In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, these Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the East, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars ...
--Borges, “On Exactitude in Science” in Collected Fictions
Prior to the pro-tactile movement, DeafBlind people relied on sighted interpreters to orient to the immediate environment. Using Visual American Sign Language (VASL), interpreters produced map-like instructions for interaction and exchange. However, as vision deteriorated and visual memories faded, the maps no longer corresponded reliably to any external reality, and deictic reference was strained. In this chapter, I argue that the problem stemmed from a deterioration of the deictic field to which deictic signs articulate and from which they derive meaning and efficacy. In addition, the perceptible ground of the deictic signs themselves became inaccessible.
Initially, DeafBlind leaders tried to address these problems by teaching interpreters to make maps that were more detailed, more life-like, and more compelling. At this point, the “interpretation” was no longer meant to provide orientation to the environment. Instead, it was meant to contain the environment. Ultimately, it became clear, as it did to Borges’ cartographers, that the closer the maps grew to the territory they were charting, the more useless they became.
Examining this tension between access and representation among DeafBlind people and sighted interpreters highlights two things. First, the deictic system, which is crucial for any map-like orientation scheme, must remain distinct from the deictic field to which it articulates. The former inheres in the linguistic system, while the latter is an integral part of the world (Bühler 2001 [1934], Hanks 1990). Second, it highlights the mutual dependence of these constructs in accounting not only for acts of deictic reference, but also for the role these acts play in maintaining the structure and utility of the linguistic system over time.
When relations between the deictic system and the deictic field broke down among DeafBlind people and a new deictic field began to emerge, the system did not merely re-articulate to the new field with no consequences for its internal organization. Rather, each was re-calibrated to the other by TASL signers in interaction, and the linguistic system was altered. This chapter focuses on the disarticulation of the deictic system from the deictic field that was in place prior to the pro-tactile movement. This process is the first moment in the larger reconfiguration of deictic relations.
This chapter contributes to my overarching argument in this dissertation--that languages do not emerge by abstracting away from the contexts of their use, but rather, by being integrated with those contexts in tighter and more restricted ways. In sections 5.1 and 5.2, I introduce the notion of the deictic field, drawing on Bühler (2001 [1934]), Hanks (1990, 2005, 2009), Schutz (1970), and Goffman (1964, 1981). In section 5.3, I show how interpreters were used to generate visual coordinates for orientation schemes, and how this strategy inadvertently prevented DeafBlind people from shifting toward tactility at an earlier point in the history of their community. I conclude that these practices led, over time, to a deterioration of the relations between the deictic system of VASL and its deictic field in the Seattle DeafBlind community.
5.1 The Signpost
In “The Deictic Field of Language and Deictic Words,” Karl Bühler identifies a subset of pointing gestures that function like “signposts.” He writes that
where the pathway branches, or in countryside lacking pathways, an ‘arm’ or ‘arrow’ is erected so that it can be seen from far off; an arm or arrow that normally bears a place-name. If all goes well it does good service to the traveller; and the first requirement is that it must be correctly positioned in its deictic field (2001 [1934]:93).
Like a signpost, deictic words such as here and there are combined with pointing gestures to create a perceptually salient sign that directs its recipient. For example, when a human “opens his mouth and begins to speak deictically, he says ... there! is where the station must be, and assumes temporarily the posture of a signpost” (ibid.:145).
The meaning of the deictic expression is not difficult to sort out because speakers and signposts “can do nothing other than take advantage--naturally to a greater or lesser extent--of the possibilities the deictic field offers them; moreover, they can do nothing that one who knows the deictic field could not predict, or, when it turns up, classify” (ibid.). In other words, possibilities for pointing are not infinite. The signpost merely clarifies potential ambiguities between, for instance, branches in a pathway, landmarks in a landscape, or one of a limited set of cardinal directions. A deictic sign is a signal to choose one path over another; it does not launch a trajectory into unstructured space.
Within a field of limited choices, the deictic sign, like the signpost, does two things: it names and it points. Its symbolic meaning derives from oppositions in the language (here is not there). Its indexical function derives from oppositions in the “pathway,” or rather, the speech situation, where it is inserted. Deictic words are, therefore, part of language, and language must be composed not only of symbols, but also of signals. When linguistic signs, both deictic signs and naming signs, are applied in the speech situation, they receive field values (Bühler 2001 [1934]:99). The most fundamental difference between the two hinges on where each sign-type receives those values. A deictic sign’s meaning is “fulfilled” and “made definite” in the deictic field, whereas a naming sign’s meaning is fulfilled and made definite in the symbolic field.
The idea that the meanings of signs are elaborated, added to, or in some way changed when they are instantiated is, according to Bühler, not controversial. What remains unclear is how far-reaching the consequences of this fact are for the rest of the linguistic system. In what ways is the linguistic system changed by the field values it accrues? Building on Brugmann (1904), Bühler pursues this line of inquiry by considering the role that gestures and other sense data take in complementing and otherwise mediating the meanings of utterances, thereby linking them to the speech situation.
According to Brugmann, gestures are coordinated with the utterance in and through a “perceptual image,” or Anschauungsbild (2001 [1934]:147). Bühler names several variously foregrounded, or activated, coordinate systems that can contribute to the perceptual image: the coordinate system anchored by the head (as “a kind of globe”), or head coordinates; the coordinate system anchored by the zero-point of the chest, or chest coordinates; the coordinate system anchored by the eyes, or visual coordinates, among others (ibid.). These systems converge on and “wander” within the “tactile body image,” yielding a synthetic sense of being in a place.
The perceptual image is relevant to language in the sense that it contributes to the I, here, now from which deictic reference is computed. However, Bühler goes further to ask how far the “‘perceptual image’ and its use for the representative purpose of language extend[s] into the entire structure of language” (2001 [1934]:147). Like Bühler, I am concerned not only with how changes in sensory perception affect the ability of DeafBlind people to resolve deictic reference, but also with what consequences these changes have for the structure of the language more generally.
5.2 Beyond the Signpost
Signposts and acts of reference differ in many respects. Broadly speaking, “the concrete speech event differs from the wooden arm standing there motionless in one important point: it is an event. Moreover, it is a complex human act” (Bühler 2001 [1934]:93). This difference opens onto many more. First, while both people and signposts occupy physical positions in space, humans also occupy roles in a way that signposts do not (ibid.). A human pointer is a speaker, and the person they are communicating with is an addressee. The words I and you vary only according to which of these roles is being occupied, not according to which person is occupying the role (ibid.:94).
For pronominal systems like this to work, there must also be conventional configurations of roles, and conventional ways of moving between them. These patterns settle out of the situated encounter (Goffman 1964, 1981) via habituation and routinization (Hanks 2005b:193). This introduces another layer of structure, which does not inhere in the deictic system of the language, but fits with it, or as Bühler says, “fulfills” it.
Second, deictic words direct and modulate attention in a way that signposts do not. The acoustic or gestural qualities of deictics are calibrated to these efforts. For example, when here is uttered twice, the second time more loudly than the first, its auditory qualities trigger both heightened and directed attention in the recipient. When I say here, you become receptive to the environment, scanning, before you analyze it, locating here, for example, in relation to there. An augmentation or change in receptivity occurs prior to identification. Deictic signs, in this sense, are “reception signals.” They are an inverted version of “action signals” like imperatives. Words like I and this “cause the gaze to turn (or something of the sort) and the result is a reception. The imperative come, in contrast, has the job of bringing about a certain action on the part of the hearer” (Bühler 2001 [1934]:122).
Third, unlike signposts, humans have sensory systems that come with certain limitations and affordances. According to Bühler, when the speaker speaks, the auditory signal gives off clues about the speaker himself as well as his location. These perceptual clues are put together with the visible location of the signal’s source and with other sense data contributing to the speaker’s localization. These aspects of speech production work in tandem to join the person to the role they are inhabiting, i.e. speaker (Bühler 2001 [1934]:151).
Finally, humans differ from signposts in that they can remember, imagine, synthesize, and categorize (Bühler 2001 [1934]:137-154, 203-215). This makes it possible for human communicators to establish a perspective, and furthermore, to establish a “reciprocity of perspectives” with their fellow communicators (Schutz 1970:183). Participants take for granted a certain degree of similarity between their perspective and that of their interlocutor. At the perceptual level, this includes assumptions about the mutual accessibility of the immediate environment, including people, signs, objects, events, and so on. When I say, “here,” pointing to an object, I take for granted that you can see what I am pointing at, more or less as I see it. In other words, “I take it for granted--and assume my fellow man does the same--that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa)” (ibid.).
Prior to the pro-tactile movement, perspectives were not reciprocal among DeafBlind people in Seattle. Despite the fact that the members of the community were all more or less blind, visual capacities and orientation schemes were taken for granted. It was as if everyone could see, could access visual memories, and could respond to stimuli as sighted people do. From there, accommodations were made on a case-by-case basis for individuals. Strange things transpired as a result. For example, eye-contact was still treated as a way of marking an interlocutor as an addressee, despite the fact that DeafBlind signers often had to be told where the addressee was before they could fix their gaze. Pointing gestures were still used, despite the fact that very few DeafBlind people could link such gestures to a referent. These practices led to greater dependence on sighted interpreters. If sighted people were not present and available to mediate, deictic reference could not be resolved.
This case makes clear that at some level, perspectives must be reciprocal for deictic reference to work. Perceptual access is, however, only one small part of what constitutes a perspective, and therefore must be considered within a broader analytic context. Objects of reference are individuated against an indexical ground, or an “origo” (see Figure 5.1, taken from Hanks 2009:12). The origo
may be the [speaker], the [addressee], the relation between them, or some other aspect of context, depending upon the case . . . The relation between origo and object may be spatial, distinguishing for instance relative proximity, inclusion or orientation. But space is just one sphere of context. Other spheres attested in deictic systems include time, perception (Tactual, Visual, Auditory), memory versus anticipation, and what we might call the force of the deictic (Presentative, Directive, Demonstrative, Referential, non-Referential). [ . . . ]. In addition to these functions, any one of which may be conventionalized, deictics in use pick up lots of other pragmatic baggage. They tend to be very sensitive to whether the referent is an object of mutual knowledge or not, or whether one or another participant has special claim over the object (by authority, ownership, habitual familiarity) (Hanks 2009:12).
Figure 5.1: The Structure of the Deictic Field
Among DeafBlind people, sensory capacities and orientations shift idiosyncratically. Everyone loses vision at different rates and in different ways. Prior to the pro-tactile movement, this splintering of perspectives was addressed by compensating and accommodating as needed. As a result, the indexical ground of deictic reference began to erode--at first along perceptual lines, and then in a broader sense as common knowledge became more difficult to generate and maintain. In this chapter, I examine the role of sighted interpreters in addressing this problem and the reasons that alternate strategies were eventually employed.
5.3 Displacement in Interpreted Interactions
Prior to the pro-tactile movement, interpreters described environments in the same way that a person would describe an environment to a non-present person, for example, a person on the phone. This approach was effective insofar as the environment could be reconstructed via memory or imagination. According to Bühler, memory and imagination work together like a “recording device ... that gives the organism ... a sort of orientation table for its practical behavior” (2001 [1934]:145). In this view, I, here, now is located in relation to past and anticipated experience, all situated in overlapping coordinate sets produced by sensory systems (visual, tactile, vestibular, etc.). Relations between coordinate sets accumulate, extending out around the present moment like roads or pathways, which structure movement through, and orientation to, space.
For example, in Figure 5.2, you see a schematic image of a sighted person orienting to a door. The projected line of travel follows from a visual orientation scheme. After DeafBlind people lose their sight, they continue orienting to objects in their environment in this way, despite the fact that their visual system no longer generates the necessary clues. This is an effect of habituation, as well as of dynamics and constraints in the social field (see chapter 3). In order to adjust these habituated patterns so that orientation is organized around perceptible clues, DeafBlind people can receive “Orientation and Mobility” training, or “O&M.”
A person who has adjusted their orientation scheme in this way will orient to objects differently, as in Figure 5.3. Given this orientation, the pathways that extend out around the traveler will snap to a different grid. For a person attuned to tactile relations, a diagonal path through a room, like the one in Figure 5.2, is entirely unstructured, providing no clues as to where the door might be located. Therefore, an alternate route must be taken. Using a cane, some kind of orienting line must be identified, otherwise known as a “shoreline.” For example, the line where the wall meets the floor is a shoreline. If a tactile person follows this smooth orienting line with their cane, they can be confident that it will eventually be disrupted by door frames and other protrusions. Over time, intuitions grow stronger about how and where lines of travel intersect and where various protrusions are likely to be. Potential trajectories extend out around the DeafBlind traveler. Overlapping coordinate systems anchored by sensory systems converge on, and are elaborated by, this grid.
When visual orientation schemes deteriorate, it becomes more difficult for DeafBlind people to navigate independently. Prior to the pro-tactile movement, this problem was not addressed by cultivating tactile sensibilities or attending more closely to tactile cues. Instead, sighted people were increasingly relied on as interpreters and guides. The goal, in relying on interpreters, was to trade dependence at the sensory level for autonomy at higher levels of processing--for example, decision-making. The interpreter guides the DeafBlind person to the rack of shirts, tells them what colors there are, describes the styles, and the DeafBlind person decides which one they want. In 2006 and 2008, I recorded dyads composed of one
Figure 5.2: Visual Path
Figure 5.3: Tactile Path
DeafBlind person and one sighted interpreter running errands like this.1 I found that most interpreters did not paint vivid scenes of the environment. Rather, they used the few words that were necessary to guide the DeafBlind person through familiar scenarios.
When a DeafBlind person enters their bank, for example, where they plan to deposit their paycheck, they need to know where the end of the line is. This goes without saying, and upon entering, the interpreter guides the DeafBlind person to the end of the line. Once they are in line, the DeafBlind person needs to know how many people are in front of them and how quickly the line is moving, so they know how to stand, whether to strike up a conversation with their interpreter or not, etc. Once they have reached the front of the line, they will need to know when one of the tellers motions to them to come to the window. The details of the gesture are unimportant, as are the physical and personal characteristics of the teller. There is no cue that means stay, so any deviation from silence will mean proceed to the window. Once at the window, the DeafBlind person will need to know when the teller is ready to receive the check, so they can coordinate their actions with the teller’s. At each turn, the visual interpreter must focus on visual cues in the environment that will help the DeafBlind person execute their check-depositing plan.
The visual information that is relayed to the DeafBlind person is a tiny fraction of what the interpreter sees. These bits of information are sufficient because the bank is not experienced by the DeafBlind person as vague gradations of color or disorganized centers of warmth and cold. He has been to banks before, and in particular to his own bank. Those prior visits have led to a set of expectations about banks. In familiar places like this, action can take on a binary character:
Is there a line? Yes or No.
If yes, find appropriate place in line.
If no, proceed to neutral location near tellers.
Has the teller signaled? Yes or No.
If yes, follow interpreter to teller.
If no, remain in current location.
Communicative signals like the one produced by the teller are interpreted as instructions to act in very specific ways. They are interpretable because they are embedded in an orienting scheme, which has been “recorded” over time and, crucially, because DeafBlind people are habituated to the environment. In these contexts, interpreters tend to say things like: “Your turn,” “Go ahead,” “pull [the door handle],” “Prescription number please,” and so on. This information allows the DeafBlind person to choose from a very narrow range of possibilities--push or pull, move forward or stay, etc. The automaticity observed in these cases is a result of many years of bank visits sedimenting into a field of limited choices.
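The binary character of this routine can be made concrete with a short sketch. The following fragment, written in Python purely for illustration--the function and cue names are mine, not part of any interpreting protocol--treats each interpreter cue as a selection from a small, pre-structured set of actions:

    # A minimal, hypothetical sketch: each cue from the interpreter selects
    # one branch from a field of limited choices.
    def bank_routine(cues):
        """Walk the check-depositing plan, selecting actions from binary cues."""
        actions = []
        if cues["line_present"]:
            actions.append("find appropriate place in line")
        else:
            actions.append("proceed to neutral location near tellers")
        # Any deviation from silence means "proceed to the window."
        if cues["teller_signaled"]:
            actions.append("follow interpreter to teller window")
        else:
            actions.append("remain in current location")
        return actions

    print(bank_routine({"line_present": True, "teller_signaled": False}))
    # -> ['find appropriate place in line', 'remain in current location']

The point of the sketch is that each cue carries almost no information in itself; it merely selects a branch in a structure that years of habituation have already built.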
Equally important, however, is an alignment between the overlapping coordinate systems anchored by the sensory systems in the body and the broader orientation table these systems are absorbed by. Since the visual system of the DeafBlind person no longer aligns with the rest of his orientation table, he relies on visual data provided by his interpreter. Alignment is thereby maintained by distributing the perceptual field across two participants, only one of whom has full access.
In order for the orientation scheme to snap to a tactile set of perceptual coordinates, the DeafBlind person would have to be able to identify correspondences between a perceptible quality and an object acting as signpost. In other words, the DeafBlind person would have to be able to enter the bank and sort out for themselves where the beginning of the line is, when the teller is signaling for them to come, and so on. They would then have to find corresponding values in their visual memory and adjust the expectations that guide movement through the environment, yielding a coherent orienting scheme. However, prior to the pro-tactile movement, the strategy did not involve reconfigurations and realignments like this. Instead, the orientation table was kept intact and a surrogate see-er was inserted, who could provide the minimally necessary cues for routine action.
5.3.1 Useful Interpretations
The type of interpreting that involves minimally necessary cues is known as “useful interpreting” (Nuccio and Smith 2010:122-159). In useful interpretations, emergent aspects of activity are, by definition, not included. If useful interpretations are the only kind of interpretation a DeafBlind person has access to, signposts start to hover above an increasingly irrelevant ground. They are isolated from the extended grid they were once a part of, and therefore, no longer mark moments of decision in a complex network of potential trajectories. Instead, they are like mileposts along a singular and undifferentiated path. It is not possible, given this state of affairs, to deviate from a series of pre-planned actions. Therefore, despite attempts to preserve autonomy, very little remains.
In 2006, while I was conducting two months of fieldwork, DeafBlind leaders were looking for ways to increase the autonomy of DeafBlind people, and the kinds of barriers built into the interpreting process were a main focus. In the pre-tactile era, the solution seemed obvious: visual interpreters needed better training. Instead of providing only the most minimally useful cues, it was thought, they should also learn to attend to emergent particularities, or the “interesting” aspects of setting (Nuccio and Smith 2010:122-159). Interesting aspects of setting included things that could not be readily referred to a type, a category, or a grid. This kind of input would open up possibilities for action, allowing DeafBlind people to deviate from the plan, become distracted, fascinated, or surprised, and eventually, to have genuine choices in how they moved through their environment with an interpreter.
Over the next couple of years, between 2006 and 2008, conversations on this topic became more public than they might otherwise have been because DBSC received grant funds from the Department of Education to write a curriculum for training visual interpreters. The final draft of the curriculum was published in 2010 by Jelica Nuccio and Theresa Smith. Sections written for intermediate and advanced sighted interpreters provide ways of moving beyond the minimal instructions needed to complete practical tasks, and into the excesses and particularities that cannot be immediately referred to categories, roles, or structures. In order to collect a range of visual data, distinct “modes of attention” were incorporated into the model. These visual data were supposed to fill in where memory had receded, thereby maintaining visual orientation schemes.
5.3.2 Four Modes of Attention for Maintaining Visual Orientation Schemes
As the visual field deteriorates, it becomes increasingly difficult to act on the basis of minimal cues. In order to maintain and repair the visual field, the attentional repertoire of interpreters was augmented. Interpreters were good at providing clues that would be immediately relevant to the next step in a conscious plan, particularly when the plan was highly scripted, as in the banking scenario. However, given the training they had, they were less able to venture into the details of the situated encounter. For example, they couldn’t capture possible but unrealized moves in an interaction. They couldn’t grasp transitional moments that turn situations into encounters (in Goffman’s sense), or cues that signal types of encounters as distinct from particular encounters. They were also not trained to capture habitual behaviors or routine patterns. Most of this receded into the periphery of their awareness and was, therefore, hard to retrieve and objectify.
However, DeafBlind leaders identified these dimensions of interaction as key for maintaining the deictic field and they thought that interpreters could learn to incorporate them, developing a kind of artistic practice. They attempted to formalize instruction for doing so in the curriculum. Four types of visual information were defined, according to the modes of attention that produce them. Together, these categories were meant to generate both useful and interesting interpretations (Nuccio and Smith 2010:126-7):
Passive seeing is not looking at any one thing in particular (as when walking down a familiar street) but absently noticing things as they come into view.
Focused looking is when reading, threading a needle, or looking at a painting.
Monitoring is being focused on something else but being aware of changes and ready to respond (as when having a conversation with a friend but monitoring the actions of the children, or having a leisurely dinner but watching the time so you’re not late for the next event).
Scanning is a way of quickly shifting focus or attention across a broad area, looking for something specific, for example: moving focus across an area in search of one particular thing (scanning to see where I put my keys); moving focus across an area for one type of thing (scanning the picnic area for an empty spot); or moving focus around a broad area for a sense of place (scanning a friend’s apartment the first time you enter).
For purposes of training interpreters, engaging distinct modes of attention is an ethical matter. If the interpretation is reduced too much to immediately useful cues produced by a focused mode of attention, it can devolve into instructions that do not allow the DeafBlind person to make their own decisions. The choice to continue on course or abandon that course for some other requires potentially relevant information, which must be gathered via different modes of attention. Therefore, Nuccio and Smith separate the modes of attention engaged by the interpreter to produce objects of attention from the process of meaning assignation that follows. They explain:
We use our vision to gain a sense of place, to feel oriented, and know where we are. Accordingly, we feel safe or tense, relaxed or focused and so on. We ascribe meaning to what we see. What we see is interpreted by us to mean something. We evaluate what we see (2010:127, original emphasis).
Ideally, it is the DeafBlind person who assigns meaning to objects of attention. If they are highly trained as well (there is nothing natural about the knowledge required to work with visual interpreters), they might even become skilled at deciding when the interpreter should switch from one mode of attention to another, and instruct them accordingly. If the sighted person imposes meaning, then the interpretation is going to give the DeafBlind person access to the interpreter’s experience of the environment rather than their own, and agency will be lost. Therefore, Nuccio and Smith identify restraint in meaning assignation as part of the ethics of visual interpreting, and developing this skill is a focus of trainings at every level--from beginning to advanced.
Restraint in meaning assignation requires a mode of attention that dwells in the situated details of the present moment, without leaping too quickly to categories, schemes, and types. Much of this can be accounted for with Goffman’s notion of the situation, particularly “scanning” and “monitoring.” Bühler’s language-user does a lot of focused looking, so that is not difficult to account for given the framework that has already been established, either. However, the category of “passive seeing” fits neither Goffman’s framework nor Bühler’s.
In 2006, I attended a workshop for sighted interpreters on “visual analysis” where passive seeing was introduced. Lee, the DeafBlind instructor, said that with this mode of attention:
The goal is more to evoke an image that the DeafBlind person can then interpret. Tap into the mood of the place, the passive aspects. Fill in the background, the texture of the scene, so the DeafBlind person can be free to make their own decisions about how to interact with their world. You can’t substitute your opinion for visual analysis and expect that to be informative.
Lee and the other DeafBlind instructor, Adrijana, went on to perform a role-playing exercise that illustrated the difference between conveying an “opinion,” such as, “That man over there is friendly,” and conducting “visual analysis,” where details of the scene are relayed as close to the perceptual level as possible. The role-play in the workshop was set in a restaurant. The instructors were interacting, but saying very little to one another. The students were instructed to ignore the dialogue and attend to the “feeling” of the interaction, which they would be asked to report on later. There were several DeafBlind people attending the workshop who were using sighted interpreters. A few moments into the exercise, one of those visual interpreters interrupted the role-play to explain that without dialogue there was nothing to interpret.
The instructors explained that the point of the workshop was to see that when nothing is being said, the real work begins. Some examples they gave were the positioning of shoulders, the movements of heads, the direction and consistency of eye-gaze; light flows and responses to them; details about clothing, shoes, and jewelry, including the way they move, and are adjusted, both habitually and idiosyncratically; the particular rhythms of foot-tapping, hand-tapping, and the coordination (or not) of those rhythms between interlocutors and the broader surround.
Some of the data produced by this mode of attention goes to the habitus and its articulation with the social field. The conveyed cues act as triggers (if not immediate triggers) to act or to speak in particular ways.2 DeafBlind people grew up sighted and therefore developed a sighted habitus. If you tell a person with a visual habitus about the posture a person is assuming and what type of jewelry they are wearing, they will have some clues about what kind of person that is. In other words, bodily comportment, clothing styles, etc., are all visible cues that helped refer people to particular positions in the social field, and prevented them from being referred to others. This is the snap-to function of habitus and field.
However, there are also modes of attention that generate sense data which hover in the space between, or are in excess of, any scheme or pattern. Disorientation, confusion, fascination, and the sensation of falling in love are all organized by modes of attention like this. In each of these states, there is a sense of immediacy that resists objectification and analysis. These are the phenomena which, for some period of time, fail to snap to any grid of intelligibility. Nevertheless, we are overcome, carried away, drawn in, and otherwise directed by these modes of attention. In this sense, they restrict and guide our actions. In particular, neither the focused looking of a map-reader nor the scanning of the sign-post follower can generate a here or a we that is charged with enough intensity and indeterminacy to be readily distinguishable from descriptions of places or groups of people. More than anything else, this is what is at stake in Nuccio and Smith’s category of “passive seeing.”
In The Passions of the Soul, Descartes distinguished between three types of perceptual activity. First there are perceptions that we refer to external objects. The mechanism for this kind of perception works so that objects or bodies produce movements in the external sense-organs (for example, the eyes, or the hands), then the nerves carry those movements to the brain, and the brain imprints an idea of the external object on the soul. This kind of perception includes things like hearing a bell ring, or seeing a light (1985 [1647]: 337). Second, there are perceptions we refer to the body. The mechanism for these is the same as the first, except that we judge them to be already in us, and not external to us. They include “hunger, thirst, and other natural appetites” as well as pain, cold, and heat. These two differ from perceptions we refer to our soul, which constitute Descartes’ third category of perception.
This third kind of perceptual activity involves “the feelings of joy, anger and the like, which are aroused in us sometimes by the objects which stimulate our nerves and sometimes also by other causes” (1985 [1647]: 337). These perceptions are defined by our inability to refer them to an identifiable, proximal cause. We end up referring them to the soul, not because they are generated in the soul, but because their cause is ineffable. Like all other forms of perception, the passions of the soul, or the “affects,” describe a process of being affected by external bodies. Unlike other forms of perception, we experience the cause of an affect as ineffable. Affects link us, mysteriously, to others. Ineffability is charged with potential. It heightens our awareness of the immediate surround and others in it, giving us a sense that we are really “here”--that we are in something together.
DeafBlind people wanted to get as close as they could to an intense, immediate, charged present, and they saw sighted people as a portal. This posed a challenge for the interpreter--to generate descriptions that were as concrete and indeterminate as reality. One of the ways this could be done was to include too much detail in visual descriptions, triggering a kind of “reality effect” (Barthes 1984:141-154). The reality effect, for Barthes, is a literary maneuver that involves writing in superfluous detail, drawing attention to things that are “neither incongruous nor significant” (ibid.:142). He argues that such details, only when provided in great excess, can end up conveying something of the character or atmosphere of a place. Each thing remains insignificant, but the cumulative effect of all of that insignificance is a sense of concreteness and immediacy.
For DeafBlind people, too many years of receiving “useful” interpretations caused types, categories, and schemes to peel away from the particularities surrounding them. Therefore, there was no way of distinguishing places from types of places or people from types of people. In an attempt to repair this problem, “passive seeing” was introduced to interpreters as a mode of attention that could restore these distinctions in two respects. First, it would fill in the ground, or horizon, of routine patterns of action and exchange, thereby repairing the trigger-response loop that keeps habitus and field aligned. Second, it would feed particularities and excesses that do not snap to any grid or scheme into the indexical ground of deictic reference, creating an intense, indeterminate here to inhabit.
This strategy was ingenious, but it did not pan out, for several reasons. First, the literary talents of interpreters varied widely, and great heights of artistry were not often reached. Second, there was no way to fill in the background fast enough. Even as interpreters scrambled to describe every detail of every scene, it was not enough. Reality was perpetually flat, despite every attempt to bring it back to life. As a result, DeafBlind people eventually lost interest in the visual world and, as we will see in the next chapter, efforts shifted toward generating new forms of tactile immediacy, in which sighted people had no role. One of the things that prevented forms of tactile immediacy from forming earlier (apart from the socio-historical dynamics discussed in previous chapters) was the persistence of participation frameworks built around visual access and orientation.
5.4 Participation and Access Prior to the Pro-Tactile Movement
Participant frameworks are the emergent configurations that communicative agents occupy in the unfolding of an interaction, while participant frames are the repository of regularities that emerge in participant frameworks across encounters (Hanks 1990:137-187).3 Participant frameworks require participants to assume certain bodily configurations, and these configurations become regularized (or not) along with other aspects of interaction. In this section, I examine the relationship between participation and access prior to the pro-tactile movement by looking at the bodily configurations made possible by common participant frameworks.
In describing these frameworks, I also intend to emphasize for the reader how complex interaction became as a result of radically asymmetrical modes of access among DeafBlind people.
In the previous sections, I have discussed interactions between DeafBlind people and sighted interpreters as they move through social and physical space. The participation frameworks I examine here involve interpreted interactions where the focus is the exchange of utterances. For example, the DeafBlind man on the right in Figure 5.4 is standing on stage giving a presentation to an audience of DeafBlind people. The interpreter next to him relays visual cues, such as a raised hand, from the audience.
The audience is filled with dyads composed of one DeafBlind person and one interpreter.
Figure 5.4: DeafBlind Presenter (right) with Sighted Interpreter (left)
For example, in Figure 5.5, the man on the left is DeafBlind and the woman on the right is a sighted interpreter. The interpreter copies the presenter’s signs, so they can be received tactually by the DeafBlind person. Each DeafBlind audience member using tactile reception must have at least one interpreter dedicated to them. Therefore, if there are 10 DeafBlind people present, there will be at least 10 interpreters working at any given time. In participation frameworks like these, DeafBlind people do not have direct access to one another. Instead, utterances are channeled through several relays before reaching the intended addressee(s).
Figure 5.5: DeafBlind audience member (left) with sighted interpreter (right)
This was the norm prior to the pro-tactile movement, and it meant that all of the emergent dimensions of interaction--the moment-to-moment adjustments, the embodied particularities of a smile, flushed cheeks, subtle shifts in posture, etc.--were not available to the DeafBlind recipient. They only had access to disembodied utterances and the name of the person occupying an abstract participant role (e.g. “speaker”).
Participant frameworks are supposed to act as the repository of regularities in interaction (Hanks 1990:137-187). However, without access to embodied particularities in the physical and interactional environment, stores grew thin. As visual memories faded, it became more difficult for DeafBlind people to imagine how disembodied utterances were being brought to life around them. It also became difficult to participate in the situated encounter in convincing ways. DeafBlind people ended up depending on interpreters to direct their attention, tell them who they were talking to, where to stand, what orientation and posture to assume, etc.
This reduction of immediacy to displaced roles and disembodied utterances took the automaticity and the appeal out of interaction. Everything required conscious effort; people were flat and uninteresting; deictic reference was difficult to resolve; the exchange of utterances was stilted and arrhythmic. However, prior to the pro-tactile movement, abandoning interpreters and engaging in direct, tactile communication was not an option since there were no participant frameworks available for organizing tactile access. Everyone was out of reach.
In addition, each DeafBlind person was losing vision at different rates and in different ways. Some people spent a lot of time in Orientation and Mobility training, others did not. Some people established relative spatial relations tactually (as in Figure 5.3) and some people established relative spatial relations visually (as in Figure 5.2). Some people spent most of their energy reconstructing visual scenes around degraded and partial visual data, while others turned more quickly toward tactility. Individuals were compensating in idiosyncratic ways. At the most fundamental, perceptual level, this contributed to the deterioration of reciprocity in interaction.
For example, DeafBlind people who had only a small tunnel of vision left would back up farther and farther from their interlocutor in order to see them. People who communicated like this were identified as “tunnel vision people.” When this strategy no longer worked, the DeafBlind person would be forced to use tactile reception, thereby becoming a “tactile person.” Being a tactile person did not mean that a tactile orientation scheme had replaced a visual one. It meant that VASL signs were received tactually, rather than visually, and sighted social roles were no longer available.
Once people “went tactile” they could no longer communicate with their tunnel vision friends or co-workers. Two tunnel vision people could stand far away from one another and communicate directly (with greater or lesser success). However, the procedure for a tunnel vision person and a tactile person was as follows: each time the tunnel vision person assumed the role of speaker, they would move to where the tactile person could touch them. Each time the tactile person assumed the role of speaker, the tunnel vision person would have to back up. It wasn’t clear to the tactile person when the tunnel vision person was in position, though, so they might start signing before the tunnel vision person had gotten situated. The tunnel vision person was not likely to use tactile reception, even temporarily, because it would thrust them into a blind social role, and that move was seen as irreversible (see Chapter 3). Given this state of affairs, there was nothing reciprocal about the here occupied by a tunnel vision person and the here occupied by a “tactile” person. In this and other ways, the indexical ground of deictic reference was disjointed.
For these reasons, communication between DeafBlind people across sighted and blind social roles was far too cumbersome, and it rarely happened. Likewise, communication between tactile people was difficult because there was no stable deictic field organized along tactile lines. Direct communication was blocked by many layers of mediating structure in the social and deictic fields, all of which had been built up around visual capacities and modes of orientation. Although much of it had nothing to do with vision directly, taking vision out of the center caused the rest of the structure to collapse. Interpreters were not really able to solve these problems. However, the sheer diversity of orientation schemes among DeafBlind people left little alternative. It seemed impossible to imagine a scenario in which DeafBlind people could communicate directly with each other.
5.4.1 Participant Frameworks as Compensation
DeafBlind people came from different backgrounds and had very different ways of communicating. On top of this, they had different sensory capacities and orientation schemes. Interpreters dealt with this by accommodating each individual according to their needs. Therefore, if there were 15 DeafBlind people at a presentation, there were likely to be almost as many routes of transmission--each one constrained in different ways. To manage this, each interactional setting had to be pre-structured on a case-by-case basis. Planning communicative events like this required a great deal of expertise because, unlike most routine encounters, nothing in this context was taken for granted. In other words, there were no mechanisms for linking basic participant frames to the situated present in the unfolding of the interaction. Therefore, the interaction had to be, quite literally, planned.
Trudy started coordinating interpreters as the community was coming into being in the 1980s and she has been involved ever since--as an interpreter, interpreter coordinator, and in many other capacities. She has the kind of mind that can grasp the complexities of nonreciprocal interactions, anticipating beforehand where the sight lines will be, where tactile access is necessary, how many interpreters will be needed, what skills those interpreters must have, if there will be any personality conflicts, and on and on.
In an interview, Trudy provided me with some schematic representations of typical interpreting scenarios. As she described them, she sketched the configuration of objects and bodies on a notepad and explained in spoken English what types of scenarios would call for the configuration. I had a videocamera focused on the notepad, and the microphone picked up our verbal exchange. The audio was transcribed, and I reproduced her sketches in digital form using Microsoft Word. Because Trudy and I assume a lot of shared background knowledge, her descriptions require some supplementary explanation. Drawing on my experience as an interpreter and participant in the community, I fill in as much as seems necessary to make Trudy’s examples legible to the reader. The examples I provide do not constitute an exhaustive list of interactional frameworks mediated by interpreters, nor do they include all of the examples that Trudy described, but they do give a sense of how interaction was organized prior to the pro-tactile movement. They also give the reader an opportunity to appreciate the complexity of the mechanisms that were required to compensate for a lack of direct, tactile access to the situated encounter, and the kinds of routinized regularities that settle out of them.
A Banquet
One of Trudy’s first scenarios involved a tunnel vision person attending a banquet, or more specifically, a fundraiser luncheon. In this case, the DeafBlind person is sitting at a large, round table. For a person with tunnel vision, such scenarios are impossible without an interpreter, even if everyone else is Deaf and using VASL because conversations jump around, and without peripheral vision, you don’t realize when someone is bidding for a turn by leaning forward, or raising their hand slightly, or giving off other fairly subtle cues that they would like to take the floor. Figure 5.6 is a representation of the sketch that Trudy
Figure 5.6: A Banquet
drew while she was explaining this configuration. The solid black triangle represents the position of the DeafBlind person. The solid black rectangle and the white rectangle both represent interpreters working with that person. Below is a transcript of her narration that accompanied the sketch:
Sometimes if it’s a fundraiser luncheon . . . something where there’s a table, there’s a round table and the interpreter’s over here [draws the solid black rectangle], the DeafBlind person is over here [draws the solid black triangle], and they’ve got [tunnel] vision [draws the arrow]. But [then] waiters are bringing food, things are happening over here [points to the area to the left of the black triangle]. Then the ‘off’ interpreter sits here--the team interpreter . . . This [solid black rectangle] is the ‘working’ interpreter and this [white rectangle] is the ‘feed’ or team interpreter. [T]hen their role is tactile information.
When Trudy says “tactile information,” she does not mean information acquired via tactile modes of access. She means information acquired via visual modes of access, which is described to the DeafBlind person, who is using tactile reception. Therefore, we can consider this interpreter the visual interpreter, while the interpreter represented by the solid black rectangle, who focuses on utterances, is the language interpreter. When the server comes to ask for everyone’s order, the visual interpreter tells the DeafBlind person that they are approaching, while the language interpreter translates the server’s utterances. When someone gestures as a bid for a turn in the conversation, the visual interpreter interprets those gestures, while the language interpreter translates the utterances of the person who takes the floor.
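This division of labor can be thought of as routing two typed streams of events through two dedicated relays to a single recipient. Below is a minimal sketch in Python--the event types and labels are hypothetical, invented for illustration only:

    # A hypothetical sketch of the two-interpreter configuration in Figure 5.6:
    # utterances go to the 'working' (language) interpreter, visual happenings
    # to the 'feed' (visual) interpreter; both streams converge on one recipient.
    def relay(events):
        """Route each event to the interpreter responsible for its type."""
        for kind, content in events:
            if kind == "utterance":
                interpreter = "language interpreter"  # solid black rectangle
            else:
                interpreter = "visual interpreter"    # white rectangle
            print(f"{interpreter} -> DeafBlind recipient: {content}")

    relay([
        ("visual", "server approaching from the left"),
        ("utterance", "May I take your order?"),
        ("visual", "woman across the table raises her hand for a turn"),
        ("utterance", "I have a question about the budget."),
    ])

Even in this idealized form, everything that reaches the recipient has passed through one of two narrow, typed channels.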
Even with two interpreters working in sync, the stream of information that is provided is necessarily a gross reduction of what is happening in the environment and at the table. Interpreters tend to focus on utterances and minimal visual context needed to interpret those utterances. If there are other DeafBlind people at the table, their utterances are translated in the same way that the utterances of sighted people are translated (as opposed to being exchanged directly). Therefore, although both DeafBlind people might be using tactile reception in some capacity or another (i.e. with the sighted interpreter who is providing supplementary visual information), the field of engagement is organized along visual lines, and utterances are designed for sighted addressees.
A Tunnel Vision Presenter on Stage
As part of my fieldwork, I attended bi-monthly classes sponsored by the Lighthouse. The class is known as “DeafBlind class” and it functions like a local newspaper. It is a venue for sharing news and also an opportunity to learn about new things that are not directly related to work. One class that I attended was a Discovery Channel-style presentation about earthquakes, given by a Deaf sighted person who is well-known in the community. Another class was an introduction to yoga, taught by a DeafBlind woman. At another class, representatives from the Port of Seattle came to address concerns about the airport, and DeafBlind people stood up and told them their stories about difficulties they had encountered with airport personnel and physical accessibility. This helped the representatives understand how they could improve access for DeafBlind people, and it also provided a forum for DeafBlind people to share their experiences with one another.
Before, during, and after class, DeafBlind people communicate mostly via their interpreters, or they communicate with their interpreters and other sighted people who attend. Direct communication between DeafBlind people is rare. I understood Trudy’s description of the presenter-on-stage scenario largely in this context, since this was where I saw DeafBlind people (tunnel vision and tactile) on stage presenting. The number and positioning of sighted relays becomes complicated very quickly. For example, in the scenario in Figure 5.7, a tunnel vision person is giving a presentation. He is standing on stage, and the interpreter next to him is making sure that he is facing the audience, so sighted interpreters can see his signing clearly. If he drifts off to one side, or rotates his body at all, the interpreter will give him cues to adjust his position. If a person in the audience asks a question, their utterance takes the following route.
First, the “DeafBlind Question Asker,” in the lower right hand corner of Figure 5.7, stands up and asks a question. The “platform interpreter” copies the utterance. Next, the interpreter seated at the base of the stage copies the utterance again. The presenter has visual
Figure 5.7: A Tunnel Vision Presenter on Stage
access to this interpreter through his tunnel of vision, and this is how the question finally reaches him. It is done this way because if the presenter had to scan the audience with his tunnel vision, searching for the person with a question, it would take far too long, so the interpreter seated at the base of the stage acts as a stationary animator through which utterances are funneled. This is just one example of many participant frameworks, which together constitute a compensatory mechanism that allows DeafBlind people to approximate visual ways of listening, watching, and interacting.
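Under the same illustrative assumptions as the earlier sketches, the route can be summarized as a fixed chain of animators through which each utterance is copied, hop by hop:

    # A hypothetical sketch of the relay route in Figure 5.7: an utterance is
    # copied at each hop before it reaches the tunnel vision presenter.
    RELAY_CHAIN = [
        "DeafBlind question asker",      # originates the utterance
        "platform interpreter",          # first copy, visible from the floor
        "interpreter at base of stage",  # stationary animator in the presenter's tunnel of vision
        "tunnel vision presenter",       # final recipient
    ]

    def route(utterance):
        """Print each hop the utterance takes along the relay chain."""
        for sender, receiver in zip(RELAY_CHAIN, RELAY_CHAIN[1:]):
            print(f"{sender} -> {receiver}: {utterance!r}")

    route("Will there be interpreters at next month's class?")

Each hop adds delay and strips away the embodied particularities of the original production, which is part of what made such configurations feel stilted from the inside.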
In contrast to unmediated participant frameworks, in frameworks like these the machinery of interaction often intrudes on the explicit aims of participants. A successful presentation like this is a feat of communication engineering that is possible only due to the work of a highly trained and very experienced interpreter coordinator who, like Trudy, has the kind of mind that takes into account (in advance!) all possible routes of information transfer, sight-lines, visual capacities, communication skills, etc.
In addition, everyone in the room is wearing clothing that contrasts with their skin color--if their skin is light, they wear black shirts with high necklines. If their skin is dark, they wear teal or white shirts with high necklines. That way, if a tunnel vision person is looking at you, they will be able to see your hands against the background of your body more clearly. There are curtains hung behind the presenter to block out visual noise and there are large pieces of yellow tape on the stage to help DeafBlind presenters keep a visual orientation to their audience. All of this constitutes a compensatory mechanism that allows partially sighted and blind DeafBlind people to approximate visual modes of interaction.
However, as vision is lost, approximation becomes less and less convincing from all perspectives and further compensation is necessary. For example, the following scene unfolded in DeafBlind class (recorded in my field notes during the class):
Allen is doing the announcements today. Someone is standing behind him, changing his position, presumably so that people with low vision can see him, and maybe so he is facing a natural direction for the sighted members of the audience. When he is nudged, he is hyper-responsive--saying quickly and nervously, “Sorry. Sorry.” and moving over in a somewhat dramatized fashion.
I noticed that this type of response was most common among people who had very little vision and had not spent a lot of time cultivating tactile sensibilities. Many DeafBlind people, prior to the pro-tactile workshops, were hyper-responsive to feedback about visual communication norms. They turned jumpy and nervous, like a person who has received a signal that they must act, but has no structure to guide their actions.
These complex networks of mediation did, in fact, allow utterances to circulate among DeafBlind members of the community. However, particularities no longer accrued to the situation in which utterances were instantiated. A presenter on stage was just a speaker, and there was no sense of how that role was realized in particular bodily configurations, gestures, postures, mutual embodied adjustments, and other emergent phenomena. DeafBlind people may have known, on some level, that when they addressed an audience, they were supposed to orient their body in a particular way. But if they didn’t know where they were, or for that matter, where their addressees were, exactly, the trigger-response loop would remain incomplete.
Breakage in this loop can be compared to the jumpiness and anxiety one feels while lying in bed in an unfamiliar place, listening for intruders. You begin to strain and extend yourself toward whatever clues you receive, but if you don’t know what the clanking coming from the garage indicates, no description of it, no matter how detailed, can really bridge the gap. At this point, along with frustrations about always feeling left out, and one step behind, there is a sense that social norms are always being broken, or they are about to be broken--hence, the nervous side-stepping and repetitive apologizing. These are the limits of displacement, even when it is brilliantly orchestrated and highly elaborate, as it has become among sighted interpreters in Seattle.
A Meeting with a Facilitator
I have already touched on some of the constraints on interaction that derive from the social field and the effects of those constraints on participant frameworks and modes of access. The type of mediation that is provided does not vary straightforwardly according to the amount of vision that a given DeafBlind person has or doesn’t have. Looking at the meeting-with-facilitator scenario reveals some additional perspectives on how embedding in the social field bears on mediation strategies. In this scenario, the following categories of people are involved: DeafBlind people, Deaf sighted people, and hearing sighted people fluent in VASL. As in all previous scenarios, the common language is VASL and the deictic field is organized visually, with various compensatory mechanisms built in. In Figure 5.8, the white crosses
Figure 5.8: A Meeting with a Facilitator
represent hearing people who sign. The white circles represent Deaf, sighted people. The “F” represents the position of the facilitator, or the person running the meeting. The white rectangle to the right of the facilitator is a copy signer, who plays the same role as the platform interpreter in DeafBlind class. The black rectangle on the left side of the semicircle is a DeafBlind person who is using tactile reception, working with interpreters. The arrow traces the sight lines of the interpreters. The DeafBlind person is facing away from the facilitator, and the interpreter is facing the facilitator and copy signer. The interpreter can either watch people in the meeting sign and interpret what they say to the tactile person, or if those people are not visible to them for whatever reason, they can look to the copy signer for reproductions of what they’ve said.
The white triangle represents a tunnel vision person, and the black rectangle next to that is a “pointer” who directs attention to the current speaker. The assumption with this kind of compensation is that the tunnel vision person has enough vision to locate a person if directed to the general area, and once they have located them, they can see their signing without the use of an interpreter. However, because of their reduced peripheral vision, they cannot follow conversations with multiple participants and rapid turn-taking. Trudy explains the role of the pointer:
That person is [pointing]. They also . . . This is like, this is kind of that transition-- if [the tunnel vision person] missed the fingerspelling, [this interpreter] might do the fingerspelling. . . . By the time they need this [kind of interpreter], they should be doing more tactile [reception, and be able to understand tactile fingerspelling]. Should be. But if, for whatever reason, someone is not very tactile, then this is a way to do the transition, where there’s ‘You know, I’m a visual person . . . ’ and for some reason it really helps if this person is a Deaf interpreter for some reason. It’s more comfortable, or more . . . whatever. Not always.
Trudy talks about this type of compensatory mechanism as a transitional strategy. However, it fits more easily within the broader pattern of resistance to all things tactile. Tactile reception (not to mention tactile practices that require more than just the hand) was considered something you did only if you “had to.” Therefore, it came piecemeal. You use a pointer, then later you add on a couple of relays so you do not have to locate the actual speaker. Later, you back up from the interpreter so you can still see them in your tiny tunnel of vision. The very last step, and only when it is absolutely necessary, is to switch to tactile reception of VASL signs.
This makes sense if shifting to tactile reception is seen as the first stage in a transition toward greater and greater alienation from the social world. The most fundamental insight of pro-tactile theory is that this alienation is not necessary, and it can be avoided given a field of engagement organized along tactile rather than visual lines.
Notice that the explanation given by Trudy has to do with reflexivity regarding personhood. It is not about access. The history of the Seattle DeafBlind community, embedded in a broader history of disability, deinstitutionalization, the rise of sheltered workshops for the blind, "vocational rehabilitation" programs, the recognition of VASL as a language, the rapid uptake of the notion of "culture" in public discourse, and the application of this notion to Deaf, sighted people, yielded two basic, contrastive social roles: sighted and blind (see Chapter 3). Greater forms of authority accrue to the sighted role, and legitimacy accrues to visual modes of access and representation. Therefore, in an attempt to take up more valued social roles in interactions within their community, many DeafBlind people continued to use Visual ASL long after it had ceased to be a useful mode of access to the environment and to utterances. However, there was another reason as well: prior to the pro-tactile movement, there was a striking lack of any alternative.
Going tactile did not, until recently, mean entering a world dense with particularities and potentials, nor did it mean finally finding your people. Instead, it meant always being a description away from the charged reality of living with others. The social field kept the deictic field organized around visuality, despite the fact that participants couldn’t see. However, this discontinuity led to the slow degradation of the visual deictic field. This, in turn, meant that deictic signs in VASL did their work of referring less effectively as time went on.
5.5 Deictics in Search of a Field
Erosion of the deictic and perceptual fields became visible on occasions when deictic reference could not be resolved; when, for example, directions given in VASL to the kitchen in a friend’s house were misunderstood, when grammatical relations and phonemic distinctions that relied on the discernment of relative spatial locations were treated as ambiguous, or when descriptions could not be linked to the objects they described. In what follows, I discuss a few examples, recorded during Orientation and Mobility (O&M) trainings. Additional examples will be discussed in the following chapter, along with solutions that were eventually applied.
Exchanges between Marcus and his students in O&M trainings offered opportunities to examine the inadequacies of VASL deictic signs given tactile orientation schemes. In most circumstances, there are too many layers of potential confusion to single out the deictic sign as the culprit. However, Marcus is not a typical sighted person. He has been trained for many years to apprehend physical spaces in tactile ways. Nevertheless, the only language he had at his disposal was VASL. VASL is sensitive to the deictic field that has grown up around visual orientation schemes. Therefore, it is not surprising that VASL deictics were often ambiguous for the DeafBlind recipient.
5.5.1 Deictic Reference in the Transit Tunnel
After several failed attempts at finding a good starting place for orientation in the tunnel, Marcus explains to Helen that buses and trains take the same route through the transit tunnel, and he points to the line. Helen is shocked. She yells, "What! How?" and immediately pushes her cane out into the road to find the tracks. Marcus explains that when a train comes through, it uses the tracks, and when a bus comes through, it drives over the tracks. Once she feels the track with her cane, she has a perceptible link to an organizing line in the tunnel, and she begins to build up structure around this line.
The fact that both buses and trains take the same route through the tunnel is a crucial piece of information for establishing an orienting structure. It contributes to a retrievable "field value" in Bühler's terms, which is assigned to the meaning of the deictic utterance. However, the process breaks down for Helen because for her, the basic design of the tunnel cannot be taken for granted. Bits of information like this--the design of new transit structures, new clothing styles that sweep the urban landscape, new highways, new technologies (cell phones, iPods, iPads, smartphones, and so on)--are precisely the kinds of things that DeafBlind people miss out on. They are the topic of conversation in the general population only briefly before fading into the background of urban life. This kind of shared knowledge accrues to the indexical ground of reference, and when the language user does not have access to it, deictic signs become contextual receptors set to receive values that are no longer retrievable.
This problem is compounded by the fact that the signs themselves are positioned in the deictic field and access to them is restricted when vision is restricted. For example, Marcus describes the layout of the transit tunnel in a way that would seem unremarkable to users of VASL. He names places within the tunnel, such as "entrance," and then locates them in the space in front of his torso. Using a combination of signs like "right" and "left," he traces relative spatial relations between localized elements in space. After a few moments of this, Helen interrupts him, saying she doesn't understand, and she asks him to stop pointing to the "air."
This problem arises again when Marcus tries to map the length of the tunnel onto the cross-streets above ground (Figure 5.9). Marcus (represented by the figure on the left) raises his non-dominant arm up so it is parallel with his chest. Without touching his signing hand to his arm, he signs "6th," "7th," and "8th," above the arm, moving from the space just above the elbow to the space just above the wrist. With this information, Helen (represented by the figure on the right) would know that the tunnel was three blocks long and she would also know something about the location of the tunnel relative to the downtown grid.
Figure 5.9: 6th to 8th street above tunnel
However, Helen does not understand the description and asks him to refrain from signing "in space."
As you can see in Figure 5.9, the DeafBlind recipient only has tactile contact with the signer’s dominant hand. The non-dominant hand, which forms the ground against which relative spatial locations are established, is not available tactually. In both cases, there is no perceptible ground against which deictic relations can be established. Therefore, in addition to a lack of structure in the deictic field, there is also a lack of structure in the perceptual ground of deictic signs themselves. Over time, these problems accumulate and make it increasingly difficult to establish shared orientation schemes. One place where these problems become unavoidable is in interactions organized around the activity of direction-giving.
5.5.2 Direction-Giving
In a sighted world occupied by sighted people, things like transit routes are shared and ways of orienting to them become routinized in practices like direction-giving. As DeafBlind people lose their vision, they become increasingly alienated from these practices. In the beginning, they find themselves giving directions less and less, but later on, they find that they can’t understand directions either. This all points to a disarticulation of deictic signs from the deictic field, compounded by the breakdown of figure/ground relations in the signs themselves.
After Helen and Marcus boarded the bus on the way to the transit tunnel, Helen asked Marcus about the route. Marcus explained that the bus goes “down Eastlake, past REI, into downtown and then into the tunnel.” They were sitting across from me on a crowded bus, and shortly after he explained this, I lost sight of them because the space between us had filled up with people. So I don’t know how Helen responded to this explanation, but the description is worth some consideration. The bus passed by many locations, but Marcus mentions only one road, the name of one business, one area--“downtown,” and the destination for the trip, which is the transit tunnel.
For me, as a sighted person who is familiar with Seattle, this description is adequate because it distinguishes a limited number of feasible routes from one place to another. The city is not perfectly grid-like because it is built around several bodies of water. These bodies of water force traffic through several bottleneck bridges. From Greenlake to downtown, there are two feasible options--Interstate 5 or Eastlake. Eastlake crosses underneath I-5, and the two form an "X" when viewed on a map from above. They diverge as you enter the downtown area. At that point of divergence, REI appears as a salient visual landmark.
Part of the salience of the building is the architecture. It is a multi-story building the size of a warehouse, and the walls in one large portion of the building are made almost entirely of glass. In addition, REI is a camping and outdoor sporting goods store and Seattle is a place full of camping and outdoor sporting people. Even people who do not camp or engage in any kind of outdoor sports dress as though they do. Therefore, the building has been visited by many residents of Seattle and is likely to be familiar. Its salience as a landmark, then, derives in part from its size and eye-catching design and in part from widespread familiarity with it.
It is unclear whether Marcus' description felt adequate to Helen. However, it is safe to say that if a DeafBlind person were describing the route to another DeafBlind person, this is not how they would describe it. Many years ago, I was riding a bus along this very same route northbound when I noticed a DeafBlind person I knew, who happened to be fully blind, coming aboard. I took a seat next to him and we struck up a conversation. He asked me where I was going, I told him, and then we moved on to other topics. At some point, I stopped paying attention to where I was, but just before I would have missed my stop, the DeafBlind man interrupted our conversation and told me I had better get my bag because my stop was coming. I thanked him and asked him how he knew. He said that he sometimes gets off at that stop (the DeafBlind Service Center used to be located there) and he knew that prior to that stop, there are characteristic motions of the bus that he had sensitized himself to.
It would have struck me as odd if the DeafBlind man had said, "There is a cafe across the street with a giant spinning saucer on top," or if he had referred to some other visual landmark. Marcus, like this DeafBlind man, has learned to orient to tactile dimensions of setting. However, in this case, he did not produce a description based on a tactile orientation scheme. The reason is that Helen asked Marcus a question to which there is an appropriate and routine response for long-time residents of Seattle, who know that there are a limited number of routes from Greenlake to downtown. The routine association of particular questions with particular kinds of responses derives from the patterns of activity those questions and responses are embedded in, and the shared modes of access that participants have to those activities. Since Helen no longer has access to the visual dimensions of the route, Marcus' description did not articulate to any structure outside of itself. Although Marcus would be more equipped than most to understand why, he is still bound by routine patterns of action and exchange. Furthermore, the only language at his disposal was VASL, which responds to and is shaped by those routine patterns.
5.6 Conclusion
Stripped down to their most basic functions, deictic signs do two things: they name and they point. Both functions were disrupted by the deterioration of the deictic and perceptual fields among DeafBlind people. The naming function was disrupted because the ground of the signs themselves became inaccessible, rendering the "name" uninterpretable. The pointing function was disrupted because from the perspective of the DeafBlind recipient, there was not enough differentiation or density in the field to which the signs articulated. Around these two basic functions, additional layers of mediating structure also broke down, including orientation schemes, modes of access, structures of participation, conventions for maneuvering within those structures, and shared knowledge. Any act of deictic reference is undergirded by complex networks of overlapping coordinate structures. If the deictic system fails to shift with the deictic field, it ceases to function. In the next chapters, I will argue that as the deictic field was reconfigured across a group of language users, the deictic system shifted as a result. This process contributes to the grammatical divergence of VASL and TASL.
Heroic measures were taken by interpreters and members of the DeafBlind community before the community turned to tactility en masse. Almost every dimension of communication and interaction was mediated, channeled through complex systems of relays. Modes of attention were manipulated, literary devices were employed, and yet, in the end, it became clear that displacement was only possible given a reality that felt immediate, intense, and indeterminate. In order to act on this realization and build up new structures around tactile modes of access and orientation, a reorganization of the social field was necessary. Put another way, a prerequisite to changing the structure of the deictic field was nothing less than a social movement. In this sense, the deictic field presupposes the social field and its role in processes of language emergence cannot be understood in isolation.
Chapter 6
Reconfiguration of the Deictic Field of TASL
Prior to the pro-tactile movement, DeafBlind people relied heavily on sighted interpreters to access utterances, participate in interactions, and navigate physical and social spaces. Early on in the process of vision loss, interpreters were fairly effective. Eventually, though, the interpreter’s task became ludicrous. Filling in a missing word here or there became replication of entire utterances, which became replication of utterances and non-linguistic communicative cues, which became detailed descriptions of the crowd, the way light interacts with surfaces, the way styles among the youth keep changing. Interpreters found themselves doing cross sections of rooms, tracking patterns in the width and texture of pants, describing pale-skinned women sulking on the giant billboard above, or trying to capture the 5:00 malaise gathering itself on the inside of a city bus. In short, interpreters found themselves trying to reproduce reality in real time. Needless to say, such ambitions cannot be maintained, and even if they could, DeafBlind people eventually lose interest as their concerns and curiosities turn tactile.
When Helen was losing the last of her usable vision, she started responding to visual descriptions by laughing and yelling, “I’m blind!” One day, her husband told her that their dog had a dead mouse and was eating it on their living room carpet. He started describing the scene. She interrupted him saying, “I’m sorry dear, but your wife is blind as a bat.” Then she crawled onto the floor, opened up the dog’s mouth and smelled inside. She sniffed around the scene and felt the dog’s mouth where there was blood. She noted that blood does not have a distinctive smell, and her curiosity was satisfied.
Around this same time, Helen also started substantiating her claims about people with tactile facts. For example, one day she told me that the skin on Jodi's arms is soft all the way down, but when you get to her palms and fingers, it turns rough. Helen wondered what was going on over there at Jodi's house that made her hands feel like that. There was only one conclusion to be drawn, she said--that there is more to Jodi than there seems to be. Jodi is interesting; it's something about her discrepant textures and what they conceal about her home life. Then there was Joseph, whose signing, Helen reported, was often repetitive and light. She said that the rhythm of his false starts and the weightlessness of his movements suggested shyness, but she dwelled on the physicality of his hands longer than was necessary to reach this conclusion.
When DeafBlind people start talking like this, it is a sign that tactility has become a positive reality and is no longer an encroaching fear. Long before this moment, Visual American Sign Language had begun to feel inadequate to all involved. Directions to the bathroom in a restaurant are misunderstood. Stories vivid with visual detail conjure one-dimensional, faded scenes and are no longer interesting. Grammatical relations and phonemic distinctions that rely on the discernment of relative spatial locations become ambiguous.
There is not much an individual can do about problems like these, but in 2007, with the inception of the pro-tactile movement, DeafBlind people set out to address such problems collectively. Toward this end, a series of 20 pro-tactile workshops was organized for 11 DeafBlind participants by Adrijana and Lee, two DeafBlind leaders who had, at the time, been developing new tactile communication practices in their professional and personal networks for about four years. The workshops took place over the course of 10 weeks in the winter of 2010 and 2011(1). In this chapter, I analyze shifts in the structure of interaction that took place during the workshops. My central claim is that this transformation is not reducible to, or best understood as, a linguistic process, nor is it best understood as a cognitive process. Rather, it is an interactional process, which affects the organization of the deictic field.
The deictic system is analytically distinct from the deictic field. The former belongs to the language, while the latter belongs to context. The deictic system, like a collection of distinguishable signposts, can only point this way and that; in order for an object to be individuated, the signposts must articulate to distinguishable and external referents. Bühler compares the deictic field to pathways where corresponding signposts are positioned (2001 [1934]:93-6). The processes through which those pathways are carved out and navigated are not linguistic in nature (Hanks 1990, 2005). Rather, they have to do with the modes of access that participants have to the immediate environment, and the routine patterns in activity that make some pathways more common and more expectable than others (ibid.).
The deictic field is also not a social construct. In the social field, the body is evaluated against social frames of value. Habitual bodily movements, gestures, acts of touching, patterns in how words are pronounced, and so on are judged as polite or impolite, appropriate or inappropriate. Habituated motoric patterns like these accrue to the "habitus" via socialization processes, which unfold in ontogenetic and historical time (Bourdieu 1990 [1980], also see Chapter 1 of this dissertation). In contrast, postures, movements, and semiotic cues in the deictic field get recruited "enchronically" (Enfield 2009:10) in the back and forth of face-to-face interaction. Here, they function as turn-taking cues, backchanneling cues, signals to modulate and direct attention, etc. These signals are organized around, and constrained by, shared modes of access, and they require certain bodily configurations to be exchanged. Bodily configurations are associated, in more or less conventional ways, with participant frameworks, thereby persisting beyond a single interaction (or not).
These are analytic distinctions. In practice, the deictic field is always already embedded in the social field. This accounts for the fact that if it is considered impolite or inappropriate to touch other people or objects, tactile modes of access will never be established in the deictic field. Nevertheless, deictic phenomena do not yield to social or linguistic analytics, and the reverse is also true in each case. Therefore, the deictic field must be distinguished from the social field and the linguistic system before each can be productively linked to the others. Analytically isolating the deictic field, and setting it apart from social, linguistic, and cognitive constructs, is essential for generating a coherent account of the grammatical divergence of TASL and VASL.
In this chapter, I focus on two moments in this process, which are pivotal for the overarching analysis: (1) the reconfiguration of orientation schemes, and (2) a reconfiguration of participant frameworks and bodily configurations. In both cases, material clues (as Bühler calls them) were incorporated into, and subsumed by, the structures of the deictic field. Textures, densities, tensions, and temperatures were subsumed by rhythms, trajectories, and olfactory singularities. Unlike cognitive representations and universal human capacities, these are concrete things, which respond to and are subsumed by other concrete things. I argue that the reorganization of these material clues into new configurations yields channels through which the immediate environment can be grasped in reciprocal ways by tactile people. This, in turn, is triggering a reconfiguration of the deictic field of TASL.
I begin, in section 6.1, with an ethnographic account of how DeafBlind people establish new orientation schemes. The main argument in this section is that establishing an orientation scheme is not equivalent to building a conceptual representation. Rather, orientation requires the traveler to incorporate material qualities such as texture, density, and line into situated, location-specific patterns. In section 6.2, I show how the orientation schemes of DeafBlind individuals were aligned via conventionalization of participant frameworks and the bodily configurations they incorporate. Here, embodied particularities must be integrated with participant frameworks. Like the reconfiguration of orientation schemes, this amounts to a process of contextual integration, as opposed to a process of conceptual representation.
6.1 Establishing New Orientation Schemes
Prior to the pro-tactile movement, DeafBlind people in Seattle tried to maintain orientation schemes that incorporated visual coordinates. Those who had enough vision left occupied participant roles that were built up around those schemes. Attention-getting strategies involved waving a hand in the direction of the addressee. Signals for regulating turn-taking involved head-nods, nose-wrinkles, and visible shifts in body posture. People stood at visual distances from one another. Those who did not have enough vision to occupy participant roles and move between them produced and received utterances via a sighted interpreter. For this reason (along with social pressures discussed in Chapters 3 and 4), changes in sensory capacity were not generally followed by a reconfiguration of orientation schemes. Tools for the reconfiguration of orientation schemes have been available since the 1980s in the Seattle DeafBlind community via orientation and mobility or "O&M" specialists. In order to understand how these practices contributed to new orientation schemes, I observed six O&M training sessions, each one lasting between 2 and 3 hours, with a total of two DeafBlind people(2). These training sessions were led by an instructor I will call Marcus(3). My central thesis in this section is that reconfiguration of an orientation scheme is not primarily a matter of conceptual representation.
6.1.1 Learning to Fly: Orientation is in motion
On my first day with Marcus, we met Allen at his house and we all drove together to Alki Beach. Upon arrival, Marcus tells Allen that they will be starting in the same place they started last time. He draws Allen's attention to the strong smell of the water, and says, "Remember?" As they begin the session, Allen is nervous. We are all standing on a path that runs parallel to the beach, which is set back, near the road. On either side of the path, there are strips of grass, and further down there are obstacles, such as poles and stairs. Marcus hangs back and tries to interfere only when necessary for safety reasons, or when certain issues that he planned to address in the session arise.
Allen starts out holding his cane in his right hand. Marcus places his hand on top of Allen’s and explains that the arc of the cane should be only as wide as the shoulders. He tells me later that Allen has a habit when he first sets out of standing still and sweeping his cane across the entire width of the sidewalk and back again several times. There are reasons this is not allowed. One reason is that you can trip people who are walking by. But the more fundamental reason is that the cane, when used properly, is not a tool, but one element in a very precise relational system. Other elements include joints, such as the wrist, the knees, and the ankles, and the soles of the feet where they make contact with the ground. The relations are largely rhythmic and in order to cohere, forward motion must be consistent and focused.
The wrist snaps to the right, pulling the cane into its shallow arc. Pressure must be applied and relieved as necessary to make the cane float across the concrete on the sidewalk--too heavy, and it will get caught on things; too light and it will be uninformative. When the cane comes in line with the right shoulder, the wrist snaps in the other direction, pulling it again into its arc. Each time the wrist snaps, the leading foot raises up, off of the ground, and floats forward. As the cane comes in line with the shoulder, the foot is planted. A single rhythm must form in the stepping of the feet, the snapping of the wrist, and the tapping of the cane. When an obstacle is encountered, or the cane snags on a surface, Marcus says you do a “military one-two” recovery. Miss a beat and you’re lost. Marcus tells me that is why he continually reminds Allen of the importance of confidence and a positive attitude--because orientation is in motion.
The first stretch of the pathway is fairly clear, but further out, there are obstacles, such as curbs. The first time Allen encountered a curb, he stopped moving. He was, no doubt, focusing on restricting the arc of his cane and coordinating his joints and feet with its movement as instructed. He had a lot to think about. So when the cane slipped off the edge of the curb, he stopped cold in his tracks, and moved sideways instinctively. Marcus described this move as reactive and said it is the most dangerous response to obstacles.
When Allen shuffled sideways, his rhythmic field retracted, like a fountain being turned off, and he was left totally unprotected. Marcus was emphatic. When new information comes, you have to be able to "turn on a dime" because with good technique, you have very little reaction time. You are walking along--snap, tap, step, snap, tap, step, snap, tap, step, snap, bam! You hit a large metal pole with your cane. From that moment, you have the interval of one step before your face hits the pole. And if you respond in an arrhythmic manner, you risk complete disorientation.
Walking behind Allen, Marcus shares his observations with me: "The arc on the right is too shallow, the wrist is too stiff, the right foot is dragging..." All impediments to a smooth and coherent rhythm. In addition, Allen has a tendency to zig-zag from side to side on the sidewalk, adjusting his course as he reaches strips of grass on either side. This causes the protective field to become asymmetrical in addition to arrhythmic--an almost equally hazardous situation. Marcus repeats that "[i]t's all about your line of travel." If you don't pay attention to that, you end up in "pocket spaces"--doorways, entryways, staircases, or worse. Being able to walk straight is key.
I asked Marcus how any DeafBlind person who is fully blind can keep track of whether or not they are walking in a straight line. He said, "It's like flying. There are no visual points of reference like sighted people have, just proprioception. It's all in the feet, ankles, and knees. Information goes straight from the joints to the brain." Marcus told me a few weeks earlier that he wears socks made out of something like wet-suit material. He trains for marathons in them--running for miles on trails in the woods. He said they're better for your joints because your feet become sensitive to the ground and can respond in ways that are better for your body. In shoes, the connection to the ground is blocked, responsiveness in the joints is stifled, and the whole process is more coarse and, ultimately, more wearing. He says it would make a lot of sense for DeafBlind people to use shoes like this, though he has never asked anyone to try it. With the weakened proprioception of a shoed foot, movement is even more important. Marcus explained that that is why breakthroughs often happened while walking downhill. A couple of months into Allen's training, after he had been struggling to find his rhythm, this is precisely what happened. All of a sudden, while walking down a steep incline, rhythm, orientation, and movement aligned. Marcus said you could tell--something clicked.
Marcus contrasted the body-state of a person walking downhill (which is optimal) with that of a "curious traveler" (which is not optimal). In the ideal case, DeafBlind travelers use their mobility equipment in the same way that they use visual interpreters who do basic, "useful" interpreting (see previous chapter). They distinguish objects only insofar as the distinctions are relevant to their aims of traveling from one place to another. The difference is that when working with an interpreter, they are reliant on the sensory orientations of the interpreter, whereas when traveling with mobility equipment, they must rely on their own sensory orientations. Since they are not accustomed to tactility, Marcus says they must start by developing tactile awareness around materials--brick, concrete, gravel--the differences between them, and their patterns and sequencing. All of this has to be incorporated into the rhythm and the line of travel without causing any delay or disturbance.
In cities there are many doorways. Sometimes the material on the ground in the entryway has a different texture than the main sidewalk. This can sometimes be felt by the cane. Sometimes, entryways are set back from the rest of the wall, and form a negative space that is detectable with the cane, or with the "mini guide," which is a small, handheld device that bounces sonar off of surfaces, returning different intensities of vibration depending on how close the object or surface is. Marcus used these facts as a point of departure for later, more advanced lessons with Allen. For instance, the goal of one session I attended was for Allen to learn the route from his home to a bus he would be using regularly to get to school. The trickiest part of this route was the end. Once Allen had found the block where the bus was located, he had to find the actual bus stop. Standing at the corner, he couldn't be sure how far down it was. So Marcus taught him to count doorways. He did this by tracing the "shoreline" (any detectable, orienting line, in this case, the line that is formed where the walls of the businesses on that block come in contact with the sidewalk) until he found a gap. The first gap would be counted as "one."
There is no abstract structure that orients. Material fragments are concretely incorporated into a trajectory and a rhythm. A doorway is a tactile silence in the rhythm--no resistance, texture, or density. This silence is preceded by a hard tap against the brick-sided building and it is followed by the same. This sequence of material cues is incorporated into the pathway between the street corner and the bus stop along with other material clues, all of which guide the forward-moving traveler. It is not entire objects that get picked up and organized by the pathway, but material fragments, qualities, and “clues.”
Working with a visual interpreter, the vivid present is reduced to signposts that guide a pre-set plan. When working with a cane and other mobility equipment, the vivid present is reduced to bits and pieces of material. In both cases, excess, to some degree, drops out. However, there is one very significant difference. The minimal bits that are incorporated into orienting structures in O&M trainings are perceived tactually. When working with visual interpreters, the point is to have (indirect) access to visual stimuli and respond as sighted people would. This loop breaks down, though, when DeafBlind people can no longer reconstruct the pathways the signposts are pointing to. O&M helps rebuild those pathways, this time, along tactile lines.
Reconstructing the pathways in the deictic field is not only, or even primarily, about developing conceptual representations of the immediate environment. It is about cultivating modes of receptivity and responsiveness to the material qualities of actual things. Material qualities must be linked to the schematic map-like structures of the deictic system. If they aren't, the map is useless. This focus on material things distinguishes the deictic field (Bühler 2001 [1934], Hanks 2005b) from constructs such as Real Space (Liddell 2003:82) and Gestural Space (Rathmann and Mathur 2012:144), which link the linguistic system to non-linguistic phenomena by way of cognitive representations, thereby excluding actual material things, which resist our actions in particular ways(4).
DeafBlind individuals like Allen work hard to incorporate material elements into rhythms and trajectories, and over time, these patterns extend out around them like a grid, or subsume them like a forcefield. Orientation and mobility training is one place where they do this work, but as a result of the pro-tactile movement, individuals started looking for their own ways of cultivating tactile modes of orientation and access individually and in groups. However, orienting to the tactile dimensions of objects and events was not enough to transform the deictic field. The next step was to coordinate orientation schemes by establishing participant frameworks for direct, reciprocal, tactile interaction.
6.2 Participation Frames and Frameworks
Goffman’s work on participation frameworks begins with the insight that the roles people occupy in interaction cannot be understood by starting with one speaker and one hearer (1981:127). A common assumption that follows from this, says Goffman, is that interactions begin with one person who is expressing feelings and thoughts, and another person who is listening, until the speaker and hearer roles are exchanged, and the one previously listening begins to talk. This suggests that the speaker and the hearer are the only two people involved, and are the only two people who have access to the interaction. From there, necessary changes are made such as adding participants and nonparticipants, but the terms of analysis cannot deviate from the initial “statement-reply” format (ibid.:129).
Goffman argues that adding and subtracting from this basic format will never suffice. Instead, the primary categories themselves must be analyzed into smaller, coherent elements (1981:129). To this end, he turns away from the dyadic encounter (i.e. speaker-hearer) as a starting point, and toward the whole of a communicative event. The communicative event opens, he says, when participants turn "from their several disjointed orientations, moving together and bodily addressing one another" (ibid.). The event is closed when people break from shared orientation, "departing in some physical way from the prior immediacy of copresence" (ibid.). We can often recognize these events by "ritual brackets" such as greetings and goodbyes that mark the end of ratified participation (ibid.). When viewed this way, the encounter takes on an organization of its own.
Therefore, information is not simply added to the statement-reply format. Rather, our entire perspective on what counts as a relevant dimension of the encounter changes. We begin to ask questions such as-- how do conversations get started? How do topics get established as such? How is a “common information state” built up between participants? How are new participants brought up to speed in the conversation? What constitutes a “preclosing”? (ibid.:131). Many roles and functions become discoverable in the context of a whole interaction, which would have seemed otherwise peripheral. For example, in addition to the speaker and hearer, there might be people listening who are not ratified participants.
Goffman introduces two such cases: eavesdropping and overhearing (ibid.:131-2). Based on these and other examples, he argues that the precondition of ratified participation for the analysis of talk excludes all sorts of possibilities, which are in fact possibilities that participants are aware of and orient to. This is evidenced by easily observable behavior aimed at “managing accessibility.” Once the dyad is replaced by the interaction as a whole, many communicative activities other than stating and replying emerge. For example, the following (ibid.:134):
Byplay: subordinated communication of a subset of ratified participants
Crossplay: communication between ratified participants and bystanders across the boundaries of the dominant encounter
Sideplay: respectfully hushed words exchanged entirely among bystanders
Collusive Byplay: collusive subordinate communication
Collusive Crossplay: collusive subordinate communication within the boundaries of an encounter
Collusive Sideplay: collusive subordinate communication outside of the boundaries of an encounter
Each of these headings is a label for a type of communicative activity and each one hints at a certain configuration of participants and certain corporeal relations between them. However, multiple possibilities can be imagined in each case. For example, sideplay suggests that there are at least four participants--two who are communicating in some sustained way and two who break off from the dominant interaction to engage in some kind of subordinate communication. However, it could be that there are only three participants present, two of whom are engaging in sideplay, unbeknownst to the third. Or there may be many people involved in the dominant interaction and more than two break off to engage in subordinated communication.
It is also easy to imagine that the participants engaged in byplay are physically closer to one another than the participant(s) who are sustaining the dominant encounter. 'Hushed exchange' makes me think of whispering, and whispering makes me think of two people in physical proximity, one with a hand cupped to their mouth, leaning forward toward their co-conspirator. Alternately, one could imagine sideplayers who are on opposite sides of the room, communicating via a signed language using a reduced signing space that functions like "hushed speaking." Or maybe the dominant interaction is occurring in another place altogether and the sideplayers have joined via video technology. In order to have a side conversation, they move out of the video frame and press "mute" but remain physically distant from one another.
If Goffman's categories specified every one of these possibilities they would be of no use. They work because they are analytic constructs that describe regularities in interaction at some (unspecified) level of generality. At this point in Goffman's argument, we have gone from an a priori set of participant roles (speaker-hearer) and utterances with a priori functions (to state and to reply), to "the whole interaction," where neither participant roles, nor utterance functions are determined prior to activity. From there, the analytic vocabulary must be built up via observation of many interactions(5). Across these interactions, patterns begin to emerge.
This procedure implies an analytic distillation that leads to the more general categories and types listed above, which omit certain details and retain others (e.g. manner: “respectfully” and volume: “hushed,” but not physical distance between participants or mutual spatial orientation). So the totality within which categories emerge is larger than it looks, extending across many encounters, and yet, there is no conceptual framework that accounts for this larger unit of analysis, nor is there any way of accounting for the movement from particular to general. How is it, for example, that manner and volume make their way into the categories, but not corporeal relations between participants such as physical distance or mutual spatial orientation? According to what criteria and from what perspective were these selections and omissions made?
The participant frameworks and corporeal relations that were used in the pro-tactile workshops were new. Upon being established, they did not accrue seamlessly to the structures of orientation that had previously been maintained. Instead, new participant frameworks incorporated new corporeal relations and a broader reconfiguration resulted, which had consequences for the grammar of TASL. While Goffman provides a good starting point for understanding participant frameworks as a relevant unit of analysis, he is not helpful in trying to understand how new frameworks and bodily configurations can affect the emergence of new linguistic structures.
In order to address this question, we must follow Hanks in asking not only how the analyst moves from actual communicative events to the structures organizing them, but also how native actors schematize and maintain participant frameworks in the course of communicating to generate participant frames, and therefore, maximally expectable contexts within which signs are produced and received (1990:148).
First, Hanks argues that the language acts as a repository of conventional categories, and those categories are in a dynamic relation to the fields where they are instantiated (1990:148). For example, person categories in the deictic system of a language are linked to participant roles in the deictic field via reference and indexicality, so the use of pronouns "tends to sustain an inventory of participant frames by focalizing them, engaging them as ground for further reference, or both" (ibid.). Second, if asked, participants can draw on their understandings of participant frames and reason from them as a resource for working through potential interactional scenarios. So talk about interaction is another way that participant frames are generated and sustained. Third, genres can maintain participant frames by linking them to something larger than the individual interaction. Genres work by incorporating "typical participant relations as schematized aspects, thereby making them expectable, repeatable, [and] automatically inferable" (ibid.).
While each of these processes contributes to the creation and/or maintenance of participant frames, the overarching process that Hanks points to is habituation, which, he argues, "is more general than either language structure or discourse genres (but it is related to both)" (ibid.:148). He argues that habituation simplifies the practical task of managing participant frameworks and occupying roles within them. In part, this explains why the apparent analytic complexity of participant frameworks poses no practical problem for social actors in the course of an interaction (ibid.:149). In addition, habituation introduces a hierarchy into an array of participant frames. This results in a kind of "taxonomy" which contains a set of "basic level" categories.
Following Coleman and Kay (1981), Lakoff (1987:46-7), Lounsbury (1964:205) and other cognitive theorists, Hanks defines a taxonomy as "a taxonomic structure plus a set of terms, where the former consists of a hierarchy of inclusion relations among sets and the latter of a set of labels standing for taxa" (Hanks 1990:151). There is a "unique beginner" at the top of the taxonomic structure with subordinated, included levels beneath it. Two sets that are subordinated to a common taxa "contrast" with one another. Moving from top to bottom, specificity increases. Moving from the lowest to the highest level, abstraction increases. The "basic level" in such a structure is located neither at the top, nor at the bottom. Rather, it is located at an intermediate level, where the tension between abstraction and specificity is optimal for mirroring the structures of attributes in the perceived world schematically (ibid.:151). Perception is shaped by routine motor interaction with objects of perception. Therefore, the basic level is grounded in "habitual motoricity" (ibid.:152). For participant frames, the highest position contains the most abstract, most inclusive category of "participant frame" and
the sets subordinated to it might include: (ratified participants vs. non ratified participants), (producers vs. receivers), (addressee vs. other), (animator vs. author vs. principal), (message bearer vs. ultimate target) and (perhaps) (bystander [copresent unratified] vs. overhearer [noncopresent unratified]) (ibid.).
Now the task is to determine the basic level within the taxonomic structure. The basic level should correspond closely to the way that participants perceive participant frameworks, and should therefore be relatively simple, since participants do not generally struggle as they inhabit and manage those frameworks. Some clues about how participants perceive participant frameworks can be found in the conventional and commonly used labels participants have for participant frames. Those that are most consistently and frequently labeled are likely to be included in the basic set (Hanks 1990:152). Another kind of evidence is the default usage of a certain set of participant frames, which are altered according to circumstances that participants take to be exceptional in one way or another. In other words, the participant frames that are treated by participants as usual or expectable are likely to be included in the basic set (ibid.).
6.2.1 Basic Participant Frames in a Tactile Field
As new participation frameworks were being established in the pro-tactile workshops, the frames that had been shaped by routine motoric patterns in a visual world no longer exhibited the characteristics of basic level categories. That is to say that they no longer corresponded to the way participants perceived participant frameworks. Not surprisingly, labels for visual participant frames were quickly abandoned. The basic level in the taxonomic structure, and everything above it, had to be thrown out and replaced. This process began with establishing new participant frameworks, and over time, some developed labels, while others, which were used less frequently, did not. By the end of the workshops, participants consistently referred to a particular kind of two-person configuration using a specific sign(6). Furthermore, this label was used with great frequency. The same held for the label associated with a particular kind of three-person configuration.
In addition, participants began to approach interactions as though two or three participants were included and they adjusted easily and fluidly between those two configurations. However, when a fourth person joined the interaction, an explicit intervention was required, where participants would remind one another of the rule governing the extension of three-person participant frames to a four-person configuration(7). This is evidence that two and three-person configurations were treated as a default or basic configuration and other frames were treated as extensions or alterations of the default.
If (speaker-addressee) was a basic participant frame in a visual field of engagement, the corresponding slot in the taxonomy for a tactile field contained two categories: (speaker-addressee) and (speaker-addressees). While a distinction between one and two addressees does not have significant consequences for sign production in visual participant frameworks, it is highly salient in tactile frameworks, as we will see in Chapter 8.
Interestingly, it was not the configuration of participant roles that DeafBlind people thematized in their metapragmatic categories, but the bodily configurations. Therefore, in order to recognize the crucial corporeal component of these basic participant frames, I refer to them not as (speaker-addressee) and (speaker-addressees), but as "two-person configurations" and "three-person configurations."
DeafBlind people had to adjust to these new participant frames in many ways. One of the most important adjustments was in the motoric patterns that were fit to the routine tasks at hand. Motoric patterns cohered earliest and most completely around two and three-person configurations. In the early weeks of the workshops, participants struggled to occupy and manage frameworks since their visually derived participant frames had become obsolete. Working their way from the bottom up in the taxonomic hierarchy of categories, the immediate environment was, at first, overrun with specificity. This led to many disfluencies and frustrations in determining relations between speaker, animator, and author (i.e. is the person whose hand I am in contact with the one who is the author of this utterance?), how to address one versus two interlocutors, how to occupy the position of the "bystander," how to join an ongoing interaction without disrupting it, and so on.
The problem stemmed from the fact that the basic level was missing, so "category members" had no parent category. The motoric effects of this were visible in a wide variety of arrhythmias--widespread choppiness in bodily movements, extreme hesitance, awkward pauses, failures to maintain rhythmic sequentiality in conversation, collisions, accidents, and flat-out confusion. As the problems were worked out, corporeal relations began to fall into place and regular patterns emerged that allowed DeafBlind people to navigate participant frameworks and the transitions between them fluidly and with apparent ease. By the end of the workshops, basic participant frames were in place. All of this is highly consequential for the grammatical divergence of TASL and VASL, including sublexical structure (Chapter 8), the emergence of a new system for generating polycomponential signs (Chapter 9), and the reconfiguration of the deictic system (Chapter 7).
Two and Three Person Configurations
In both of the basic configurations, tactile contact between participants increased. For example, in Figure 6.1, Adrijana (left) is listening to Collin (right) using her left hand. Adrijana uses her right hand to provide tactile back channeling cues. In addition, Adrijana and Collin’s thighs are in contact from the knee to the hip. In Figure 6.2, Chantelle (center) is signing to Adrijana (right) and Nina (left). The legs of all three participants are intertwined up to the mid-thigh. In addition, the hands of both addressees are resting on one another and on the knee of the signer. In this kind of configuration, all participants have access to the feedback that is being exchanged, including things such as backchanneling signals, turn-taking cues, signs of boredom, interest, annoyance, and fascination.
If Chantelle produces an utterance with shaking, clammy hands, it will be construed differently than if she produces the same utterance with warm, dry hands and a clear, decisive rhythm. In configurations like these, utterances were re-united with the embodied particularities of their production.
Figure 6.1: Two-person Configuration
Figure 6.2: Three-Person Configuration
DeafBlind people began to respond to material clues in particular ways, and those ways of responding could be coordinated, given the kind of access that basic participant frameworks allowed.
Given basic participant frameworks, plus the embodied particularities that came with them, DeafBlind people had all they needed to elaborate, generating alternate frameworks as well. They could participate in a conversation, but they could also start new conversations, end conversations, overhear a conversation in which they were not previously involved, and observe the activity of others, even when utterances were not being exchanged. For example, in Figure 6.3, two people are seated, playing a game of tactile pictionary, while the two people standing behind them are observing their activity. Establishing basic participant frames made derivative frameworks like this intuitive(8).
Figure 6.3: Tactile Observation
As DeafBlind people established new orientation schemes, the material dimensions of objects were incorporated into motoric and perceptual patterns in new ways. The same is true for patterns in interaction. For example, playing tactile pictionary with direct access to your competitors, you pick up on all kinds of things--you know that playdough is being rolled out, but beyond that, you know how it is being rolled out--at what pace, with what intensity, and to what effect. From there, you can speculate about the temperament of the roller, or you can notice traces of their culinary habits, mixed with the smell of their dog and their body, and you can associate this unique olfactory combination with them, like a fingerprint or a signature that can be recognized anywhere. You know that there is another player there as well, but beyond that, you have access to the tension in the tendons and muscles of their hands, arms, and neck. From there, you can speculate about their level of interest in the game, or you can begin to appreciate their tactile agility as their fingers dart around the curves and corners of the sculpture, and then leap up off of the object to announce a best guess to the group.
After a while, you begin to like people, or not. You begin to feel drawn into things. The meanings of utterances begin to be overdetermined and expectable, and this leads you to feel that you are in something and that you are not alone. People with stable sensory capacities take such things for granted, but for the participants of the pro-tactile workshops, recovering participant frameworks that allowed for the observation of others felt novel and thrilling. When everything was mediated by interpreters, utterances were dissociated from the authors that produced them, from the activity that preceded them, and from the kinds of affection, repulsion, and curiosity that grow only through watching, at close range, how people habitually interact with objects and with other people. On one hand, these embodied particularities and the concrete patterns they were subsumed by accrued to the indexical ground of reference. On the other hand, the very same embodied particularities began to be evaluated against new frames of social value. The former accrues to the structure of the deictic field, while the latter accrues to an emergent tactile habitus.
6.3 Conclusion
The reconfiguration of the deictic field did not transpire (primarily) by means of cognitive representation. An olfactory signature is not a cognitive representation, nor is a rhythmic field that subsumes the textures of gravel, marble, and brick as it moves over them. These are concrete patterns that subsume material elements as they go, not abstract concepts that represent them once and for all. Concrete patterns form pathways, forcefields, configurations, and trajectories, about which, and through which, shared knowledge can be produced; all of this contributes to the structure of the deictic field. These structures presuppose certain cognitive, perceptual, and motoric capacities, such as proprioception and olfaction. However, the transformation that gave rise to them can only be grasped by analyzing specific practices and the material clues that participants use to organize them.
In the next chapter, I continue to analyze pro-tactile communication practices in order to understand how deictic signs were transposed onto the new deictic field, calibrated to it, and created within it. I argue that this process constitutes a divergence in the deictic systems of TASL and VASL, and in the remaining chapters of the dissertation, I show how changes in the deictic system of TASL echo in the grammar, affecting multiple subsystems, ultimately leading to the emergence of a new, tactile language.
Chapter 7
The Deictic System of TASL
In the previous chapter, I argued that the deictic field of Tactile American Sign Language was reconfigured as a result of pro-tactile communication practices. This chapter examines the effects of that transformation on the deictic system of TASL. Unlike the deictic system, which is part of the grammar, the deictic field is organized by modes of access and the structures of participation that are built up around them. In order to use a deictic sign, the language-user must coordinate grammatical elements and relations with elements and relations organized by the deictic field. Coordination can be loose or it can be tighter and more restricted. The tightening of relations between linguistic and deictic elements, as a language develops, is what I call “deictic integration.” In this chapter, I identify deictic integration as a driving force in the grammatical divergence of Tactile American Sign Language (TASL) and Visual American Sign Language (VASL).
Integration is a type of “embedding.” Embedding describes a process whereby linguistic elements undergo “reshaping,” “conversion,” and “transformation” as values are retrieved from deictic and social fields (Hanks 2005a:194). Patterns of retrieval align the linguistic system with the fields it articulates to so that, as Bühler says, language is not “taken by surprise” when it encounters the world (2001 [1934]:197). Rather, the linguistic system acts like a network of receptors, which have been shaped by these patterns and are therefore set to receive certain field-values and not others.
Four mechanisms of embedding have been proposed: practical equivalences, counterparts, rules of thumb (Hanks 2005b) and integration (Edwards 2012). In the first three types of embedding, transformations affect the meaning of the sign, while the form remains constant. Integration, in contrast, accounts for cases where both form and meaning are transformed as they are embedded (See Section 1.2.3 in Chapter 1 for more on embedding). In this chapter, I argue that as new patterns of retrieval in a tactile field began to cohere, the deictic system was transformed. This is where the grammatical divergence of TASL and VASL begins.
In order to understand the scope of the phenomenon, as well as its projected implications, I begin by introducing three categories of signs that rely on a coordination of linguistic and deictic elements. They are: “pointing signs” (Section 7.1.1), “polycomponential signs” (Section 7.1.2), and “directional verbs” (Section 7.1.3). Once the deictic field was reconfigured, these categories of linguistic signs snapped to a new set of deictic coordinates, which triggered additional, language-internal effects. I identify three mechanisms driving this process: signal transposition, sign calibration, and sign creation. Signal transposition involves the transposition of handshapes onto the body of the addressee, yielding a tactually accessible ground. This process has phonological implications (see Chapter 9), but is driven by the coordination of the linguistic system and the deictic field. Sign calibration is an interactional process through which participants clarify and adjust signs which have lost their capacity to refer to objects in the immediate environment. DeafBlind participants calibrated signs intuitively in the flow of interaction when confusion, irritation, unresponsiveness, or requests for clarification arose. As a result of these procedures, signs grew new receptors for material clues, this time set to receive values via tactile coordinates. As this process was honed in the pro-tactile workshops, new rules for the formation of signs began to emerge and novel forms were created that would not be predicted given the grammar of VASL. I call this process sign creation.
In this and the following two chapters, I argue that these processes affect the internal organization of the deictic system of TASL, and they echo further into the grammar, affecting the phonology, morphology, syntax, and semantics of TASL. At TASL’s current stage of development, effects have only begun to manifest. However, given stable conditions in the social and deictic fields, a more comprehensive reconfiguration of the grammar appears inevitable.
7.1 Three Types of Deictic Signs in Signed Languages
Deictic signs do two things: name and point. Therefore, when a deictic sign is applied in the speech situation, it receives values from two distinct fields. Its naming or “characterizing” component receives values from the “symbolic field,” while the pointing, or “deictic” component receives values from the “deictic field.” All deictic signs are composite in this respect, composed of both “symbols” and “signals” (Bühler 2001 [1934]:99). In order to speak deictically, values from each field must be coordinated as the utterance unfolds. Together, these processes account for the definiteness and directivity of reference.
In signed languages, coordination of deictic and characterizing elements is often accomplished by directing characterizing elements, such as handshapes and their associated meanings, toward locations in the deictic field. There are three general categories of deictic signs in VASL: pointing signs, polycomponential signs, and directional verbs. In what follows, I show how each category of sign is affected by deictic integration in the Seattle DeafBlind community.
7.1.1 Pointing Signs
A pointing sign canonically involves directing a handshape like the one in Figure 7.1 toward an object of reference that is accessible to both speaker and addressee.1 Mutual accessibility can be established not only via perception, but also via memory, anticipation, imagination, or any other mutually accessible relation (Hanks 2005a). From the perspective of the language-user, directivity and definiteness of reference are easy to achieve because, as Bühler says, the deictic sign “can do nothing other than take advantage--naturally to a greater or lesser extent--of the possibilities the deictic field offers them” (2001 [1934]:145). In other words, the pointing sign does not abandon the addressee in a vast and unstructured space of potential. Rather, like a signpost positioned at a fork in a pathway, the pointing sign clarifies potential ambiguities in a field of already-limited possibilities (ibid.). The deictic system is part of the grammar, while the deictic field is part of “context.” In order to understand the effects of changes in the deictic field on the deictic system, the two must remain analytically distinct.
Figure 7.1: Pointing Handshape
From the perspective of the grammar of VASL, the pointing handshape in Figure 7.1 is a semantically minimal linguistic element containing a signal to direct one’s attention toward a definite object. Definiteness derives from the linguistic system. For example, in English, here is not there, I am not you, and this is not that. Each of these oppositions generates definite categories, which analyze objects and phenomena in particular ways.
In spoken languages, the deictic system is composed of discrete, oppositional categories, which encode highly schematic semantic distinctions. There is growing evidence that pointing signs in signed languages do too. It has been shown that pointing signs can act as determiners, demonstrative pronouns, anaphoric deictic elements, personal pronouns, and that they can be lexicalized as temporal deictics such as yesterday and tomorrow, and these different functions correspond to stable differences in form (Pfau 2011:148-151). For example, locative pointing signs and nominal pointing signs can be distinguished according to differences in the orientation of the handshape, the extension of the arm, and eye-gaze (ibid.). These differences contribute to the definiteness of reference, and they inhere in the linguistic system.
Directivity, on the other hand, derives not from the language, but from the deictic field. In the deictic field, we orient to pathways, grids, channels, and trajectories, which have settled out of patterns in activity. These structures are organized around particular modes of access and orientation, participant frameworks, and bodily configurations. We become habituated to those frameworks, and a hierarchy is established, which contains a “basic” level. These basic, maximally expectable participant frameworks are called “participant frames” (Hanks 1990:148). As particular frameworks become more expectable, certain bodily configurations that are associated with them also become more expectable.2 For example, users of VASL can communicate with one another while riding side by side on bicycles, sitting side by side in a car, or lying side by side in bed, but each of these bodily configurations requires adjustments and elaborations of a more expectable configuration, namely, standing or sitting face to face, about 3 to 5 feet from each other. This is not a “neutral context” but rather a basic bodily configuration, in the sense that it is assumed by participants on a habitual, motoric level as they move through interactions (Hanks 1990:151-2). Divergences from the assumed configuration require adjustment, elaboration, or compensation.
Participant frameworks contribute to the structure of the deictic field and when configurations become routine for participants, the grammar is not caught by surprise. Rather, it develops contextual receptors for values retrievable from those frameworks. For example, grammatical person categories in pronominal systems are set to receive values from participant roles in the deictic field according to particular relations that have emerged out of that field (Hanks 1990:148). Participant roles are organized by participant frameworks that incorporate particular bodily configurations, and in signed languages, those configurations become important for formal distinctions between pointing signs.
For example, in VASL, the pronominal system makes a two-way distinction between first and non-first person (Meier 1990:377).3 The first-person pronoun is produced with a pointing sign directed toward the signer and the non-first person form is produced with a pointing sign directed away from the signer. These formal characteristics align with a basic bodily configuration occupied by signer and addressee. When these signs are instantiated in the deictic field, they can be subject to momentary formal modifications. However, insofar as basic participant frameworks are in play, this two-way formal distinction in the pronominal system remains stable. In other words, the pronominal system in VASL has contextual receptors built in for basic bodily configurations, as opposed to actual bodily configurations. This is the difference between a pointing gesture and a pronoun in VASL: the former can retrieve a wide range of values from the deictic field, while the latter is set to receive a very narrow range of values (e.g. the obligatory selection of first person or second person forms). From this perspective it seems likely that pronouns, in VASL, have been derived from pointing gestures via deictic integration, leading to tighter and more restricted pathways for indexical retrieval.
There are many other types of pointing signs, which integrate linguistic and deictic elements in more or less restricted ways.4 At the far end of the spectrum, deictic elements can be caught up in and coordinated by the grammar in highly restricted ways, thereby taking on grammatical functions. Directional verbs, for example, integrate characterizing and anaphoric deictic elements to mark syntactic relations (see Mathur and Rathmann 2002 on directional verbs). The anaphoric deictic signs retrieve values from the anaphoric deictic field. However, once the values have been retrieved, they act like arguments of the verb, as opposed to referents. This type of deictic integration has been associated with the emergence of new languages (A. Senghas 1999, A. Senghas and Coppola 2001, Kegl et al. 2001) and language-like gestural communication systems (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985). In other words, as gestural communication systems become more grammatical, characterizing elements tend to “point more” (Meier and Lillo-Martin 2012:154).
This process, which leads characterizing signs to point more, is what I am calling deictic integration. The fact that deictic integration plays a significant role in processes of language emergence suggests that languages do not emerge by abstracting away from their contexts of use (Sandler et al. 2005:2664-5).5 Rather, new languages emerge as linguistic and deictic elements and relations are coordinated in tighter and more restricted configurations.
So far, we have examined the effects of deictic integration on pointing signs. In the next section, we examine the effects of deictic integration on “polycomponential signs,” which combine characterizing and deictic elements to form complex constructions.
7.1.2 Polycomponential Signs
Polycomponential signs also integrate characterizing and deictic elements; however, they do so in more complex configurations than pointing signs (Slobin et al. 2003, Quinto-Pozos 2007, Morgan and Woll 2007, Schembri 2003, also see Section 9.2 in Chapter 9). The semiotic status of polycomponential signs varies. At one end of the continuum, they are highly responsive to momentary dynamics in the deictic field, and at the other end, deictic elements are integrated in tighter and more restricted ways with the grammar so that only a limited set of values (which remain stable across contexts) can be retrieved.
In 2006, I conducted an interview with a Deaf Interpreter6 in Seattle, whom I will call Harli. At the time, Harli was working full time in the DeafBlind community and was known for his mastery of polycomponential signs in VASL, or “classifiers” in the local discourse. His analysis highlights the responsiveness of polycomponential signs to dynamics and relations that shape the deictic field.
The interview was part of a larger project, aimed at understanding how sighted interpreters and DeafBlind people worked together to gain access to the immediate environment. Like many other people I interviewed, Harli insisted on the importance of polycomponential signs in this context (see Edwards 2012). So I asked him why they were so important. He explained that, for example, “ASL has the sign water. But that’s just a word. Classifiers are different. They’re broad in scope, they can do anything, include anything . . . They’re wide open.” So I asked him for examples. He produced a sequence of polycomponential signs that might be used to talk about water:
There can be rolling waves, undisturbed stillness, the first ripples of a rowboat, the first tap of the oars, a watery surface breaking from beneath, concentric circles extending, reverberating. There’s sweat on the brow that forms relentlessly, no matter how many times you wipe it off, the accumulation of moisture, wetness. You can take a gulp of water from a glass or you can take a quick sip. You wipe moisture off of your face when you’re sweating. You can’t capture all of that with the word water, but you can with classifiers.
Figure 7.2: A Perfectly Still Body of Water
I have reproduced one small portion of this explanation in order to explore its composition. In Figure 7.2, Harli characterizes the surface of the water as flat. The b-handshape is a characterizing element that corresponds to a quality of flatness and/or rectangularity. The signer’s right hand extends out in front of his body, thereby attributing the quality of flatness to a broad surface. In this context, the handshape takes on a deictic function. It is transformed into a “reception signal” (Bühler 2001 [1934]:122).7 It causes the addressee’s gaze to turn in the sphere of the imagination, ready to receive particularities associated with the characterizing aspect of the signal. A lifetime of encounters with flat things--synthesized and distilled--flashes before the mind and a connection is activated between that and what is present to the senses. Unless it doesn’t.
Notice that the sign is produced directly under the eyes of the signer. The location of the hands relative to the eyes of the signer anchors the representation in a perspective.8 The possibility of embedding the b-handshape in the deictic field turns on the mutual accessibility of this perspective to both speaker and addressee, or a Schutzian “reciprocity of perspectives.”9 The representation in Figure 7.2 articulates to the deictic field of VASL and resemblance relies on an integration of the two. Given nonreciprocal perspectives generated by a difference in the structure of the deictic field occupied by speaker and addressee, the resemblance no longer holds and the sign no longer signifies.
Perspective is built up around orientation schemes and shared modes of access and orientation, which are, in turn, built up around sensory systems with certain capacities and limitations. If the reader is sighted, she will likely perceive a resemblance, or iconic relation, between the b-handshape and an undisturbed watery surface. However, there is no field that structures that connection for the grammar; indeed, “there is no pictorial field in language” at all (Bühler 2001 [1934]:220). Rather, linguistic elements are filtered through a series of requisite “barriers” or fields--syntax, morphology, phonology--and “it is only beyond this point that they display something like a secondary touch of a sound painting” (ibid.). Resemblance relies on the coordination of linguistic and deictic phenomena.
This same kind of coordination is enacted in the next segment of the polycomponential sign (Figure 7.2b). Here, the signer sucks his cheeks in and seals his lips, while holding his hands motionless on the same plane that was established in Figure 7.2a. The sucked-in cheeks combined with sealed lips are a recognizable and repeatable linguistic element, which contrasts with puffed-out cheeks and sealed lips. The former is associated with flat, thin, empty, or motionless things, while the latter is associated with thick, fat, full, or moving things. The placement of the hands near the eyes and the backward tilt of the signer’s head are not linguistic elements, but rather, contribute to the representation of a perspective. Perspective organizes the deictic field so that modes of access and orientation snap to a shared grid of overlapping coordinate structures.
Finally, the anaphoric deictic field often comes into play in polycomponential signs. Here, consistency in the location of the construction as a whole is maintained as it is built up sequentially: the signer links water to flatness, flatness to a surface, a surface to a lack of visible movement and depth. Without that first sign, water, there is no semantic clue that this is a watery surface, as opposed to some other--a concrete, nylon, or molecular surface, for example. Characterizing and deictic elements must be coordinated anaphorically as the polycomponential sign is constructed, and the anaphoric deictic field is constrained by modes of access and orientation shared across the group of language users. In a polycomponential sign like this, linguistic and deictic elements are loosely coordinated. They can easily be detached and rearranged, which is what gives language users the sense that they are “wide open” and “capable of anything.” Over time, though, certain combinations can become integrated with one another in more restricted ways, as is the case in some directional verbs.
7.1.3 Directional Verbs
The third type of deictic sign in VASL is directional verbs, or “verbs that point” (Meier and Lillo-Martin 2012). Directional verbs can be understood in contrast to “plain verbs” like love (Padden 1990:119). In the sentences “I love you” and “you love me,” love is produced in precisely the same way. give, on the other hand, is a directional verb. For the sentence “I give you the book,” the sign begins near the signer’s body and moves toward a location associated with the receiver. If there is more than one recipient, the sign will move from the body of the signer to a series of locations, marking the number of recipients involved. There are several different types of directional verbs, some of which are more like polycomponential signs in that they can retrieve a wider range of values from the deictic field. Some directional verbs, such as “agreeing verbs,” retrieve only a limited range of values from the deictic field. Agreeing verbs incorporate those values into the grammar in such restricted ways that their status as either “referents” or “arguments” becomes ambiguous.
7.1.4 The Problem
Every approach to directional verbs in signed languages encounters the same problem: how can symbolic and indexical elements be accounted for in a unified framework? For example, Klima and Bellugi, in their pathbreaking work The Signs of Language, appeal to an “indexic plane,” which extends out around the signer’s body as a kind of surface on which “target loci” are organized (1979:273-4). It is not clear, however, whether the indexic plane is part of the linguistic system or part of the extralinguistic context.
On the one hand, the indexic plane is part of “signing space.” Signing space is the space within which signs are produced (ibid.:51). It is organized internally by arbitrary distinctions and relations in the linguistic system (ibid.). On the other hand, loci within the indexic plane are determined by the actual positions of people, objects, and events in the immediate environment. For example, in the case of person reference, they claim that “[t]he actual positions of the signer and addressee determine the locations of their indexic loci in the indexic plane ... The same can be the case with objects and other individuals that happen to be in sight, though here other conventions also come into play” (ibid.:277). The indexic plane is then incorporated into polycomponential signs or “classifier constructions,” as well as certain classes of verbs in more or less obligatory ways. They explain:
In discourse that extends beyond the speaker, the addressee, and the here and the now, to objects, events, and persons not present, there are a variety of conventions for establishing indexical loci. The signer as narrator can use the indexic plane as a kind of stage on which indexical loci are created by indexic signs alone, or in conjunction with noun signs, or by positioning certain noun signs or classifier signs in particular locations on the indexic plane. Verb signs can move toward and between such loci and can be articulated at them, thereby expressing anaphoric reference. In addition, verbs can themselves establish indexic loci (and thus express differences in indexic reference). Such referential distinctions must be
incorporated into ASL verbs in specific sentential contexts. Thus *JOHN LOOK-AT-(ME), with a verb uninflected for referential indexing, is ungrammatical in ASL.
In other words, producing the VASL sign look-at in the direction of the addressee, and then tacking on the pronoun me is ungrammatical. The integration of the pointing sign is obligatory. Under this analysis, the indexic plane organizes linguistic elements in relation to the speech situation. However, the linguistic system also integrates deictic elements in restricted ways.
This tightening of the relation between the linguistic system and the deictic field, or deictic integration, results in what Klima and Bellugi call “indexical inflection” (1979:273-4). They list seven types of indexical inflection: reciprocal, number, distributional aspect, temporal aspect, temporal focus, manner, and degree (1979:273-4). Like inflection in spoken languages, these processes involve the modification of a root. Unlike inflectional processes in spoken languages, the root is modified by moving it toward locations in space. The locations to which they are moved are not discrete, listable forms. Therefore, despite their role in linguistic processes, they do not yield to linguistic analysis.
For example, the “uninflected” (or unmodulated) form of give is produced with an outward movement from the torso of the signer. In order to make the verb reciprocal, the movement is modified so it begins in a location away from the signer and moves toward the torso of the signer (Klima and Bellugi 1979:274). The same sign can be inflected for “distributional aspect” by sweeping it in an arc across the torso of the signer, stopping along the way at multiple loci (ibid.:276). The status of these locations, or “loci” as linguistic, non-linguistic, or some combination of the two has been a major source of debate in the field of sign language linguistics.
These problems are all rooted in a conflation of deictic and linguistic phenomena. In Klima and Bellugi’s work, it manifests as an ambiguity between “signing space” and the “indexic plane.” In signing space, syntactic relations are established between a verb and its arguments by moving the verb between discrete loci. On the indexic plane, deictic relations are established between a verb and its referents. So which is it? And how can a verb have referents? This is the problem. Rathmann and Mathur (2002) and Mathur and Rathmann (2012) identify three main approaches to this problem, which they apply to directional verbs. Each analysis presupposes an approach to deictic signs more generally, which can be productively compared to the notion of deictic integration that I put forth in this chapter.
The R-Locus Analysis
The first approach yields the “R-locus analysis,” which is short for “Referential Locus.” Mathur and Rathmann sum up this approach as follows:
In this analysis, each noun phrase is associated with an abstract referential index. The index is a variable in the linguistic system which receives its value from discourse and functions to keep the referent of the noun phrase distinct from referents of other noun phrases. The index is realized in the form of a locus, a point in signing space that is associated with the referent of the noun phrase. This locus is referred to as a ‘referential locus’ or R-locus for short (2012:140).
The location of the entity with which the verb “agrees” (the R-locus) is a formal manifestation of an abstract variable, which is associated with, but not identical to, a referent. It is not the actual location of the referent that is listed in the grammar, but the abstract, underlying category.
In the sentence “Jayne gave Bob (something),” the signer finger spells j-a-y-n-e and then localizes jayne in space by pointing to “R-locus (1).” The signer then finger spells b-o-b and localizes bob by pointing to R-locus (2). R-locus (1) is clearly distinct from R-locus (2) (See Figure 7.3a). In Figure 7.3b, the verb give moves from R-locus (1) to R-locus (2). The NPs jayne and bob are represented by loci, which are kept distinct from one another. As the discourse unfolds further, those loci can be referenced again, without explicitly identifying them with their associated NPs. Therefore, the R-locus is referential in the sense that it derives its value from the anaphoric deictic field, or what Mathur and Rathmann call “discourse.” However, insofar as those loci “represent” their associated NPs, they also establish syntactic relations between the verb and its arguments.
Figure 7.3: Referential Locus
From a practice perspective, “R-loci” are anaphoric deictic elements, which have been caught up in and coordinated with the syntactic system of the language. In other words, they have undergone deictic integration. Since deictic integration is a bi-directional process, this also means that the grammar has grown more dependent on the anaphoric deictic field to express syntactic relations. This dependence is unavoidable from a linguistic perspective because there is no way of restricting possible coordinates for the loci, and therefore no way of listing them as discrete, repeatable elements.
This problem is solved in the R-locus analysis by positing an abstract linguistic variable, which is associated with formally non-specific loci. The signer can point anywhere, as far as the grammar is concerned, as long as the NPs can be identified and kept distinct, via their anaphoric proxies (Mathur and Rathmann 2012:140). In a practice approach, these pointing signs are constrained not by the grammar, but by modes of access and orientation, as well as the participant frameworks, participant roles, and bodily configurations that become conventional within those constraints. These constraints cohere in the deictic field, not in the language, and yet, in order to produce a coherent and comprehensive theory of VASL syntax, the deictic field of VASL must be taken into account. In this approach, abstraction is not necessary. Instead, a lateral process of integration accounts for the interdependence of the syntactic system and the anaphoric deictic field.
The second approach to directional verbs identified by Rathmann and Mathur (2002) and Mathur and Rathmann (2012) is the “featural analysis.” This approach, like a practice approach, posits rules for coordinating semiotically distinct elements in restricted ways. Unlike a practice approach, the analysis relies on “gestural space” which is conceived of as a mental space. In a practice approach, the relevant construct is the deictic field. The deictic field is an historically emergent configuration of participation structures, built up around shared modes of access and orientation. It is not defined negatively with respect to language, i.e. it does not contain everything that linguistic principles cannot account for. Rather, it is governed by its own, deictic principles of organization. Since deictic principles organize historically emergent fields of activity, and are constrained by physical capacities and modes of orientation, they are not reducible to universal cognitive principles. Therefore, while cognition is clearly involved, the deictic field is not reducible to a “mental space.” Nevertheless, in the following section, I argue that a synthesis of the featural analysis with a practice approach is a useful and promising endeavor.
The Featural Analysis
Rathmann and Mathur argue that any approach to spatial or agreeing verbs must address, more explicitly, the interface between gesture and language (2012:144).10 Gesture inheres in “gestural space,”11 which interfaces with grammar, but is not included in it. Gestural space and grammar are both mental constructs. The former is relatively unstructured and the latter is highly structured. With this as the starting place, the following problem is immediately encountered:
[T]he linguistic system cannot directly refer to areas within gestural space (Lillo-Martin/Klima 1990; Liddell 1995). Otherwise, one runs into the trouble of listing an infinite number of areas in gestural space in the lexicon, an issue which Liddell (2000) raises and which Rathmann and Mathur (2002) describe in greater detail and call the listability issue. For example, the claim that certain verbs ‘agree’ with areas in gestural space is problematic, because that would require the impossible task of listing each area in gestural space as a possible agreement morpheme in the lexicon (Liddell 2000) (cited in Mathur and Rathmann 2012).
Mathur and Rathmann (2012) argue instead that for a subset of directional verbs, which encode number and person (“agreeing verbs”), the NP is marked with a finite set of person and number features.12 The verb agrees not with all aspects of the conceptual representation of the referent, but only the finite set of features that are linguistically significant (i.e. person and number).
For a sign like give, the first person form is specified phonologically for a location near the torso of the signer. The non-first person forms are realized via a “zero morpheme” which is then paired with a deictic gesture as it is realized. Via an interface between “spatio-temporal conceptual structure” and “the articulatory-phonetic system,”13 the form of the sign undergoes a phonological readjustment process called “alignment” where an abstract geometrical relation between elements is pre-given in the syntactic structure, a vocabulary item is inserted, and a phonological readjustment rule is applied to bring the abstract geometric coordinates in line with phonological and phonetic constraints in the language. This process generates the specific form of the verb, including directionality, but also orientation and other small variations in form that are attested in agreeing verbs (Mathur 2000:38-9).
The featural analysis is consistent with a practice approach in the sense that semiotically distinct phenomena are distinguished, establishing a firm boundary between grammatical and contextual phenomena. These elements are then coordinated, or “aligned,” as they are instantiated via a phonological readjustment rule. This is a rule-governed, grammatically determined version of “embedding.” Via embedding, linguistic elements also undergo reshaping, conversion, and transformation as values are retrieved from non-linguistic sources (Hanks 2005a:194). Over time, patterns of retrieval align the linguistic system with the fields it articulates to, so that language is not “taken by surprise” (Bühler 2001 [1934]:197). Rather, the linguistic system grows receptors (cf. “zero-morphemes”), which have grown sensitive to these patterns and are therefore set to receive a more restricted set of field-values (e.g. highly schematic person and number values).
Agreeing verbs have undergone a process like this. This tightening of linguistic and deictic relations, into more restricted configurations, is what I am calling deictic integration. Another example of a verb that has been formed via deictic integration is look-at. look-at-you is produced with a directional movement toward the addressee, while look-at-me is produced with a directional movement toward the signer. At this point in its diachronic development, the verb look-at has a deictic receptor that requires the signer to retrieve one of a limited set of values in the deictic field. These values look more like grammatical person categories than those retrieved by polycomponential signs, since there is a restricted set of alternating values, one of which must be selected. However, this shift toward more language-like semiosis does not imply a “loss” of indexicality. Rather, it is a tightening and restriction of possible relations between the linguistic system and the deictic field.
In a practice framework, the emphasis is (not surprisingly) on the determinate effects of practice, rather than the determinate effects of grammar. Nevertheless, the processes that account for the alignment of language and context in the featural analysis and a practice approach are not contradictory; they are complementary, and a synthesis of the two is promising.14 Such a synthesis would involve, first, replacing “gestural space” with the deictic field,15 the former a relatively unstructured mental construct governed by universally applicable cognitive principles, and the latter, an internally complex contextual construct, governed by deictic principles. Second, the “zero morpheme” would be replaced with a contextual receptor, primed to receive a restricted set of values from the deictic field. In other words, the NP would be marked by way of deictic integration.
The Indicating Analysis
In contrast to the featural analysis, Scott Liddell argues that the “locus” does not need to be treated as a linguistic element that is specified phonologically and stored in the lexicon as a distinct morpheme at all (Rathmann and Mathur 2002:375). Instead, he says, it should be treated as a conceptual representation of spatial relations in the world. In defense of this claim, Liddell points out that give-to-a-tall-person would be directed higher in the signing space, whereas give-to-a-child would be directed lower, relative to the body of the signer. These verbs, then, are best described as being directed to entities in “mental spaces” and not to linguistic loci, specified in the grammar. Therefore, Liddell calls this class of verbs “indicating” verbs rather than “inflecting” or “agreement” verbs. However, any sign can be modified as it is instantiated in the deictic field (Edwards 2012:52-60). The question is whether the verb is momentarily sensitive to a particular dimension of context, or if it requires retrieval of a particular value, which remains stable across contexts. In the former case, linguistic and deictic elements are merely coordinated. In the latter case, they are integrated.
Deictic integration makes something like “indexical inflection” possible, since deictic elements can become integrated with syntactic structures in highly restricted ways. This returns us to Klima and Bellugi’s initial analysis, but with a more principled way of accounting for the linguistic and non-linguistic dimensions of the process. Under this perspective, the featural and indicating analyses are more consistent with one another than they would otherwise appear to be. However, the indicating analysis extends further into the language-external world, and in the process, reveals certain key distinctions between cognitive and practice approaches.
In a cognitive framework, pointing signs that function as pronouns and directional verbs are both directed at elements in what Liddell calls “real space” (1995, 2003:81-7). Real space is “a person’s current conceptualization of the immediate environment based on sensory input” (Liddell 2003:82). In real space, people treat objects as if they were real, so that a conceptual entity is “treated as a real physical entity, having all the physical properties of the physical entity, including being located at a particular place in the immediate environment” (ibid.). Using a book as an example, Liddell emphasizes the distinction between real space and physical space:
The physical book is not part of real space since real space only contains conceptual entities. The real-space book is an internal representation of the book conceptualized as being external to me. Fortunately, the locations of physical entities and the corresponding conceptualized locations of real-space entities generally overlap. That is, I reach toward the book as conceptualized in real space. Years of experience give me confidence that I will encounter a physical object there (ibid.:83).
Under this analysis, directional verbs are constrained by cognitive capacities that enable us to make functionally adequate, mental replicas of our physical surround, and point at elements situated in those replicas. These capacities are universal, so real space is guaranteed to be reciprocal for speaker and addressee (ibid.:86). Therefore, the person speaking deictically is “in a position to be of assistance in terms of providing clues that will help identify the real space entities being discussed” (ibid.). In cases where cognitive and perceptual schemes align, this works out well. However, where cognitive and perceptual patterns diverge, as is the case for people whose sensory orientations shift, problems arise.
Among DeafBlind people, real space and physical space do not align. Under these conditions, each of Liddell’s assumptions, which undergird his analysis of directional signs, becomes a research question: How do objects and relations in the immediate environment get incorporated into conceptual representations? How are they linked with linguistic and deictic elements in the language? How can sensory orientations become stable across a group of language users, allowing for a reciprocity of perspectives? How is pointing guided by these shared orientation schemes and modes of access? The answers to these questions require attention to a broader range of phenomena, viewed through a broader range of analytics.
7.2 A Practice Approach to the Deictic Systems of Visual and Tactile ASL
In all three approaches given above, the analysis begins and ends in conceptual and/or linguistic representations, which maintain a non-problematic relation to the external world. For Liddell, there is no analytic advantage in separating cognitive representations from the things they represent, since “[i]n general, real space lines up well with physical things in the world” (Liddell 2003:84). Real space is, for all intents and purposes, a copy of physical space. Among DeafBlind people, links between cognitive and linguistic representations, on the one hand, and experience on the other, are disjointed. The project of realigning them is a practical one, constrained by socio-historical and interactional processes, which are not reducible to, or best understood as, cognition.
In a practice approach, these problematic relations must be approached at the outset by examining the historical development of orientation schemes, which are built up around a particular habitus in a particular place and time (Section 6.1 in Chapter 6). From there, structures of interaction, such as participant frameworks and the bodily configurations they incorporate, conventional turn-taking, attention-getting, and back-channeling mechanisms, must be brought into alignment with the socio-historically given habitus.16 That is to say, interaction is constrained by socio-historical dynamics. If touch is a highly restricted modality in the social field, for example, it will not be drawn on in the development of new interactional practices.
All of this shapes and constrains the deictic field of any particular language. Therefore, the deictic field is not reducible to a conceptual representation of the immediate environment, nor is it unstructured physical space. It is organized around and constrained by shared modes of access and orientation that emerge under particular social and historical circumstances. This does not contradict the fact that representations of physical space are constrained by the universal cognitive capacities of humans; it is a complementary fact, which can account for the alignment of “real space” and “physical space,” not as a given, but as an outcome of ethnographically discoverable processes.
Moving this way from the social field to the deictic field to the linguistic system, it becomes clear that there are mutual dependencies between linguistic, cognitive, and deictic principles in directional verbs and other deictic signs. The grammar does not simply retrieve values from the deictic field; it is shaped by it. And as grammatical and deictic elements are coordinated with one another in tighter and more restricted ways, semiosis becomes more language-like.
In the next section, I show how the deictic system of TASL was transformed as values were retrieved from a new, tactile deictic field. I identify three interactional mechanisms through which this transformation took place: signal transposition, sign calibration, and sign creation. Signal transposition involves a transposition of handshapes onto locations on the body of the addressee, yielding a tactually accessible ground. Sign calibration is a process through which participants intuitively adjust signs that have lost their referential capacity. As this process is honed, new rules for the formation of signs are generated and novel forms are created that would not be predicted given the grammar of VASL. I call this process “sign creation.”
7.2.1 Signal Transposition
Signal transposition is a type of deictic transposition, or a “displacement or alteration of the indexical ground of utterances” (Hanks 1990:197). For example, in quoted speech, the pronoun “I” can, and often does, refer to someone other than the speaker, as in the sentence, “You said, ‘I don’t want any’ ” (ibid.). In this example, the formal element “I” is projected onto a displaced plane by placing it after the phrase “You said.” This is an example of a deictic transposition. In signal transposition, the formal element, which is the handshape, is projected onto a displaced physical plane, which is the body of the addressee. As the deictic field was reorganized along tactile lines in the Seattle DeafBlind community, signal transposition emerged as part of a broader figure/ground shift in the immediate environment. It is an interactional process; however, it has linguistic consequences.
Prior to the pro-tactile movement, deictic signs were produced as they would be in VASL. That is to say that they were directed toward referents situated in the deictic field of VASL. Visual access to the immediate environment was assumed, as were visual memories, and the capacity to imagine visual relations and dynamics.
From the perspective of a tactile person, attuned to the tactile dimensions of setting, a pointing sign like the one in Figure 7.4 is uninterpretable in two respects. First, the sign launches a trajectory against the visible backdrop of the signer’s body and other visible dimensions of context; if the context is not visually accessible, the trajectory will be abstract. Second, the sign articulates to the deictic field of VASL, which requires visual access and modes of orientation. Without access to that field, reference will be more difficult to resolve.
Figure 7.4: Tactile Reception of VASL Pointing Sign
The solution to these problems was twofold. First, DeafBlind people established a deictic field, which was accessible to anyone who cultivated tactile sensibilities and modes of orientation. This structured the space within which pointing signs are directed. Second, the sign itself was transposed onto the body of the addressee. For example, in Figure 7.5, the signer has just established a correspondence between the palm of the addressee and the United States.17 She then points to a location on the palm of the addressee in order to locate a specific state in relation to the rest of the country. This is an example of pointing in an anaphoric deictic field organized along tactile lines. Just as VASL users establish locations in the space in front of the signer and then refer back to them as the discourse unfolds, TASL signers establish locations on the body of the addressee and refer back to them as the discourse unfolds. While this change is motivated by changes in the deictic field, it has implications for the internal organization of the deictic system of TASL.
The deictic system of TASL is new. However, given the changes that have taken place in the deictic field, further developments are expectable.

Figure 7.5: A Transposed Pointing Sign

First, pointing signs in visual signed languages are distinguished from one another by differences in the orientation of the handshape, the extension of the arm, and eye-gaze patterns (Pfau 2011:148-151). All of these formal mechanisms for language-internal distinctions require visual access to the ground of sign production. The orientation of the handshape is only accessible if the visible backdrop of the body is accessible; the extension of the arm is only accessible if the addressee has access to the whole arm; and eye gaze patterns require visual access as well. None of these mechanisms are likely candidates for marking linguistic oppositions, given a tactile habitus in a tactile deictic field. Instead, some dimension of the tactually (as opposed to visually) accessible ground should be recruited to distinguish pointing signs from one another. In its current state of development, these distinctions have not settled into formally stable, contrastive patterns. However, a key question for further research is whether or not tactile forces on the body of the addressee might be recruited for these purposes.
For example, will signers distinguish nominal and locative points by using different and distinguishable amounts of pressure on the body of the addressee? Will proximal and distal meanings be distinguished via differences in movement, for example, a tracing, linear movement versus a punctual movement? My experience using TASL in its early phases of development has led me to these intuitions, and in future research, after the system has developed further, I plan to pursue these questions. For the time being, it is clear that TASL signers are transposing deictic signs onto a tactually accessible ground. This is putting pressure on constraints at the phonetic and phonological levels, as new places of articulation are incorporated into “signing space.”
For example, in Figure 7.6 pointing signs are produced on the arm and chest of the addressee to mark relative spatial relations between locations. The locations were associated with cities in the world in prior discourse. This process of establishing temporary correspondences is structured by the anaphoric deictic field. The anaphoric deictic field is not a free-floating, empty space, nor is it a product of a single interaction. It is constrained by modes of access and orientation, which outlast any one encounter. The only locations that can be admitted into the tactile anaphoric deictic field are those that can be identified and distinguished from each other against a mutually accessible ground. Practices for establishing an anaphoric deictic field had to be developed in the pro-tactile workshops. These practices involved deliberate tactile explorations of the objects at hand, through which participants gained reciprocal access.
An example of this is the napkin-folding exercise led by Adrijana, which involved learning how to do a “pocket fold.” The explicit aim, according to Adrijana, was to demonstrate that DeafBlind people are not slow learners, as many of them had come to believe. Rather, sighted people are bad at explaining things from a tactile perspective. With each student, she used specific examples to illustrate their speed and ability in learning a new task when the task was explained to them “in the tactile way.” In the terms being developed here, Adrijana was replacing the deictic field of VASL with a new field, organized along tactile lines. Deictic signs were transposed onto a tactile ground as part of this broader transformation, which increased coherence between the deictic system of the language and the field to which it articulates. Indeed, DeafBlind people were much faster learners when their language and the contexts of its use were aligned.
Figure 7.6: Transposed Pointing Signs
Linking the language to the deictic field was accomplished slowly over the course of many interactions like the following. In Figure 7.7, Adrijana guides Hank’s hands to the napkin. From there, she puts her hands flat on top of the napkin (Figure 7.7a) and then she slips her hands out from under Hank’s, so he has direct access to it (Figure 7.7b). In Figure 7.9, Adrijana re-folds the napkin, places it back on the table, and presses it down with both hands, making sure the edges are lined up. In Figure 7.9a, Hank follows Adrijana’s hands and his fingers are in a position where the movements of her fingers are perceptible. In Figure 7.9b, Adrijana places the napkin back onto the table, and Hank’s fingers slip off of hers to touch the napkin. In Figure 7.9c, Adrijana flattens her hands out and smoothes out the napkin, pausing at each corner to feel that the layers are stacked directly on top of one another. Hank’s hands follow Adrijana’s, so this sequence of actions draws his attention to the rectangular shape of the object. In Figure 7.9d, Adrijana, once again, slips her hands out from under Hank’s so he can explore the object further on his own.
No linguistic signs are exchanged in this sequence. However, each move is important for establishing a structured, mutually accessible space within which deictic reference can be accomplished. Attention has been drawn to the edges of the napkin, the distances between corners, and therefore, the overall shape of the object. Attention has also been drawn to the multiple layers, folded over one another, the texture of the material, and whatever other qualities present themselves in the course of Hank’s exploration. This kind of sequence, where reciprocal access to the referent was established, and particular aspects were foregrounded, became an expected prerequisite to acts of referring.

Figure 7.7: Adrijana draws Hank’s hands to the object

Figure 7.8: Adrijana picks up the napkin

Figure 7.9: Adrijana re-folds and flattens napkin so edges are lined up
Figure 7.10: Adrijana directs Collin’s attention to the pocket
Once access to the object is established, characterizing signs are used to individuate aspects of the object, linking those aspects to other objects and to categories in the language. For example, in the following sequence, Adrijana embeds the sign pocket in the deictic field, and in doing so, links it to two pockets in the immediate environment: the one on Collin’s shirt, and the one they have just created by folding the napkin. The interaction begins the same way that Hank and Adrijana’s interaction began--by establishing reciprocal access to the object. Adrijana then folds the napkin into a pocket, while Collin follows along tactually, his hands on top of hers. Then, in Figure 7.10, Adrijana draws Collin’s attention to the pocket she has just created by using a flat-handed pointing sign (Figure 7.10a), followed by the sign feel (Figure 7.10b), followed by the sign pocket.
Figure 7.11: Collin reaches into the pocket of the napkin
In Figures 7.11a-7.11b, Collin responds by reaching up toward the top part of the pocket in the napkin. In Figure 7.12, Adrijana and Collin link the pocket on the napkin to the pocket on Collin’s shirt. In Figure 7.12a, Collin signs pocket. In Figure 7.12b, Adrijana signs pocket on Collin’s shirt and finds an actual pocket there, at which point, she slips her hand into his pocket while signing pocket. Collin smiles and tilts back his head. In Figure 7.12c, Adrijana grabs the edge of Collin’s pocket, pulls it out, and lets it snap back against his body in Figure 7.12d. In Figure 7.12e, Collin emphatically signs understand.18

Figure 7.12: The pocket on the napkin is linked to the sign pocket and to the pocket on Collin’s shirt
In this example, you can see the migration of the language toward the coordinates of the deictic field. Not only are deictic signs directed at mutually accessible dimensions of the object, but the characterizing sign pocket is also transposed onto the body of the addressee. Everything is shifting to a tactile ground, including the sign itself. In other words, along with a shift in orientation to the immediate environment, the signal, generated by the grammar and subject to its constraints, is also affected. The movement and location parameters of the sign have changed so that all that remains from VASL, post-transposition, is the handshape. This example shows that signal transposition is just one part of a broader shift in the indexical ground of utterance, and yet, there are consequences for how signs are produced and received, which, as we will see in the following chapters, echo in the grammar in arbitrary ways. In the next section, signal transposition is taken a step further, so that aspects of the handshape are modified as well. These modifications help signers establish coherent relations between the linguistic system and the deictic field.
7.2.2 Sign Calibration
During the pro-tactile workshops, participants transposed signs onto the body of the addressee, but they also calibrated signs to multiple dimensions of the deictic field, leading to greater divergences between TASL and VASL. Sign calibration is an interactional process, through which a linguistic element or process is transformed as deictic relations are incorporated. One activity that elicited sign calibration at greater rates than other activities was called “the object game.” In this game, dyads were given a bag full of objects--things like old cell phones, toy snakes, and tea strainers--and they were asked to describe one in detail. When they were done, they handed the object to their partner, who explored it tactually, and then evaluated the description in terms of how well it prepared them for the qualities of the object, or in the terms of the game, whether or not the description “matched” the thing. Lee, one of the instructors of the workshops, explained the game to two participants as follows:
The point of this game is not to guess what the object is based on its function. A function-based explanation would be like this: The first person says: 'It's something you pour hot water through to make tea or coffee,' and the second person says: 'Oh! I know! It's a filter!' Instead of that, what I want you to do is find a way to describe the tactile qualities of the specific object--textures, patterns, bumps, etc.--and then decide if the description matches or not.
Participants all started out using VASL polycomponential signs for this task. However, these forms often led to frustration, blank stares, confusion, and eventual requests for intervention on the part of the instructors. Lee intervened in these cases and introduced new constructions, which were calibrated to the relevant and accessible dimensions of the object from a tactile perspective. In contrast to the VASL constructions, these new, TASL signs elicited memories, questions, and/or expressions of understanding (e.g. "Oh! I see!" or "I get it!" or laughter while signing "Yes").
Figure 7.13: The Measuring Tape
The following series was taken from an interaction between Nina and Allen, where polycomponential signs from VASL failed to prepare the recipient for the relevant and accessible qualities of a measuring tape, like the one in Figure 7.13. Nina begins her description by combining a b-handshape with a bent-b-handshape, as in Figure 7.14, and repeats this sequence once. This characterizes the shape of the object as rectangular in a way that would not be surprising for users of VASL.
Figure 7.14: Nina specifies a rectangular shape
Then, in Figure 7.15, Nina describes what is typically done with an object like the one she is describing. First, in Figures 7.15a-7.15b she pulls the imaginary tape out of its base on a plane that is horizontal relative to her torso (as if she is measuring a table). Her mouth is pursed here and she is blowing out air through partially closed lips to create a flapping movement. In VASL this kind of mouth movement has been analyzed as having morphemic status (e.g. Frishberg 1975, Liddell 1980). However, Allen does not have perceptual access to Nina's mouth. In Figures 7.15c-7.15d, Nina repeats the previous sequence, but this time she pulls the tape out on a vertical plane rather than a horizontal plane (as if she were measuring a wall instead of a table). In Figures 7.15e-7.15i, Nina signs one, two, three, four, five, from left to right along the path that had previously been associated with the measuring tape as it comes out of its base. Finally, in Figure 7.15j, Nina signs inch.(19)
After Nina’s initial description, Allen tells her he doesn’t understand and she responds by starting over. At this point, frustration is mounting. These kinds of tense interactions were common prior to the pro-tactile movement. One of the strategies that some of the most experienced and skillful interpreters used in cases like this was to draw on their extensive knowledge about the life history of the DeafBlind person they were communicating with. They would look for a past experience they could use as a jumping off point for description and in the way, fill in the ground of reference. This was a way of compensating for the absence of an accessible deictic field, including not only perceptible objects in the immediate environment, but also shared knowledge and “common sense” (Hanks 1990). If Nina knew that in highschool Allen used to make birdhouses for fun (this is hypothetical), she might start out by saying, “Do you remember in highschool when you used to make birdhouses? Explain to me how you did it.” Then at some point, Allen would get to the part where he measures the wood, and Nina would ask him to describe the thing that he used to measure the wood.
There are two problems with this approach here. First, it became evident as the pro-tactile classes went on that because interaction had been so heavily mediated by sighted people,
Figure 7.15: Nina’s First Description using VASL Polycomponential Signs
DeafBlind people didn’t actually know very much about each other, and definitely not the kind of detailed information that would allow them to trigger specific memories. Second, Allen has been blind for many years. Even if he does remember the birdhouses, he may not remember how he measured the wood, let alone the physical details of the instrument he used for measuring. There is only so far that visual memory can take you, and when it runs out, a piece of the indexical ground of reference erodes.
Faced with these challenges, Nina tries again. She starts this time by appealing to the more general category “tool” (Figure 7.16). She signs tool and then uses a combination of a b-handshape (in Figure 7.16a) and a bent-b-handshape (in Figure 7.16b) to describe the rectangular shape of the object. In Figure 7.17, she continues by describing the way one typically uses a measuring tape, by pulling it out of its base (Figures 7.17a-7.17b). Then Nina signs table and in Figures 7.17c and 7.17d specifies the size and shape of the table using a b-handshape and a bent-b-handshape respectively. Then she repeats her representation of a person pulling the tape out of its base. Finally she signs inch, fingerspells i-n-c-h, and starts to repeat the sequence in Figures 7.15e-7.15j--“one, two, three, four, five,” but she is interrupted by Lee, who joins the interaction.
Figure 7.16: Nina’s Second Attempt
Nina’s attempt to describe the measuring tape involved a familiar procedure for users of VASL. First, she establishes a geometric shape (a small rectangle). Then she moves to how it is handled and for what purpose--you pull out the tape from the base, and measure things like tables with it. The description assumes that the rest can be filled in. In the pro-tactile workshops, it became clear that polycomponential signs like this had to be produced with the expectation that the addressee could not fill in the rest. Therefore, singers began to
Figure 7.17: Nina's Second Attempt, Continued
Figure 7.18: TASL Representation of a Rectangular Shape ((c) Adr's Hand; (d) Signer's Hand)
Figure 7.19: TASL representation of width (g-handshape on thumb) ((d) and (e): Adr's thumb, Signer's fingers)
include far more detail, and the details were more specific to the actual object of reference, as opposed to the general category to which it belonged. However, Nina was unable to do this in a way that Allen could understand. Tensions between Nina and Allen grew.
Eventually, Lee intervenes in the interaction and asks what the problem is. Nina tells her that she has already tried to explain that the object is a rectangular tool used for measuring that has a tape that is wound up and measures by the inch as you pull out the tape. She essentially repeats what she had already said twice before to Allen. She says with frustration that Allen doesn’t understand. The problem here is not only that Allen is having difficulty perceiving the formal properties of the signs; signal transposition alone would remedy that. The problem also stems from asymmetrical access to visual memories and visually derived knowledge. Allen doesn’t know what a measuring tape is and Nina can’t imagine that this is the case.
In order to address both problems, Allen must have tactile access to the object, learn about its material properties, its physical functionality, and its typical uses. The signs used to draw attention to these aspects of the object must be perceptible and they must articulate to mutually relevant and accessible aspects of the object. When Lee intervenes, she calibrates her description to these parameters(20).
Like Nina, she begins with the shape of the object. However, rather than using the b and bent-b hand configurations, she draws a rectangle on Allen's palm with her index finger (Figure 7.18a). She then repeats this on Nina's palm (Figure 7.18b). Schematic representations of this sign are given in Figures 7.18c and 7.18d. This sign establishes relative spatial locations on the tactually perceptible ground of the addressee's hand. In VASL it is possible to trace a rectangular shape in the space in front of the signer, using the non-dominant finger as an anchor for relative spatial relations. Signs like this have been analyzed as "size and shape specifiers" (Schick 1990; Engberg-Pedersen 1993; Aronoff et al. 2003:67; Schembri, Jones and Burnham 2005), which fall under the broader category of polycomponential signs. Generating the TASL sign in Figure 7.18 follows the same general pattern as generating size and shape specifiers in VASL; however, it embeds the conventional pointing handshape in a different deictic field. As we will see, this has further consequences.
In Figure 7.19, Lee describes the shape and size of the tape that pulls out from the base. In Figure 7.19a, she signs with, indicating that what she is about to describe is a part of the object as opposed to the entire object. She then traces the length of Allen's thumb with a g-handshape, moving her thumb and index finger up, down, and back up (Figure 7.19b). She repeats this motion on Nina's thumb in Figure 7.19c. These signs are represented schematically in Figures 7.19d and 7.19e. This is a way of establishing the width of the object, without specifying the length or the overall shape. She uses it here to characterize the width of the tape that can be pulled out of the base of the measuring tape. Lee does this by repeatedly tracing the outer edges of the addressee's thumb, refraining from adding perpendicular lines of any kind.
A g-handshape, like the one used in this sign, was also used in the comparable VASL construction to describe the relatively narrow shape of the tape measure. In the comparable VASL polycomponential construction, there was also a sign that represented the way the object is typically handled, which includes some information about its shape. After that, the focus was on the numbers marked on the measuring tape, and then the description ended with the sign measure.
In the TASL example, corresponding parts of the construction have been transposed onto the body of the addressee. This requires a modification of the movement and location parameters of the sign. In addition, in this example, handshapes have also been modified as they articulate to a mutually accessible, tactile ground. The b/bent-b handshapes that Nina used in the VASL example were replaced by a pointing sign, which was used to trace a shape on the addressee's palm. Instead of describing the measuring tape in the space in front of the torso, the signer traces the shape of the addressee's thumb (as in Figure 7.19), making several tracing movements, one after the other. This differs from the corresponding VASL sign, which incorporates a single movement that extends all the way across the space in front of the signer's torso, mapping the shape of the tape onto its trajectory when it is pulled out. In the TASL example, the shape and the trajectory are separated out and there is no spatial redundancy between the two path movements. Finally, the numbers on the measuring tape are not marked in Braille, so they are not relevant given tactile modes of orientation and access. Therefore, they are not incorporated into the TASL sign.
These changes are a result of a principled shift in the organization of the deictic field. This broader transformation led TASL signers to transpose signs onto the body of the addressee. However, this led to further changes in how polycomponential signs were constructed. Not only were the signs altered to make them more perceptible, they also incorporated different dimensions of the objects they represent. In other words, there are new rules emerging for generating polycomponential signs in TASL, which can be expected, over time, to have phonological and morphological implications. I am calling the interactional process contributing to this divergence "sign calibration." Signal transposition and sign calibration, which are both driven more broadly by deictic integration, are also having further effects on the internal organization of deictic signs in TASL. In order to capture these effects, I introduce a third and final term: "sign creation."
7.2.3 Sign Creation
Sign creation involves signal transposition and sign calibration, but goes further, allowing new kinds of signs to be created that would not be predicted or permitted by the grammar of
VASL. Sign creation gives rise to forms that are far more predictable from the perspective of the deictic field of TASL than they are from the grammar of VASL. In the previous sections, changes in the production and reception of signs were linked to a broader reconfiguration of figure-ground relations in the immediate environment. In this section, I argue that as signs are calibrated to those relations, novel possibilities for the production, reception, and derivation of signs arise.
Figure 7.20: Snake Sequence (Lee describes the shape of the snake’s body)
In Figure 7.20, Lee is describing the shape of a toy snake's body. First, she grabs Manuel's right arm and rotates it so his palm is facing down and pulls it back and up near the top of her head. Then, she cups her hand around his arm (see Figure 7.20a), and traces a line from the wrist (Figure 7.20b) to the armpit (Figure 7.20c). Then, in Figure 7.21, she describes the way the snake's body moves. She does this by gripping Manuel's arm just below the armpit and keeping hold of his wrist. Then she moves each point of contact alternately to produce a snake-like motion in his arm. There is nothing in the grammar of VASL that would predict or allow a form like this.(21) However, it is expectable from the perspective of
Figure 7.21: Snake Sequence (Lee coaxes Manuel’s arm into a snake-like motion)
the deictic field of TASL, and it has grammatical consequences. Manuel’s arm is not just a surface on which signs are produced; he must use his arm to actively participate in producing signs. This requires a kind of motor coordination between the signer and the addressee that is never required of visual signed language users. In addition, if TASL signs can be derived by drawing on the addressee as a source of actively articulated, meaning-bearing forms, this presents the signer with new morphological possibilities.
These new ways of generating signs emerged out of the pro-tactile workshops as a way of linking the language to context. Participants did this by tacking back and forth between the objects they were describing and the signs used to describe them, tightening relations between the linguistic system and the deictic field as they went. This resulted in a divergence between the visual and tactile systems. For example, VASL signers do not recruit the body of the addressee in routine communicative contexts. The introduction of additional articulators brings new affordances and limitations for the production and reception of signs, as well as new derivational possibilities. These changes began with the emergence of a new, tactile habitus and the reconfiguration of the social and deictic fields. As deictic signs were instantiated in these new fields, they were calibrated to them. Calibration eventually took on a logic of its own, which permitted the creation of signs that would not be predicted by the grammar of VASL, and yet are expectable from the perspective of the deictic field of TASL.
7.3 Effects of Deictic Integration on the Deictic System of TASL
In this chapter I have argued that deictic integration is leading to a divergence in how deictic signs are produced, received, and distinguished from one another. While there are elements, such as handshapes, borrowed from VASL (as in Figure 7.18), those elements are increasingly caught up in and organized by the deictic field of TASL. This, in turn, is leading to a morphological divergence in how polycomponential signs are generated, making it possible for TASL signers to create new signs, which would not be predicted and are not allowed by the grammar of VASL. This is the first moment in the emergence of TASL as a distinct, linguistic system.
In this chapter, I have focused on two categories of deictic signs: pointing signs and polycomponential signs. However, since agreeing verbs also integrate deictic and linguistic elements, it is expectable that they will also be affected by these processes, leading to a divergence in the syntactic systems of TASL and VASL as well. In addition, I predict that as the morphology of TASL becomes more systematized, it will diverge further from the morphology of VASL. This prediction is, in part, based on the fact that polycomponential signs, like those analyzed in this chapter, are a source of new lexical signs in most signed languages (Aronoff et al. 2003, McDonald 1982, Engberg-Pedersen 1993, Klima and Bellugi 1979, Schembri 2000, Shepard-Kegl 1985, Zeshan 2003). If the rules for generating polycomponential signs are being reconfigured, this should affect morphological processes in TASL more broadly, as the language changes over time. TASL is new and the effects of deictic integration have only begun to manifest. However, given stable conditions in the social and deictic fields, a more comprehensive reconfiguration of the grammar appears inevitable. In the next chapter, I discuss the effects of deictic integration on the sublexical structure of TASL. This is the second moment in the emergence of TASL as a distinct, linguistic system.
Chapter 8
The Sublexical Structure of TASL
8.1 Introduction
In this chapter, I argue that a reconfiguration of the deictic field of Tactile American Sign Language is leading to changes in the sublexical structure of the language. Research on language use among DeafBlind people in the United States,(1) conducted prior to the pro-tactile movement, describes differences in production and reception of signs as "accommodations" and "adjustments" (Collins and Petronio 1998; Collins 2004; Petronio and Dively 2006). Collins states that "Tactile ASL is a clear example of a dialect in a signed language" (2004:23), and Petronio and Dively concur, defining it as "a variety of ASL used in the DeafBlind community in the United States" (2006:57). I am arguing that the pro-tactile movement triggered a more radical divergence, resulting in two distinct linguistic systems: Tactile American Sign Language (TASL) and Visual American Sign Language (VASL). This chapter compares the sublexical structure of these two systems.
In section 8.2, I begin by distinguishing between tactile reception of VASL on the one hand and TASL on the other. I argue that tactile reception of VASL allows a visual language to be (partially) perceived tactually, without affecting the sublexical structure of Visual ASL, much as lip-reading allows a spoken language to be (partially) perceived visually, without affecting the sublexical structure of English. In contrast, TASL is an emergent language. Previous work on language use among DeafBlind people is not directly comparable to the phenomena examined here because the research was conducted prior to the pro-tactile movement, when DeafBlind people were engaging only in the tactile reception of VASL. This earlier work does, however, raise several problems that are relevant to the changes currently under way. These problems are addressed in section 8.2. In section 8.3, I introduce the sublexical structure of VASL as a baseline for comparison. In this section, I also introduce the notion of "phonology" as it has been applied in signed languages. Drawing on the analysis presented in Chapter 6, I argue that in order to know whether you are examining a phonological phenomenon or an interactional phenomenon, a "basic" set of participant frames must be established. Prior to the establishment of a basic frame, core lexical items cannot be distinguished from the instances of their use. In section 8.5, I review sublexical constraints that are relevant to the changes observed in TASL. In section 8.6, I show how changes in basic participant frames are affecting the production and reception of signs in a tactile field; since these changes are occurring in basic participant frames, they constitute changes in the sublexical structure of the language, as opposed to momentary, pragmatic effects. In section 8.7, I show how the constraints reviewed in section 8.5 are being reconfigured. I conclude that changes in the deictic field of TASL are putting pressure on the grammar in ways that are leading to a divergence in the sublexical structure of TASL and VASL.
8.2 Tactile reception of VASL versus TASL
Modifications that DeafBlind people were making to VASL prior to the pro-tactile movement have been analyzed as variations on the standard. Variation at the sublexical level has been documented in the use of signing space, changes in orientation, location, and movement (Collins and Petronio 1998:21-7). Many of these changes are linked analytically to non-linguistic elements and relations in the immediate environment. For example, differences in the use of signing space are linked to shifts in bodily configurations among participants.
Figure 8.1: The “Signing Circle”
Collins and Petronio argue that comparable phenomena can be observed among sighted users of VASL. The "signing circle" they refer to in the following passage is a canonical representation of the space within which signs are produced. A version of the signing circle is reproduced in Figure 8.1.(2) The circle is meant to mark the outer boundary of this space for VASL. Collins and Petronio note that
[u]nder certain conditions, the signing space (the circle) can shift in visual ASL. For instance, if a signer is standing in the street and signing to someone who is looking out a second-floor window, the circle shifts upward. When the signing space shifts, the location of signs shift in relation to the signer’s body. For example, the citation form of now is located about lower chest level. When the signing space shifts upward as the signer communicates with someone on the second floor, the location of now shifts upward to about chin level from the normal chest level. If two people want to have a private conversation and “whisper,” they will greatly reduce their signing space. If a person signs to someone very far away, the signing space will be noticeably increased.
Two significant problems are raised by these observations. First, momentary shifts in perspective within a given interaction must be distinguished from more lasting shifts in the sensory orientation of the language user. One of the most explicit aims of the pro-tactile workshops was to establish participant frameworks that would allow DeafBlind people to communicate directly with one another, rather than relying on sighted people to mediate. This required DeafBlind people to cultivate tactile sensibilities. Toward this end, they wore blindfolds to discourage reliance on remaining vision; they engaged in activities where the aim was to describe objects according to tactile, rather than visual qualities; and they played games such as “tactile pictionary” in order to develop tactile ways of observing the non-linguistic activity of others. These efforts, in addition to the fact of significant vision loss, led to lasting shifts in sensory orientation and new DeafBlind subjectivities and modes of interaction (see Chapters 5 and 6). This kind of shift in habitual modes of orienting to the immediate environment must be distinguished analytically from transient shifts in perspective.
A second related problem is the relationship between the signing circle and the space within which actual utterances unfold. The signing circle is a typified representation in the same way that the citation form of a word in a dictionary is a typified representation. When we look up a word in the dictionary, we do not assume that the form we see is specific to loud environments, bright lights, or situations where the person we are talking to can’t hear certain frequencies. The same is true for representations of “signing space.” This is because in both cases, there is a distinction operating between phenomena organized by the linguistic system and phenomena organized by the deictic field. A limit on where lexical signs can be produced within basic participant frameworks (see chapter 6), constitutes a linguistic constraint on the sublexical structure of the language. Variation in the way signs are produced in the course of interaction does not necessarily signal a change in those underlying constraints.
Collins and Petronio’s comparison between signing space in Tactile and Visual ASL is operating across linguistic and non-linguistic domains. In order to describe changes in the sublexical structure of TASL, momentary effects of language use must be distinguished from changes in the linguistic system. This is only possible given a clear analytic distinction between “participant frameworks” and “participant frames.”
Participant frameworks are the emergent configurations that communicative agents occupy in the unfolding of an interaction.(3) A particular configuration found on one occasion, such as a Deaf, sighted person on the first floor signing to a Deaf, sighted person on a second-floor balcony, is an example of a participant framework. In contrast, participant frames are the repository of regularities that emerge in participant frameworks across encounters. Participant frameworks can be highly contingent on momentary dynamics in the physical or interactional environment; however, under the weight of repeated use and habituation, variation in certain frameworks settles out over time, yielding relatively stable and repeatable "participant frames."(4) As was discussed in Chapter 6, participant frames in the deictic field of TASL have shifted. This means that the unmarked contexts for the production of lexical signs have also shifted, making changes in the sublexical structure of TASL distinguishable, analytically, from momentary effects of language use.
Collins and Petronio were observing communication between DeafBlind people in Seattle prior to the pro-tactile movement and therefore prior to the conventionalization of participant frameworks in a tactile field. This is why we see such a wide range of configurations in their analysis and no hierarchy among them. On this topic, they explain that
[t]he data contained many examples of tactile conversations with the signer and receiver in different positions. Varying positions included the following: both standing face-to-face; both sitting side-by-side; the signer sitting and the receiver standing, or vice versa; and in some cases the signer and receiver leaning across a table or another person as they communicated tactilely.
Just as underlying phonological units are realized differently in different contexts, underlying participant frames manifest in different ways in situated frameworks. However, a description of participant frames should not include any information that requires reference to the infinite array of possible contextual circumstances in which participant frames might or could be instantiated. They must assume typified spatial relations between speaker and addressee, typified acoustics, lighting, etc. The cases described by Collins and Petronio take into account many contingent dimensions of context. For example:
In one occurrence, two people were leaning across a table. Both had their arms almost completely outstretched; their hands touched over the table. The signer signed neat, a sign located on the lower cheek. As neat was signed, the signer leaned forward and shortened the distance the hand had to move to contact the lower cheek. Because of the shortened distance, the receiver was able to remain connected with the signer's hand.
They note that this type of adaptation occurred most frequently when signers were at different heights, or were not able to move closer to one another for some reason (ibid.:24). Retrospectively, it is clear that the variation they witnessed was due to the absence of participant frames in a deictic field organized around tactile modes of access and orientation. As a result of the pro-tactile movement, a tactile deictic field was established and a repository of participant frames emerged (chapter 6). In what follows, analyses of sublexical constraints in TASL rely on this baseline of relatively stable participant frames, thereby excluding momentary effects of language use from the analysis.
8.2.1 Tactile Reception in a Visual Field
Prior to the pro-tactile movement, tactile reception of VASL was a compensatory strategy used to perceive a visual language. As such, tactile access to VASL signs was partial, and like lip-reading, required various forms of reconstruction and inference. In a study of the tactile reception of sign language, Reed et al. (1995) found that DeafBlind people received VASL signs with 60-85% accuracy.(5) Four categories of error were identified: (1) "semantic/syntactic, in which the substituted sign was dissimilar phonologically to the stimulus sign but had a semantic or grammatical relation to the target"; (2) "phonological, in which the formational properties, but not meaning were similar between stimulus and response"; (3) "semantic/phonological, in which the target and response were similar phonologically and semantically (often morphologically related)"; and (4) "random, which included errors that could not be classified into any of the preceding categories." The study showed that the largest source of errors was due to inaccuracies in the reception of the phonological parameters of VASL. This finding is explained as follows:
Given that ASL has evolved for reception through the visual sense, it is not surprising that some of its phonological properties are not easily perceived tactually. Perhaps further accommodations and adaptations of ASL for reception through the tactual sense would contribute to increased efficiency of communication with this method (Reed et al. 1995:15).
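The logic of this four-way taxonomy can be stated schematically. The sketch below is illustrative only: the predicate names are mine, and in the original study the similarity judgments were made by analysts rather than computed.

```python
# A minimal sketch of the four-way error taxonomy in Reed et al. (1995),
# as summarized above. The two similarity judgments are taken as given.

def classify_error(phonologically_similar: bool, semantically_similar: bool) -> str:
    if phonologically_similar and semantically_similar:
        return "semantic/phonological"  # often morphologically related pairs
    if phonologically_similar:
        return "phonological"           # similar in form, not in meaning
    if semantically_similar:
        return "semantic/syntactic"     # related in meaning, dissimilar in form
    return "random"                     # fits none of the preceding categories

# The largest source of errors in the study was phonological:
print(classify_error(phonologically_similar=True, semantically_similar=False))
```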
Patterns in communication among DeafBlind people in Seattle support the finding that tactile reception of VASL disrupts phonological processing. In the past, attempts to circumvent this problem have included further accommodations, as Reed et al. suggest. For example, the distinction between the VASL sign man and the VASL sign woman is inaccessible in a tactile field of engagement because the two signs constitute a minimal pair, differing only in the initial place of articulation. man makes contact with the forehead of the signer and then the chest, while woman makes contact with the chin of the signer and then the chest (see Figure 8.2). Since the landmarks of the face are not visible, they cannot be used as a backdrop to differentiate between the locations of the two signs. This problem recurs whenever two signs require a visible ground to be distinguished from one another. To accommodate, it has become common among some interpreters and DeafBlind people to use an older, less common sign for man--one that differs from woman in both location and handshape, instead of location alone--when their addressee lacks the visual capacity to distinguish between the more common signs.
Substituting semantically equivalent signs in cases like these can patch up the problem, and one can imagine a scenario in which this type of patching becomes the main mechanism for adapting a visual language to a tactile mode of reception. All you would need is a rule or set of rules that could be applied consistently. For example: For all minimal pairs in VASL that
Figure 8.2: man and woman in VASL ((a)-(c) man; (d)-(e) woman)
differ only in location, substitute one sign in the pair for a different, semantically equivalent sign. If this rule were adopted by everyone, then the replacement sign would become the standard sign and any ambiguity in distinguishing man from woman would be resolved. The result would be Visual American Sign Language plus a set of rules for sign-substitution based on phonological and semantic criteria. There are many other ways in which the visual system could have, and has been, adapted on a case-by-case basis as needed (for example, see Chapter 5; Collins 1994; Collins and Petronio 1998; Petronio and Dively 2006; Quinto-Pozos 2002; Reed et al. 1990; Reed et al. 1995). However, with the inception of the pro-tactile movement, this approach was abandoned and reciprocal, tactile access to the sign-vehicle was established instead. This led to a more radical reorganization of the language, which, I am arguing, included a divergence in the sublexical structure of TASL and VASL.
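The hypothetical rule described above can be rendered as an explicit procedure. The sketch below is not a proposal about actual VASL phonology; the feature values and the replacement entry are invented for illustration.

```python
# A sketch of the hypothetical sign-substitution rule described above:
# for minimal pairs differing only in location, substitute a semantically
# equivalent variant. All lexical entries here are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    gloss: str
    handshape: str
    location: str
    movement: str

def location_only_minimal_pair(a: Sign, b: Sign) -> bool:
    """True if the two signs differ solely in place of articulation."""
    return (a.handshape == b.handshape and a.movement == b.movement
            and a.location != b.location)

man = Sign("MAN", "open-b", "forehead", "contact-then-chest")
woman = Sign("WOMAN", "open-b", "chin", "contact-then-chest")

# An older variant of MAN that differs from WOMAN in handshape as well as
# location (the feature values are invented for illustration).
replacements = {"MAN": Sign("MAN-OLD", "bent-b", "brow", "arc-then-chest")}

def resolve(a: Sign, b: Sign) -> tuple:
    """Apply the substitution rule to a location-only minimal pair."""
    if location_only_minimal_pair(a, b) and a.gloss in replacements:
        return replacements[a.gloss], b
    return a, b

print(resolve(man, woman))  # MAN is replaced by the more distinct variant
```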
The practices that allowed for tactile reception of VASL are similar to what Sapir calls a “substitutive” system in several respects (1995 [1927]). According to Sapir, language (as opposed to other semiotic systems) is defined in part by its ability to directly communicate feelings and thoughts via a system of “phonetic symbols.” If the thoughts and feelings of a communicator have to pass through another system first, then you know you are dealing with a substitutive semiotic system like writing, or a supplementary semiotic system like
gesture(6).
In signed languages we see this in the distinction between fingerspelling and signing. Fingerspelling is not a language, but a substitutive system for representing language. In order to understand a fingerspelled word, knowledge of English must be drawn on.(7) In contrast, VASL signs are understood without passing through English. The only knowledge that is required is knowledge of VASL and of the world within which VASL is used. Systems like fingerspelling are useful because they allow for transfers in modality. A written English sentence represents a spoken English sentence visually. Likewise, fingerspelling represents English words in a rapid-fading visual channel, which is easily integrated into a signed utterance. The practices that DeafBlind people had developed for receiving visual signs tactually are like a substitutive system in the sense that they allow for a transfer of modality--visual to tactile--while preserving certain formal characteristics of the represented word or phrase. However, only part of the message is transferred, which causes the primary semiotic channel to be de-linked from the supplementary semiosis it would otherwise be embedded in, hence the necessity of reconstruction and inference.
Goffman argued for the central importance of paralinguistic (i.e. supplementary) cues, such as gaze, shifts in posture, and touch, for such things as managing turns, assessing reception via back-channeling, linking speech to the situated present, and showing evidence of attention. According to Goffman, these things are so important that, "for the effective conduct of talk, speaker and hearer had best be in a position to watch each other" (1981:129). The fact that people understand each other on the phone is not evidence of the singular importance of words, but rather of the power and efficacy of "reconstruction" and "transformation" (ibid.:129-30). It follows that if users of English only ever talked on the phone, the structure of interaction surrounding the English language would change, and audible conventions for marking intended addressee(s), providing back-channeling cues, showing evidence of attention, etc. would become required.
VASL, when received tactually, is detached from the supplementary semiosis which, from a sighted perspective, surrounds it. In VASL, primary and supplementary systems, which are produced by many parts of the body, are received visually. Prior to the pro-tactile movement, access for DeafBlind people was restricted mostly, if not entirely, to the hands of the signer. All other aspects of signs and the bodily cues that surround them had to be reconstructed via memory and inference. The same was true for non-linguistic facial expressions, bodily postures, back-channeling cues, and other supplementary semiotic signals. These sources of ambiguity were added to already-strained reception of manual, lexical signs.
Therefore, the reconstruction and transformation that was necessary is comparable to the kinds of reconstruction and transformation executed by the hearer in a patchy cell phone conversation. However, reconstruction and transformation are only effective for DeafBlind people insofar as visual memories and visual sensibilities are still intact. As orientations to, and memories of, the visual world fade, tactile reception of VASL grows increasingly ineffective. Leaders of the pro-tactile movement recognized this problem intuitively, and sought to re-unite lexical signs with the situated present. Before continuing on to the effects of this process on the sublexical structure of TASL, the sublexical structure of VASL is introduced as a baseline for comparison. The following section also serves as an introduction to the notion of "phonology" as it has been applied to signed languages.
8.3 The Sublexical Structure of VASL
There is far more work on the sublexical structure of VASL than can or should be reviewed here. What is important for our purposes is two-fold: (1) to grasp, in the most schematic sense, how morphemes in VASL are broken down into meaningless elements, and (2) to review some relevant constraints on how those elements combine with one another. Apart from a general introduction, the sublexical structure of VASL is considered only insofar as it contrasts with emergent regularities in TASL. These points of contrast, for the most part, involve categories of analysis that are so basic to the description of VASL that in more recent work, they are folded into any argument as part of the common sense of the field. For this reason, I focus on some of the earliest work on VASL, where basic structural facts are made maximally explicit (e.g. Stokoe 1960, Stokoe et al. 1965, Battison 1978, Friedman 1977, Mandel 1981, Supalla 1982).
8.3.1 Cherology and the Aspects of the Sign
William Stokoe and his colleagues (1960, 1965) produced the first grammatical description of VASL. In this early work, they made the case that American Sign Language has sublexical structure. They called the enterprise (and the level of linguistic organization) "cherology" (from the Greek χείρ (cheir), meaning "hand").(8) Stokoe's most basic categories correspond to location, hand configuration, and movement, which he calls the "aspects" of the sign. In order to avoid potential confusions, Stokoe proposes a set of technical terms for the formational parameters of any manual sign: tabula, designator, and signation, which he abbreviates as tab, dez, and sig. The tab is the surface on which a sign is produced. The dez is the configuration of the active hand(s). The sig is the movement--either the external movement of a hand configuration from one tab to another, or an internal movement in the hand configuration which may or may not result in a different hand configuration.
8.3.2 Tabula
At first glance, Stokoe says, the tabula of a sign appears to be determined by its proximity to readily distinguishable parts of the body, such as the forehead, the temple, the cheek, the ear, and so on (2005 [1960]:21). However, according to Stokoe, these areas of the body are not distinguished as such by the language. His example is the sign see. The tab for this sign is the eyes; however, in its phonetic production
the forefinger of the dez hand can easily brush the tip of the nose in passing across the front of the face, but when the sig is motion outward from the same region, particularly when the dez is such that the sign is interpreted as “see,” the signer and viewer tend to think of the marker as the eyes. Since no significance attaches to a contrast solely between nose and eyes as tab, these are analyzed as allochers of the tab “mid-face” (ibid.:21).
In other words, tabs are not specific places on the body, but regions with spatial thresholds. The phonetic production of the sign can vary within those thresholds, but once they are crossed, the meaning of the sign will change. The mid-face tab, for example, includes several areas of the face that in a nonlinguistic frame would be distinct, such as the eyes, the upper part of the cheek, and the bridge of the nose. It also excludes parts of the face that would be part of a coherent area, such as the lower and inner parts of the nose.
Initially, Stokoe identifies 10 tabs that are distinctive in ASL: the whole face or head, the upper face or brow, mid-face, lower face, cheek or side face, the neck, the trunk, the upper arm, the lower arm (below the elbow), and the hand (Stokoe 2005 [1960]:21). Lastly, he adds the trunk of the signer, which, he points out, is much larger than the face, and is not divided into smaller contrastive regions like the face is. He also adds the non-dominant arm and the non-dominant hand as potential tabs for the dominant hand, in addition to other roles they may play (ibid.:21). All of the tabs described thus far are what Stokoe calls "body tabs." There are also signs in which the tab is zero, meaning that the sign is articulated in the "neutral" space in front of the signer (ibid.:25). On this topic, he says: "The zero tab is less precisely located than the others but it is still a place, that space in front of the signer's body, where the hand can freely and comfortably move" (ibid.).
8.3.3 Designator
In order to describe the handshapes of the active hand, Stokoe appropriates the names of the fingerspelled letters of the English alphabet. However, he does not mean to say that these two categories of handshapes are equivalent. He compares the relationship between them to the relationship between phoneme and grapheme in spoken languages. Fingerspelling is a digital representation of a graphemic representation of sound units in English. Therefore, it is an "evanescent graphemic system," or a graphic system of representation that is rapid-fading, like speech.
The finger-spelled word is a series of digital symbols which stand in a one to one relationship with the letters of the English alphabet, but the word itself is a morpheme or combination of morphemes constructed from English language sounds on principles systematically described by the phonemics and morphophonemics of English (Stokoe 2005 [1960]:25).
Fingerspelled words are representations of units--either phonemes or morphemes--that are organized and shaped by the principles of spoken English, and not the principles of the sign language. For example, Stokoe argued that from the perspective of cherology, the hand configurations a, s, and t, which are distinct letters in the manual alphabet, are non-contrastive in the sign language, and therefore are allochers of a single chereme. In part, he attributes the grouping of these three configurations to phonetic constraints. a, s, and t are all formed with a closed fist, but the position of the thumb relative to the fist is slightly different in each. With such minimal perceptual differences, "conditions of visibility must be good for these differences of configuration to be distinguished" (Stokoe 2005 [1960]:22).
For a distinction to be contrastive in the language, Stokoe argues, the phonetic differences must be more perceptually salient: "The sign language [...] never makes a significant contrast solely on these differences. Instead the contrast is between any fist-like hand and all other (non-fist-like) configurations" (ibid.:22). Stokoe labels this chereme a/s. Another example is the b/5 chereme, which includes several flat-hand configurations. Its allochers look like the b-hand of the alphabet, the 4-hand, the 5-hand, and like a b-hand with the thumb extended. The flat hand is the common element. The fingers are either spread or closed, and the thumb is either extended or not (ibid.:22). In total, Stokoe identifies 16 contrastive hand configurations, most of which include several allochers.
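Stokoe's grouping of allochers under cheremes amounts to a many-to-one mapping from phonetic hand configurations to contrastive units. A minimal sketch, encoding only the two groupings reported above (the dictionary representation is mine):

```python
# A sketch of Stokoe's chereme/allocher analysis: phonetically distinct
# configurations that never contrast in the language map to one chereme.

ALLOCHERS = {
    "a/s": {"a", "s", "t"},                      # fist-like configurations
    "b/5": {"b", "4", "5", "b-thumb-extended"},  # flat-hand configurations
}

def chereme_of(hand_configuration: str) -> str:
    """Map a phonetic hand configuration to its contrastive chereme."""
    for chereme, variants in ALLOCHERS.items():
        if hand_configuration in variants:
            return chereme
    # Configurations not listed here are treated as cheremes in their own right.
    return hand_configuration

print(chereme_of("t"))  # "a/s": t never contrasts with other fist-like shapes
print(chereme_of("4"))  # "b/5": the spread flat hand is an allocher of b/5
```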
8.3.4 Signation
Stokoe breaks movement down into distinguishable types, including "gross movements," which are made with the elbow or shoulder joints, and smaller movements using the wrist and/or fingers (Stokoe 2005 [1960]:25). There are also movements that can be described according to the relation between dez and tab. These include descriptors for relative directions and qualities of movement such as "approach, touching, crossing, entrance, joining, and grazing, [...] separation and interchange."
Lastly, there are major planes and directional lines in the space in front of the signer that can distinguish one sign from another (ibid.:24). It is not the actual movement that matters, but the ways in which differences in motion result in differences in meaning. Stokoe writes: “The exactitude with which these approximate directions coincide with the coordinates of three dimensional space is immaterial. Polarity is important, and in some signs the opposite direction of sig motion is used to make a pair of antonyms: ‘borrow’ and ‘lend’ differ in sig only, the motion being respectively toward the signer and away. But both directions may combine in the sig of other signs, as in “explain” where the dez moves to and fro” (ibid.).
8.3.5 Morphocheremics
Stokoe argued that there are meaningless elements that combine to produce morphemes in the sign language, but that those processes of sign formation are patterned. He writes:
If every sign in this sign language were simply composed of a tab, a dez, and a sig, the morpheme list of the language could simply be determined by the formula:
no. of tabs × no. of dez × no. of sigs = no. of morphemes
But there are several different patterns of sign formation, not to mention compound signs and contractions: and the language in true linguistic fashion allows certain combinations of elements and not others (Stokoe 2005 [1960]:25).
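Taken literally, the formula gives only an upper bound on the morpheme inventory. The following calculation is illustrative: the tab and dez counts follow those reported earlier in this chapter, while the sig count is an assumption, since no figure is given here.

```python
# Stokoe's free-combination formula, computed with the counts reported in
# this chapter. The sig count is assumed for illustration. As Stokoe notes,
# the language "allows certain combinations of elements and not others,"
# so the actual morpheme list is far smaller than this bound.

n_tabs = 12   # approximate: ten body tabs, plus non-dominant arm/hand and zero tab
n_dez = 16    # contrastive hand configurations identified by Stokoe
n_sigs = 24   # assumed value; the text does not report Stokoe's sig count

upper_bound = n_tabs * n_dez * n_sigs
print(f"upper bound on morphemes: {upper_bound}")  # 4608 under these assumptions
```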
Stokoe did not posit any systematic phonological constraints on VASL, but he did make some preliminary observations which were pursued by those who wrote after him. For example, the zero tab, he notes, is limited to "the space in front of the signer's body, where the hand can freely and comfortably move" (Stokoe 2005 [1960]:25). He also suggests that with frequent use, signs shift from a body tab to a zero tab if the resulting sign is "sufficiently distinct in dez and sig from other signs" (ibid.:25). Likewise, he notes the tendency for frequently used two-handed signs to become one-handed (ibid.:27).
Later research addressed many of the topics raised by Stokoe in more depth. One thing that was not carried on beyond him, however, was his terminology. Stokoe established his terms in order to bring out similarities between spoken and signed languages, but at the same time, he was unsure how strong the comparisons were, and therefore felt that distinct but similar terms were necessary. In 1976, when the new edition of The Dictionary of American Sign Language was published, more evidence had been produced. Some of this evidence suggested that in addition to the analyzability of signs into meaningless elements, the ways in which those elements combine are systematically constrained. If the meaningless elements of spoken and signed languages are constrained in similar ways, Stokoe writes, "the 1960 coinages chereology, chereme, and allocher are no longer needed" (1965:iv). Even for Stokoe, then, these terms fell out of use, and the standard terms used for spoken languages replaced them.
After Stokoe, researchers began to discover constraints on the way meaningless elements were combined in VASL, at which point the phonological system began to look like a series of reductions. For example, Battison (1978) begins with the unrestricted human vocal apparatus. The human body, he says, can make a wide range of sounds of which only a small portion can be recruited for speech (ibid.:20). Phonological constraints act on this limited range of sound to produce a finite set of units. These units are combined in rule-governed ways to yield the allowable morphemes of a specific language, including their alternations when they occur in utterances (ibid.). By analogic extension, the human body can make a wide range of gestures. Phonological constraints in signed languages act on some sub-set of physically possible gestures to produce a finite set of units, which when combined in rule-governed ways, produce the allowable morphemes in a language (as well as their alternations when combined with one another in utterances) (ibid.). These units include handshape, location, and movement, and combine to form signs that are systematically distinguishable from other signs in the language (ibid.:21-3).
In the case of both spoken and signed languages there is a series of reductions enacted in theory as increasingly demanding constraints are imposed on the capacities of the human body. At the outer phonetic limits, capacity is primary. That is to say--there will be no gestural or sonic material admitted into the language that cannot be produced or perceived by the human body. However, the changes that triggered a reconfiguration of the sublexical structure of TASL can only be partially explained by limits on sensory capacity. At least as significant were changes in sensory orientation and embodied sensibilities. These are not matters of capacity, but matters of convention and habituation. Given this, the relevant question isn’t whether or not DeafBlind people can see or feel the sign-vehicle. The relevant question is whether or not they have access to it, given habitual modes of attention in conventional participant frames and bodily configurations. One of the things that structured access to the sign vehicle among DeafBlind people in Seattle was the emergence of two, competing participant frames.
8.4 Participant Frames in the Deictic Field
During the pro-tactile workshops in 2010 and 2011, two competing participant frameworks and their attendant bodily configurations emerged as “basic” (Hanks 1990:148-152): (speaker-addressee) and (speaker-addressees). The first is realized via conventionalized two-person bodily configurations, as in Figure 8.3, and the second is realized via conventionalized three-person bodily configurations, as in Figure 8.4. Each framework exerted different pressures on the production and reception of signs.
Figure 8.3: Two-person Configuration
In Figure 8.4, Adrijana, who is in the middle, is signing no to two interlocutors. In a three-person configuration like this, all signs must be duplicated, so that there is one copy for each addressee (see Figure 8.5). In the case of no, duplication is straightforward, because in VASL, this is a one-handed sign (see Figure 8.6).(9)
However, in the case of two-handed signs, production is more complicated. There are three types of two-handed signs in VASL and two types of one-handed signs. Each sign type is
Figure 8.4: Three-Person Configuration
Figure 8.5: Duplicated One-Handed Sign
Figure 8.6: no in VASL
defined as follows (Battison 1978:28-29):
Type 0: One-handed signs articulated in free space without contact (e.g. preach as in Figure 8.7).
Type X: One-handed signs which contact the body in any place except the opposite hand (e.g. apple as in Figure 8.8).
Type 1: Two-handed signs in which both hands are active and perform identical motor acts; the hands may or may not contact each other, they may or may not contact the body, and they may be in either a synchronous or alternative pattern of movement (e.g. which as in Figure 8.9).
Type 2: Two-handed signs in which one hand is active and one hand is passive, but both hands are specified for the same handshape (e.g. name as in Figure 8.11).
Type 3: Two-handed signs in which one hand is active and one hand is passive, and the two hands have different handshapes (e.g. discuss as in Figure 8.10).
Type C: Compounds which combine two or more of the above types.
Figure 8.7: Type 0 Sign preach in VASL
Figure 8.8: Type X Sign apple in VASL
The interaction of the two manual articulators in all VASL signs is constrained at the sublexical level (e.g. van der Hulst 1996, Sandler 1993, Eccarius and Brentari 2007, Morgan
and Mayberry 2012, Stokoe 1960, Battison 1978, Channon 2004, Napoli and Wu 2003). New, and importantly conventional, participant frameworks among DeafBlind people are exerting pressure on the way the manual articulators interact, and therefore on this level of grammatical organization.
Figure 8.9: Type 1 Sign which in VASL (movement is alternating)
Figure 8.10: Type 3 Sign discuss in VASL
Figure 8.11: Type 2 Sign name in VASL
In particular, the role of the non-dominant hand is changing in three-person configurations. While in VASL, the hands work in tandem to produce two-handed signs, in TASL, each hand must produce an independently meaningful sign: one for each addressee. Therefore, the reconfiguration of basic participant frameworks is leading to language-internal changes.
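The pressure that duplication places on each of Battison's sign types can be summarized schematically. The sketch below is a simplification of the pattern described above and in section 8.6, not a description of attested TASL forms.

```python
# A sketch of the duplication pressure in three-person configurations:
# each hand must carry an independently meaningful copy of the sign, one
# per addressee. Type labels follow Battison (1978); the strategies are
# a simplification for illustration.

def duplication_strategy(sign_type: str) -> str:
    if sign_type in {"0", "X"}:
        # One-handed signs: each hand can simply produce its own copy.
        return "duplicate directly, one copy per hand"
    if sign_type == "1":
        # Both hands already perform identical motor acts; alternating
        # movement may be produced synchronously instead (see section 8.6).
        return "near-duplicated already; alternating movement may synchronize"
    # Types 2 and 3 assign the hands asymmetrical roles, so neither hand
    # alone constitutes the sign; these forms require restructuring.
    return "restructure: no hand is free to serve as a passive base"

for t in ["0", "X", "1", "2", "3"]:
    print(f"Type {t}: {duplication_strategy(t)}")
```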
8.5 Sublexical Constraints on Two-Handed Signs in VASL
In comparing the sublexical structure of spoken and signed languages, Battison points out that the bilateral symmetry of the body (two arms, two hands, two sets of fingers, and so on) is imperfect from the perspective of the signer (Battison 1978:26). One side of the body is always more dominant than the other. Battison writes that “this opposition between potential visual symmetry and the actual manual asymmetry of the body creates a dynamic tension of great importance for the formational organization of signs” (ibid.:26). In order to capture some of the formal consequences of this fact, Battison provides several terms.
Like Stokoe, he rejects the terms "left" and "right" because the left- or right-handed production of a sign is non-distinctive in ASL. The first set of terms used in place of "left" and "right" are "dominant" (the hand preferred for most motor tasks) and "non-dominant" (the other hand) (ibid.:27). The second set of terms is "active" and "passive," which together describe the roles taken by either the dominant or non-dominant hand in the production of a given sign. The active hand is the hand in motion, while the passive hand is the hand that does not move, or moves very little relative to the active hand. In other words, "The active hand has a much larger role and executes a more complex motor program than its passive partner, which can be absolutely stationary" (ibid.). Despite noted exceptions (Battison 1974; Klima and Bellugi 1975; Frishberg 1976b [cited in Battison 1978:27]), Battison argues that the dominant hand tends to assume the active role, while the non-dominant hand tends to assume the passive role (ibid.).
In describing the orientation and location of the hands relative to the body, the same issue of left/right arises, and another pair of terms is proposed. For signs that make contact with the same side of the body with respect to the active hand, the term “ipsilateral” is used. For signs that make contact with the opposite side of the body with respect to the active hand, the term “contralateral” is used. Battison’s examples are the pledge of allegiance and a military salute. In the first, the dominant hand contacts the contralateral breast (ibid.:28). In the second, the dominant hand contacts the ipsilateral forehead.
8.5.1 Symmetry and Dominance Conditions
For the subset of signs that are produced using two hands, Battison proposes two interlocking phonological constraints--the Symmetry Condition and the Dominance Condition.
The Symmetry Condition states that (a) if both hands of a sign move independently during its articulation, then (b) both hands must be specified for the same location, the same handshape, the same movement (whether performed simultaneously or in alternation), and the specifications for orientation must be either symmetrical or identical (Battison 1978:34).
The Dominance Condition... states that (a) if the hands of a two-handed sign do not share the same specification for handshape (i.e. they are different), then (b) one hand must be passive while the active hand articulates the movement, and (c) the specification of the passive handshape is restricted to be one of a small set: a, s, b, 5, g, c, o... Type 3 signs obey this constraint with very few exceptions (Battison 1978:35).
These handshapes that occur on the passive side of two-handed signs are unmarked in two respects. In terms of both articulation and perception, they are maximally distinct and geometrically basic:
a and s are closed and maximally compact solids; b is a simple planar surface; 5 is the maximal extension and spreading of all projections; g is a single projection from a solid, the most linear; c is an arc; o is a full circle (Battison 1978:36).
Battison argues that these handshapes are unmarked phonologically as well, since they appear very frequently and in many contexts in VASL, they were present in all signed languages that had been described when Battison was writing, and they are the first handshapes mastered by deaf children learning VASL (Battison 1978:37).10
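These two conditions lend themselves to a rule-like statement. The following sketch, in Python, is my own illustration rather than a formalism from Battison; the Hand structure and feature names are hypothetical simplifications, with orientation omitted.

    from dataclasses import dataclass

    # The unmarked handshapes permitted on the passive hand (Battison 1978:35).
    UNMARKED_HANDSHAPES = {"a", "s", "b", "5", "g", "c", "o"}

    @dataclass
    class Hand:
        handshape: str   # e.g. "b", "5", "g"
        location: str    # place of articulation
        movement: str    # e.g. "circular", "tap", "none"
        moves: bool      # does this hand move during articulation?

    def satisfies_symmetry_condition(h1: Hand, h2: Hand) -> bool:
        """If both hands move, they must match in handshape, location, and movement."""
        if not (h1.moves and h2.moves):
            return True  # the condition only constrains signs in which both hands move
        return (h1.handshape == h2.handshape
                and h1.location == h2.location
                and h1.movement == h2.movement)

    def satisfies_dominance_condition(active: Hand, passive: Hand) -> bool:
        """If the handshapes differ, the passive hand must be stationary and unmarked."""
        if active.handshape == passive.handshape:
            return True  # the condition only constrains signs with differing handshapes
        return (not passive.moves) and passive.handshape in UNMARKED_HANDSHAPES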
8.5.2 Weak Drop in VASL
Two-handed signs in VASL can undergo a phonological process called “weak drop” (Padden and Perlmutter 1987), where the non-dominant, or “weak” hand drops out and a one-handed variant is expressed. However, this process is constrained. First, in VASL, alternating signs do not undergo weak drop (Padden and Perlmutter 1987:350). Second, once a sign has undergone weak drop, it cannot undergo certain morphological processes (such as compounding) (Sandler 1993:347-353) and certain forms of inflection (Padden and Perlmutter 1987:367-8). Third, two-handed variants are basic, while one-handed variants are not (Padden and Perlmutter 1987:351). If the two-handed variant were to disappear in the underlying representation, or be replaced by the one-handed variation, distinctions between minimal pairs in VASL would be obscured (ibid.).
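The logic of these restrictions can be sketched in the same style; again, the field names are hypothetical, and the sketch is only a summary of the constraints just cited.

    from dataclasses import dataclass

    @dataclass
    class TwoHandedSign:
        alternating: bool    # do the two hands move in alternation?
        weak_dropped: bool   # has the one-handed variant been expressed?

    def can_undergo_weak_drop(sign: TwoHandedSign) -> bool:
        # In VASL, alternating signs do not undergo weak drop
        # (Padden and Perlmutter 1987:350).
        return not sign.alternating

    def can_undergo_compounding(sign: TwoHandedSign) -> bool:
        # Once a sign has undergone weak drop, certain morphological processes,
        # such as compounding, are blocked (Sandler 1993:347-353).
        return not sign.weak_dropped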
8.6 Changes in Sign Production
In order to understand how new participant frames are affecting the sublexical structure of TASL, I located signs which, in VASL, would fit each of Battison's two-handed categories (Type I, Type II, and Type III). I then documented how their production and reception changed when instantiated in a tactile field. For each type of two-handed VASL sign (Type I, Type II, and Type III), three sets of data were collected. Set 1 includes signs produced by people who have had minimal exposure to pro-tactile practices. This set was taken from the first few weeks of the pro-tactile workshops. Set 2 includes signs produced by people who had attended 2 1/2 weeks or more of the pro-tactile workshops. Set 3 includes signs produced by the instructors of the workshops, who had been engaged in developing pro-tactile practices for about four years already at the time of the workshops.
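The coding can be pictured as a flat table of tokens. The sketch below is a reconstruction for illustration only; the field names and the example token are invented, and the codes anticipate the labels introduced in the following sections.

    from collections import Counter

    # One coded token (values invented for illustration).
    token = {
        "sign": "NOW",      # gloss of the sign
        "vasl_type": "I",   # Battison's type: "I", "II", or "III"
        "set": 2,           # 1 = minimal exposure, 2 = 2.5+ weeks, 3 = instructors
        "code": "ipsi",     # e.g. "ipsi", "sync", "alt", "drop", "no change"
    }

    def distribution(tokens, exposure_set):
        """Percentage of each code within one exposure set (cf. Figures 8.12-8.14)."""
        counts = Counter(t["code"] for t in tokens if t["set"] == exposure_set)
        total = sum(counts.values())
        return {code: round(100 * n / total) for code, n in counts.items()} if total else {}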
In this section, I argue that constraints on symmetry in two-handed signs are growing more demanding as a result of deictic integration. This is leading to a reduction in formational complexity, when compared to VASL lexical signs. In the next chapter, I show how this reduction in complexity is complemented by an increase in formational complexity in poly-componential signs. This redistribution of complexity across grammatical subsystems is evidence that the tactile and visual systems are undergoing a grammatical divergence.
8.6.1 Type I Signs
Type I VASL signs are defined by Battison (1978:28-9) as follows:
Two handed signs in which both hands are active and perform identical motor acts; the hands may or may not contact each other, they may or may not contact the body, and they may be in either a synchronous or alternative pattern of movement (which, car, restrain-feelings).
In a tactile field, the aim of the signer in a three-person configuration is to produce a perfectly duplicated message so there is one copy for each addressee. Given this aim, Type I signs should change the least, since the motor activity of each hand is, by definition, already identical in VASL. However, two features of this sign type consistently changed over the course of the workshops. First, in VASL, the movement of the two articulators can be alternating rather than synchronous (as in which). As the workshops progressed, there were more and more instances where alternating movement would be expected in VASL but synchronous movement was produced instead. This was coded as "sync."
Second, this type of sign can contact the ipsilateral, contralateral, or mid-line body. It can also be produced in neutral signing space but in alignment with the ipsilateral or contralateral body. In the workshops, there was a trend toward ipsilateral contact or alignment where contralateral or central contact or alignment would be expected. The orientation of the hands also tended to shift: instead of the hand extending from contralateral to ipsilateral alignment (as in the VASL sign now), the hand rotated so that it extended forward, away from the ipsilateral body, on both sides.
Lastly, in some signs, such as the two-handed version of inform-you, one hand may contact the body while the other hand does not, despite the fact that the motor activity of the articulators is the same. These signs tended to change so that both hands made contact with the ipsilateral body. All of these cases were coded as "ipsi." The signs that did not change were coded as "no change." Figure 8.12 represents the percentage of signs in each data set that diverged from what would be expected in VASL.
Figure 8.12: Changes in Type I Signs
For signers with little exposure to pro-tactile practices, almost 100% of signs were produced as one would expect in VASL. As exposure increased, greater percentages of signs diverged from VASL. This trend is represented by the line labeled “NO CHANGE” in Figure 8.12. The line labeled “IPSI” represents Type I signs where contralateral or central contact or alignment would be expected in VASL, but ipsilateral contact or alignment occurred instead. As is shown, ipsilateral contact or alignment became increasingly common as exposure increased. Lastly, the line labeled “SYNC” represents signs where alternating motion would be expected in VASL and synchronous motion occurred instead. Again, the percentage of signs in the data set where this change occurred increased steadily with exposure.
Type I Signs (Set 1)
In the first set,11 as shown in Figure 8.12, there was very little divergence from VASL. This sign type is maximally symmetric compared to the other two-handed sign types, so there are few asymmetries in access for the two addressees. However, some issues did arise. There are near-minimal pairs in VASL that become minimal pairs in a tactile field. For example, the signs culture and class differ in two respects, but in a three-person configuration, only one of these differences is perceptible. culture is produced with the active hand in a c-handshape. The passive hand is in a g-handshape, which functions as a place of articulation (as opposed to an active articulator). In a three-person configuration, the passive hand tends to duplicate the handshape of the active hand (see section 8.6.4). If this occurs, the resulting sign culture is indistinguishable from class.12 The same ambiguity arises in the two-person configuration if the addressee is using one-handed reception. In both cases, the distinction between the two meanings is either not signaled formally in the language or not accessible, so an inferential process is required.
Another source of ambiguity is alternating vs. synchronous movement of the two hands. This distinction is no longer perceptible with access to only one of the signer's articulators. For example, at the beginning of the pro-tactile workshops, the participants used a modified version of the VASL sign sign to describe the duplicate signing they were doing in three-person configurations. In VASL, sign would be produced with both hands in a g-configuration and the movement of each hand would be alternating. In order to describe duplicate signing, the movement was made synchronous. The resulting sign reflected the meta-linguistic observation that in duplicate signing, symmetry is maximized. Ironically, the difference in meaning signaled by alternating vs. synchronous movement was not perceptible in the configurations it was meant to describe. Within a couple of weeks, the sign changed to sign same-time, where sign was once again alternating as in VASL. Although the participants of the workshops did not orient to these problems in any observable way, these issues foreshadowed changes that manifested in Set 2 and Set 3.
Type I Signs (Set 2)
In this set,13 there was an overall shift toward greater synchronization and symmetry between the two articulators. Figure 8.12 represents an increase in signs produced with ipsilateral contact and an increase in signs produced with synchronous movement. There were also instances where the signer started with an alternating sign and, mid-sign, altered it so that it was synchronous. In one case, the signer started to articulate dialogue, produced with two g-configurations alternating at the chin. Before he completed the sign, however, he switched to the VASL Type 0 sign talk and duplicated it.
This kind of repair happened not only with the replacement of one sign type with another, but also within the production of a particular sign. In these cases, phonological features were replaced and the sign itself was changed. For example, in VASL, eat is a one-handed sign. Inflected for progressive aspect, it becomes a two-handed, alternating sign. When this sign occurred in a three-person configuration, the signer started out alternating, and then mid-sign her hands fell into alignment, and the movements became synchronous. Signs that occurred more frequently, like people, began to be predictably produced with synchronous rather than alternating movement.
In signs that make contact with the signer's body, two patterns were observed. First, a preference for ipsilateral contact over contralateral contact emerged, as did a preference for horizontal symmetry over vertical symmetry. These tendencies led to changes in where signs were produced. For example, the VASL sign enjoy is produced with both hands in a 5/b-configuration, stacked vertically on the mid-line of the signer's chest. In a three-person configuration, the place of articulation shifted, so the hands were horizontally aligned and both made contact with the ipsilateral chest. The same shift from vertically aligned mid-line contact to horizontally aligned ipsilateral contact occurred with the sign happy. Another example is ask as in "request," which is produced with two hands in a 5/b-configuration. The hands make contact with one another at the mid-line. This sign occurred three times in this data set. In one of these cases, there was no contact between the hands, and rather than being aligned with the vertical mid-line of the signer's body, both hands moved toward ipsilateral alignment. In VASL, information is symmetrical, except that the dominant hand contacts the forehead and the non-dominant hand does not. In this data set, information occurred twice. Once, it was produced as one would expect in VASL. The second time, both hands contacted the ipsilateral forehead, increasing symmetry.
Type I Signs (Set 3)
Among the instructors of the workshops, the same patterns held. For example, the VASL sign body is produced with two hands--one stacked vertically above the other on the mid-line of the signer’s chest. In this data set, it was produced with the hands aligned horizontally, each one making contact with the ipsilateral chest, rather than the mid-line. Likewise, the sign interesting is produced in VASL with the hands in vertical alignment with one another on the mid-line of the signer’s chest. In this data set it is produced with horizontal alignment, both hands contacting the ipsilateral chest. explain is sometimes signed with alternating movement (as in VASL) and sometimes with synchronous movement. The VASL sign enjoy, like body, involves two hands, vertically stacked on the mid-line of the signer’s chest in VASL. In this data set, it is produced with horizontal alignment, both hands contacting the ipsilateral chest. people is signed in this data set with synchronous movement, where in VASL, it would be signed with alternating movement. communicate is produced with alternating movement in VASL, but in this set it is produced with synchronous movement.
One additional issue raised by this data set is the degradation of iconic relations that can result from changes in production. Consider, for example, the sign replace. In VASL, this sign represents the idea of replacement with two f-handshapes. Via alternating movement, one f-handshape "replaces" the other. As with the other Type I signs, this sign moved from alternating to synchronous movement, and both hands moved further toward ipsilateral alignment with the signer's chest. In the resulting sign, iconic links to the activity of replacement are severed.
8.6.2 Type II Signs
Type II Signs are defined by Battison as follows:
Two-handed signs in which one hand is active and one hand is passive, but both hands are specified for the same handshape (name, short/brief, sit/chair).
Type II signs present more of a challenge than any other sign type to the signer in a three-person configuration. The aim is to duplicate the message so there is one copy for each addressee. Type II signs are symmetrical in terms of hand configuration, but potentially asymmetrical in all other respects. The passive hand often acts as a place of articulation for the active hand (as in sit). In Type 0 signs, the two hands are maximally asymmetrical, since one hand is not used at all. These signs were easily duplicated by signers in a three-person configuration. On the other end of the spectrum, Type I signs are almost fully symmetrical, and duplicating them required minimal adjustment. Type II signs are a mixture of symmetrical and asymmetrical. When sublexical constraints on the formation of this sign type were integrated with deictic constraints on three-person communication, new regularities in sign formation emerged.
8.6.3 Changes in Type II Signs
Figure 8.13: Changes in Type II Signs
Type II Signs (Set 1)
Participants in the early weeks of the workshops often failed to duplicate this entire category of signs. Out of 74 tokens, 58% were not duplicated. This meant that one of the addressees did not have access to these signs, except via the non-dominant hand. If the addressee on the non-dominant side noticed, they intervened. It was not always clear to them what was happening, though, and participants rarely reflected on such mistakes until later in the workshops. After being reminded many times, signers started pausing awkwardly when they encountered Type II signs, but usually moved on without executing any kind of repair.
Where duplication was attempted, there were two possibilities for how the signs changed. The first possibility was that the signer would duplicate the sign sequentially, the dominant hand playing the active role first and then the non-dominant hand (or in some cases vice versa). This was coded as "sequential alternation," shortened to "alternate" or "alt." There were 16 instances in this set (about 22% of tokens were alternated). The second possibility was that the non-dominant hand would be dropped altogether. This was coded as "non-dominant dropped," shortened to "drop." There were 14 instances of dropping in this set (about 19% of tokens were dropped).
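The reported proportions follow directly from these raw counts, as a quick check shows; whole-number rounding also explains why the three figures do not sum exactly to 100%.

    total, alternated, dropped = 74, 16, 14
    print(round(100 * alternated / total))   # 22 -> about 22% alternated
    print(round(100 * dropped / total))      # 19 -> about 19% dropped
    print(round(100 * (total - alternated - dropped) / total))
    # 59 -> close to the 58% reported above for tokens that were not duplicated;
    # the small gap presumably reflects rounding in the source figures.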
Type II Signs (Set 2)
In the second set, there were far fewer cases where the signs were simply not duplicated.14 Earlier on in this set, most Type II signs were duplicated sequentially. In the first production, the dominant hand played the active role and the non-dominant hand took on the passive role (or vice versa) and in the second production, the roles were reversed. As the workshops went on, there was an increasing tendency to drop the non-dominant hand altogether, duplicating the active hand’s role with the dominant and non-dominant hand simultaneously. Of the first 24 tokens in this set, only two dropped. Of the last 27 tokens of this set, 12 dropped. The tendency toward more dropping continued to increase.
Type II Signs (Set 3)
Among the instructors, dropping was even more common. Out of 66 tokens produced by the instructors, 39% were alternated and 42% were dropped. The remaining tokens were not duplicated.15 There was one sign in this set that was changed further. The VASL sign interrupt is signed with a b/5 passive hand and an active b/5 hand contacting the passive hand at the web between the thumb and the index finger. In the instantiation of interrupt in this data set, the passive hand was dropped and the active hand was duplicated.
8.6.4 Type III Signs
Type III signs are defined as follows by Battison:
Two-handed signs in which one hand is active and one hand is passive, and the two hands have different handshapes. Note that signs which were excluded specifically in type X fit into types 2 and 3--one hand contacts the other (discuss, contact (a person)).
Type III signs are very similar to Type 0 (one-handed) signs with two exceptions. First, the place of articulation is the non-dominant hand rather than the body of the signer or neutral space. Second, this type of sign almost always obeys the Dominance Condition, so the configuration of the non-dominant hand is restricted to one of the following unmarked handshapes: a, s, b, 5, g, c, o.
8.6.5 Changes in Type III Signs
Figure 8.14: Changes in Type III Signs
When Type III signs were embedded in a tactile field, they were reconfigured in much the same way that Type II signs were reconfigured (See Figure 8.14.). With less exposure to pro-tactile practices (Set 1), signers tended to produce these signs as they would be produced in VASL. Set 1 included 61 tokens produced by 10 signers. 46% were produced as one would expect in VASL, 23% were alternated, and 30% were dropped. As exposure to pro-tactile practices increased (Set 2), signers tended to alternate the dominant/non-dominant configuration of the sign. Set 2 included 51 tokens produced by 6 signers. 25% were produced as one would expect in VASL, 51% were alternated, and 24% were dropped. Among the instructors, who had the most exposure to pro-tactile practices (Set 3), Type III signs were produced most often by dropping the non-dominant hand altogether, which was coded as “drop.” Set 3 included 39 tokens produced by 2 signers (the instructors). 0% were produced as one would expect in VASL. 47% were alternated, and 51% were dropped.
This tendency toward dropping the non-dominant hand was also visible in patterns of self-repair. There are two instances in the data where a signer starts out alternating and part way through drops the non-dominant hand instead, or alternates the sign and then immediately repeats the sign, dropping the non-dominant hand instead. There are no instances where the signer starts out dropping the non-dominant hand and then switches to alternation. This is further evidence that the system is losing an articulator for purposes of lexical production in Type II and Type III signs. These changes have implications for sublexical constraints on two-handed signs in VASL, including constraints on symmetry across the two manual articulators and constraints on “weak drop.”
8.7 Implications for Sublexical Constraints in TASL
Since the pro-tactile movement took root in Seattle in 2006, basic participant frameworks have shifted, and as a result, the production and perception of two-handed signs has changed. In this section, I show how these changes are causing a reconfiguration in sign types as well as changes in constraints on symmetry and on weak drop.
8.7.1 Symmetry
In a three-person configuration, from the perspective of the signer, Type 0 signs, which are "articulated in free space without contact" (Battison 1978:28), become Type I signs, which are "two-handed signs in which both hands are active and perform identical motor acts; the hands may or may not contact each other, they may or may not contact the body." However, the final portion of the definition of that sign type no longer holds: "[the hands] may be in either a synchronous or alternative pattern of movement" (ibid.).
In a three-person configuration, signs tend toward synchronous movement and away from alternating movement. Type X signs in VASL, or "one-handed signs which contact the body any place except the opposite hand" (Battison 1978:28), become Type I signs, which are "two-handed signs in which both hands are active and perform identical motor acts." However, as with all other Type I signs in TASL, they tend to be produced with ipsilateral contact or alignment with the body of the signer, where contralateral or mid-line contact or alignment would be expected in VASL. In addition, synchronous movement is preferred to alternating movement. This means that in TASL, Type 0, Type X, and Type I signs are collapsed into a single category, all of which are under more demanding symmetry constraints than their corresponding category (Type I signs) in VASL.
For example, in Figure 8.15, the VASL sign fine (Figure 8.15a) is duplicated (Figure 8.15b). Contact with the signer's body moves from the mid-line to ipsilateral contact on both sides. The long line in the middle is an approximation of the mid-line on the signer's body and the two shorter lines on either side show the approximate point where the signer's thumbs contact his chest. In both cases--where synchronous movement is replacing alternating movement, and where ipsilateral contact is replacing mid-line contact--constraints on symmetry are becoming more demanding. The two articulators must be perfectly identical or motorically symmetrical in every respect. Type II and Type III signs are also collapsed into this category when duplicated, since they too must be perfectly symmetrical. Perfect symmetry is achieved by dropping the non-dominant hand and transforming it into a second active hand.

Figure 8.15: fine duplicated with ipsilateral contact. (a) VASL fine; (b) fine (duplicated)
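Schematically, the collapse can be stated as a mapping from Battison's VASL sign types onto a single TASL surface category. This is a minimal sketch of the claims above; the category label is ad hoc.

    def tasl_surface_category(vasl_type: str) -> str:
        """Surface realization of a VASL sign type in three-person TASL configurations."""
        if vasl_type in {"0", "X", "I"}:
            # Two active hands, synchronous movement, ipsilateral contact or alignment.
            return "duplicated-symmetrical"
        if vasl_type in {"II", "III"}:
            # The passive hand is dropped and the active hand is duplicated.
            return "duplicated-symmetrical"
        raise ValueError(f"unknown VASL sign type: {vasl_type!r}")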
8.7.2 Complexity
This collapse of all sign-types into one allows TASL signers to produce two-handed signs that are maximally redundant, thereby enabling them to address two people at the same time. Given this communicative aim, the two manual articulators no longer work in tandem as they do in VASL. Rather, they produce identical copies of a single sign (symmetry is maximized). In Battison's terms, this maximization of symmetry constitutes a minimization of formational "complexity," which Morgan and Mayberry succinctly capture: "A two-handed sign that shares all phonological aspects is the most redundant and therefore least complex [...] Increasing mismatches (departures from symmetry between the two hands) in each of these aspects create more complexity" (Morgan and Mayberry 2012:148).
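One rough way to operationalize this notion (a sketch of mine, not Morgan and Mayberry's metric) is to count the parameters on which the two hands of a sign mismatch:

    def formational_complexity(hand1: dict, hand2: dict) -> int:
        """Number of mismatched parameters; 0 = fully redundant, least complex."""
        parameters = ("handshape", "location", "movement", "orientation")
        return sum(hand1[p] != hand2[p] for p in parameters)

    # A perfectly duplicated TASL sign scores 0; a VASL Type III sign, with its
    # differing handshapes and hand roles, scores at least 1.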
8.7.3 Place of Articulation Features in TASL
From the analyst's perspective, there appears to be a shift from the mid-line toward ipsilateral contact. However, from the perspective of TASL signers, the signing space itself may have been halved. Under this analysis, the two shorter "ipsilateral" lines marked in Figure 8.15b represent duplicated mid-lines and the larger line in the same figure would be the boundary between the first and second signing space. This suggests a reconfiguration of constraints on signing space (and therefore the distribution of places of articulation) for the production of lexical signs16 in three-person configurations.
Insofar as phonological distinctions within the reduced signing space dissolve, and perceptual ambiguity increases, distinctive locations can be expected to be redistributed as the system develops further. Indeed, this is already occurring. As we will see in Chapter 9, signing space is extended in the production of polycomponential signs to incorporate places of articulation on the body of the addressee.
8.7.4 Weak Drop
In addition, constraints on "weak drop" (Padden and Perlmutter 1987), the process in which the non-dominant, or "weak," hand of a two-handed sign drops out and a one-handed variant is expressed, are changing. Weak drop in TASL violates constraints imposed by the grammar of VASL. For example, alternating signs in VASL do not undergo weak drop (Padden and Perlmutter 1987:350). In TASL, they do. In addition, minimal pairs become indistinguishable (as was discussed previously) if the distinction between one-handed and two-handed signs is collapsed. Padden and Perlmutter use the example of interesting and like; the former is a two-handed sign while the latter is a one-handed sign. In all other respects the two signs are identical (1987:351). Finally, morphological processes in VASL require both manual articulators (see Sandler 1993:347-353). Given a one-handed system, these processes must be accomplished some other way. Therefore, while it is true that the non-dominant hand is, in many cases, optional, this is not the case for all classes of two-handed signs, and it is not true when morphological processes like compounding are in play.
In TASL, communicative pressures are leading to decreased formational complexity in two-handed signs, and constraints on weak drop are being relaxed. This is producing ambiguities, which DeafBlind people are resolving in novel ways. These strategies and their implications for the ongoing grammatical divergence between TASL and VASL are discussed in the following chapter.
8.7.5 Which Participant Framework is Basic?
The analysis presented thus far relies on the assumption that the three-person configuration is, in fact, a basic-level participant framework. In order to determine whether this is the case, I examined the production of one-handed signs in two-person configurations. According to strictly pragmatic constraints, one-handed signs would only need to be duplicated in three-person configurations. If they are duplicated in two-person configurations, this would suggest that the motoric patterns shaped by the habituation of signers to three-person configurations are spreading to the linguistic system proper. It would also suggest that the changes in constraints discussed in the previous section will continue and the visual and tactile systems will continue to diverge.
In a two-person configuration, reception tends to be one-handed, and in three-person configurations, reception is necessarily one-handed. In Figure 8.16, the woman in the middle is signing the number three to two addressees. I have outlined the addressee on the right in the image. Her right hand is receiving the sign tactually, while her left hand is in contact with the second addressee. The duplicated three is being received by the other addressee's left hand. Backchanneling cues produced by the addressees are duplicated so that both the signer and the second addressee have access to them. This configuration also works to maintain co-presence between all three participants.
For one-handed signs in VASL, this three-person configuration requires the signer to duplicate the sign so there is one copy for each addressee. In a two-person configuration, there are two possibilities for the production of one-handed signs. They can be produced as they would be in VASL, or they can be duplicated, as they would be in a three-person configuration such as that pictured in Figure 8.16. If the second articulator is being used as it would be in three-person configurations, this is evidence that signers are becoming habituated to a different configuration of articulators, initiated by changes in the deictic field, but consequential for the sublexical structure of TASL.
Although it is quite early in the emergence of this new system, some such consistency would surely be necessary, since languages generally do not vary the complexity of the articulatory apparatus as the number of addressees changes. The lower-level cognition required to produce
Figure 8.16: One-handed Reception in 3-person configuration
signs within the phonological parameters of a particular language should recede into the liminal zones of a speaker’s consciousness so that cognitive and motoric resources can be freed up for other communicative tasks. If DeafBlind people duplicate one-handed signs regardless of whether they are in a two- or three-person configuration, the formal composition of signs remains constant, and sign production does not require the coordination of higher and lower-level cognitive resources.
In order to find out whether or not signers were duplicating one-handed signs in two-person configurations, I selected stretches of interaction where two-person configurations were in use and coded all of the one-handed signs used therein for ± duplication, the name of the signer, and the sign being used.
I collected three sets of data. Set 1 was produced by signers who were in their first couple of weeks in the pro-tactile workshops, and therefore, had had very little exposure to pro-tactile practices. Set 2 was produced by signers who were in their last few weeks of the workshops, and therefore had had more exposure to pro-tactile practices. Set 3 was produced by the instructors, who had been developing pro-tactile practices for several years before the workshops.
For Type 0 signs, signers who had had very little exposure to pro-tactile practices (Set 1) did not duplicate one-handed VASL signs in two-person configurations. Out of 40 tokens produced by 3 signers, 0% were duplicated. After a few weeks of exposure, duplication of one-handed signs increased dramatically. Out of 49 tokens produced by 5 signers, 35% were duplicated. Among the instructors, who had been developing pro-tactile practices for years,
Figure 8.17: Type 0 Signs in Two-Person Configurations
the rates for duplication were significantly higher than rates for Set 1, but they fell below those recorded for Set 2. Out of 43 tokens produced by 2 signers, 12% were duplicated (See Figure 8.17).
Figure 8.18: Type X Signs in Two-Person Configurations
After finding this pattern in Type 0 signs, I expected to find a similar pattern in Type X signs. However, rates of duplication increased among the instructors for Type X signs relative to Set 2 (see Figure 8.18). In Set 1, out of 47 tokens produced by 4 signers, 2% were duplicated. In Set 2, out of 53 tokens produced by 7 signers, 11% were duplicated. Among the instructors, 42 tokens were produced and 24% were duplicated. On the one hand, these results indicate a clear increase in duplication of one-handed signs in two-person configurations. On the other hand, the results for Set 3 in each sign type suggest conflicting projections for the development of TASL. In Set 1 for both sign types, signers were new to the workshops and were therefore very likely to be communicating as they would have outside of the workshops. Given the data for Type 0 signs only, it seems possible that duplication increases in the learning phase, when signing in three-person configurations is still far from automatic. As the interactional patterns become naturalized, signers can switch more fluently between duplication and non-duplication in three- and two-person configurations respectively. However, the data for Type X signs suggest instead that signers will continue to duplicate one-handed signs in two-person configurations.
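For comparison, the rates just reported can be gathered in one place; the values are transcribed from the text and Figures 8.17-8.18, while the structure and trend check are mine.

    # set -> (tokens, signers, proportion duplicated)
    duplication_rates = {
        "Type 0": {1: (40, 3, 0.00), 2: (49, 5, 0.35), 3: (43, 2, 0.12)},
        "Type X": {1: (47, 4, 0.02), 2: (53, 7, 0.11), 3: (42, 2, 0.24)},
    }

    for sign_type, by_set in duplication_rates.items():
        trend = [by_set[s][2] for s in (1, 2, 3)]
        rises_monotonically = all(a <= b for a, b in zip(trend, trend[1:]))
        print(sign_type, trend, rises_monotonically)
    # Type X rises monotonically with exposure; Type 0 peaks in Set 2 and falls
    # in Set 3 -- the discrepancy discussed below.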
More research will be necessary to resolve this discrepancy, as I was unable to find any patterns external to the data set that could explain these findings. First, I considered the difference in sign type--Type 0 signs do not make contact with the body of the signer, while Type X signs do. However, there is no reason why this difference should be so significant for duplication. Second, I looked into the semantics of the signs that were used, but could find no relevant pattern. The only significant external factor I found was that one signer duplicated one-handed signs more than all other signers. This signer had less experience using tactile reception prior to the workshops than others. She also had a physical problem affecting her tendons and joints during the workshops, so her mobility was slightly restricted. This suggests that increased cognitive and motoric demands lead to less variation in the production of signs across participant frameworks.
During the pro-tactile workshops, many new practices were introduced and signers had to make previously automatized processes of production and reception the focus of attention. This put more strain on cognitive and motoric coordination. It is possible that as cognitive and motor demands increase, signers will intuitively reduce variation in sign production. Since three-person configurations require a system that operates on a single manual articulator (which can be duplicated or not), and two-person configurations are more flexible, the former is more likely to become the default. Therefore, it is possible that TASL, as it develops, will provide phonological specifications for only one manual articulator, and the second articulator will optionally produce an identical copy. If this occurs, then the phonological system is losing the non-dominant hand as a place of articulation and a resource for marking phonological distinctions. This prediction is consistent with changes already taking place in the formation of polycomponential signs. Instead of using the non-dominant hand of the signer as a place of articulation, the hand and other areas of the body of the addressee are being recruited as places of articulation.
8.8 Effects of Deictic Integration on Sublexical Structure
The reconfiguration of the deictic field of TASL has led to the emergence of two competing basic participant frameworks among DeafBlind people in Seattle. One framework incorporates three participants, while the other incorporates two participants. In three-person configurations, signs must be duplicated so that one copy is produced for each addressee. The integration of the deictic field, which contains these structures, with the language, is putting pressure on the sublexical structure of TASL. From the addressee’s perspective, the language is moving from a two-handed to a one-handed system. From the signer’s perspective, more demanding constraints on symmetry are imposed on two-handed signs. Deictic integration is also pushing the phonological process of “weak-drop” beyond what the grammar of VASL allows. As a result, ambiguities arise often, which are difficult for DeafBlind people to resolve in interaction.
These changes mark the second moment in the divergence of TASL and VASL. In the next chapter, I show how DeafBlind participants are resolving ambiguities that arise from the loss of complexity in lexical signs by recruiting the hands and arms of the addressee as places of articulation and as articulators. I discuss the implications of these changes for further grammatical divergence between TASL and VASL.
Chapter 9
Formational Constraints on Complex Signs in TASL
9.1 Introduction
This chapter analyzes changes in formational constraints on signs known as “classifier constructions.” These constructions can be distinguished from lexical signs in at least two respects. First, they tend to encode meanings that are more complex than the meanings associated with lexical signs, and second, they tend to incorporate both linguistic and non-linguistic elements (Edwards 2012, Liddell 2003, Schembri 2003, Morgan and Woll 2007, among others).
In TASL, these signs are not produced on the body of the signer or in the space in front of the signer as they are in VASL, but rather, on the body of the addressee. This change is rooted in a broader shift in how DeafBlind participants orient to and access their environment. Prior to the pro-tactile movement, visual access was assumed. Individuals who could no longer communicate in ways that were normative for sighted people were expected to compensate in whatever way would be most effective for them, such as making adjustments in how signs were received and relying on sighted interpreters to relay information. Since the inception of the pro-tactile movement, reciprocal, tactile communication is becoming the norm instead. Everyone, whether they are sighted, partially sighted, or blind, is now expected to produce and receive signs in a reciprocal tactile channel.
This shift has led to a reconfiguration of figure/ground relations in the immediate environment, so that a tactually accessible ground is required for individuating objects, whether talk about those objects is involved or not. Linguistic signs are increasingly caught up in this pattern, since they, too, have to be individuated, or rather differentiated, against an accessible ground. Therefore, rather than being produced on and around the body of the signer, new TASL signs are often produced on the body of the addressee, where relative spatial locations can be easily perceived.
This process is not linguistic. However, as signs are transposed onto the body of the addressee, signers encounter new motor-perceptual affordances and limitations for producing and receiving signs and a divergence in the visual and tactile systems appears. For example, the amount of surface area in a given region of the addressee's body will limit the number of distinct locational targets allowed in that region. While several locations on the palm of the addressee can easily be kept distinct, only one location can be marked on the tip of the addressee's finger. I argue that differences like this will, over time, give rise to a new set of constraints on the production and reception of TASL signs.
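As a minimal sketch of this kind of constraint: a region's tactually usable surface area bounds its inventory of distinct locations. The numeric values below are invented placeholders, except that the discussion above motivates a single target on the fingertip.

    # Hypothetical upper bounds on distinct locational targets per body region.
    MAX_DISTINCT_TARGETS = {"palm": 5, "forearm": 4, "fingertip": 1}

    def location_inventory_fits(region: str, n_targets: int) -> bool:
        """Would a proposed set of distinct locations fit within this region?"""
        return n_targets <= MAX_DISTINCT_TARGETS.get(region, 1)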
Classifier constructions are deictics in the sense that they integrate characterizing elements that are retrieved from the linguistic system, with deictic elements that are retrieved from the deictic field (see chapter 7). Over time, patterns in retrieval are coordinated in tighter and more restricted ways and language-internal relations adjust to accommodate these restrictions. This is what I am calling deictic integration. The focus of this chapter is the effect of deictic integration on formational constraints in polycomponential signs.
Interactional mechanisms that are driving this process include signal transposition, sign calibration, and sign creation.1 Signal transposition involves the transposition of handshapes onto the body of the addressee, yielding a tactually accessible ground. This process has implications for formational constraints, but is driven by the coordination of the linguistic system and the deictic field. Sign calibration is an interactional process through which participants clarify and adjust signs which have lost their capacity to refer to objects in the immediate environment. This process, in turn, led to the formation of novel signs that would not be predicted given the grammar of VASL, a process I call sign creation. In this chapter, I show how these processes are leading to divergent constraints on the formation of "classifier constructions."
In section 9.2, I provide a brief introduction to classifier constructions in VASL, which, I argue, can be analyzed as composites composed of “characterizing” and “indexical” elements (Morris 1971 [1938]). Iconicity and gesture fall out from these relations and therefore are not essential, definitional components. This approach to sign language classifiers (like many other approaches) departs from canonical understandings of classifiers in spoken languages. Therefore, I follow Slobin et al. (2003) in adopting the term “polycomponential signs.” This term allows for the combination of semiotically distinct elements, without specifying the nature of those elements (e.g. gestural, linguistic, indexical, iconic).
In section 9.3, I show how DeafBlind people created new polycomponential signs in the pro-tactile workshops and I argue that this process is a result of deictic integration. In section 9.4, I compare constraints on location in VASL and TASL. In order to isolate these constraints, I make a clear analytic distinction between social, deictic, and linguistic phenomena, all of which influence the production of signs. For example, social constraints limit possible places of articulation on the body of the addressee by applying social frames of value to communicative acts. In TASL, there are no places of articulation on the groin of the addressee--not because it is difficult to reach, but because it is considered inappropriate to touch the groins of others. Deictic constraints, on the other hand, have to do with the modes of access participants have to the immediate environment via an established set of participant frameworks in a given field.
Distinguishing between social, deictic, and linguistic constraints prevents intrusions of nonlinguistic phenomena on the linguistic analysis. It also provides a principled way of accounting for the role of nonlinguistic processes in the structuring of TASL. Finally, in section 9.4.1, I track the transformation of particular components in polycomponential signs as values are retrieved from a tactile, rather than a visual deictic field. I show how the affordances and limitations of the tactile modality subsequently force changes in production, and how these constraints are applied to new TASL signs. I conclude with some thoughts about potential trajectories for the continued development of TASL.
9.2 Classifier Constructions in VASL
Classifier constructions in signed languages were initially named for their similarity to a subcategory of spoken language classifiers called "verbal classifiers." Spoken language verbal classifiers consist of a morphological element, affixed to the verb, which classifies one of the verb's nominal arguments according to semantic criteria. Consider, for example, the forms below, found in Diegueño, a Native American language spoken in California (Langdon 1970:78, cited in Grinevald 2000:67):
a’mi ... ‘to hang (a long object)’
p’mi ... ‘to carry (like a bucket)’
tumi ... ‘to hang (a small, round object)’
In visual signed languages there are similar constructions. For example, in VASL, a morphological element that looks like the b-handshape (Figure 9.1) can be incorporated into a verbal sign to classify one of its nominal arguments as a flat, rectangular thing. When the
Figure 9.1: A morpheme used to classify objects as flat and rectangular
b-handshape is embedded in a representation of an action involving an object, it systematically draws attention to the flat and rectangular qualities of that object. Therefore, its form is tied to a stable semantic function. However, the movement and location parameters of the verbal element are not stable in the same way. Rather, their formal properties and meanings vary according to dynamics and relations outside of the language.
For example, if the remembered, imagined, or actual location of the table is to the signer’s left, then the activity of “laying” is conveyed by moving the semantic element to the left, toward the remembered, imagined or actual table. This part of the sign often incorporates gestural material. However, the gestural material, upon incorporation, is subject to formational constraints, which are linguistic.
These more context-sensitive dimensions of classifier constructions have often been associated with “iconicity.” For example, following Supalla and Newport (1978), Mandel defines VASL classifiers as “a rule-governed system of iconically-derived morphology that allows signers to generate novel verbs of motion and location with complex meanings” (1981:204).
However, iconicity must be "limited to allow signers to chunk and process material as phonology at the high speeds of linguistic interaction which require choosing between discrete alternatives, with the room for imprecision that that implies" (ibid.:206). Distinctions of direction, distance, and speed are far more limited than what the visual body is physically capable of perceiving and what the musculature can produce in non-linguistic processing. For example, the difference between a 90 degree left turn and a 105 degree left turn cannot be coded in the ASL classifier system because direction is "digitized" in quanta greater than 15 degrees (ibid.:208).
Therefore, under this view, classifier constructions are composed of (1) a semantic element, or stable form-meaning correspondence; (2) an "iconic," gestural component that is coordinated with the semantic element; and (3) analysis of the composite sign to the formational parameters of the language, which allow the addressee to process the sign at linguistic speeds.
In what follows, I argue that in TASL, constructions like these are formed through a coordination of linguistic and indexical elements. Iconicity is understood as an effect of coordination, and is therefore attributed very limited significance in sign creation. Indexicality can be understood in many ways. In this chapter, I am drawing on a specific definition of the term, which I take from the semiotician Charles Morris (1971 [1938]).
In order to account for the relationship of the sign to context, Morris posits a three-way distinction between indexical, characterizing, and universal signs. Indexical signs denote an object and are exemplified by pointing. Characterizing signs denote objects and analyze them in some way, highlighting certain aspects (1971 [1938]:17). In order for an object to be responded to, it must be located in terms of its relevant characteristics, which requires the combination of a characterizing sign and an indexical sign. The characterizing sign provides the determinateness of expectation (if I say "round," you expect something round), and the indexical sign provides the directivity of reference (you know where to direct your attention). Lastly, there must be signs that indicate the relation of these signs to one another and their relation to the class they are members of. These are "universal signs" (ibid.:17).
In Morris's terms, classifier constructions incorporate characterizing and indexical elements. Characterizing elements are coded in conventional handshapes, movements, and locations in the language. Indexical elements allow signers to place these characterizations in spatial configurations, which direct the addressee's attention to referents in particular ways. Relations of resemblance between the characterizing element and the referent only appear after shared modes of access to the referent have been established, and are therefore relatively unimportant for processes of language emergence.2 It is the composite form, which combines indexical and characterizing elements, that is central in the creation of new signs. These composites, which derive, in part, from nonlinguistic phenomena, become signs as they are analyzed to the formational parameters of the language.
The combination of semiotically distinct elements in sign language classifiers departs from canonical understandings of spoken language classifiers (See also Edwards (2012:43-9)). In response to this and other discrepancies, alternate terms have been proposed, including “polycomponential signs,” which has been gaining ground in recent years (e.g. Slobin et al. 2003, Quinto-Pozos 2007, Morgan and Woll 2007, Schembri 2003). Slobin et al. justify their use of “polycomponential signs” as follows:
In [the Berkeley Transcription System], signs that incorporate "classifiers" are treated like other complex signs, which we refer to as polycomponential signs. Like Elisabeth Engberg-Pedersen (1993), Adam Schembri [2003], and others, we seek to represent the range of meaning components, both manual and nonmanual, that co-occur in complex signs. [...] We have chosen to use polycomponential, rather than Engberg-Pedersen's polymorphemic, because we are not ready to determine the linguistic status of each of the components, manual and non-manual, in complex signs. And we have replaced Engberg-Pedersen's verbs and Schembri's predicates, with signs, because the handshape expressions under study are used in verbal, adjectival, and nominal constructions.
The focus of this chapter is a new system used by DeafBlind people to create new signs. These signs often incorporate gestural and linguistic elements into a range of construction types--e.g. adjectival, verbal, and nominal. Therefore, the term “polycomponential” used by Slobin et al. is fitting, and will henceforth be adopted.
9.3 Polycomponential Signs in TASL
During the pro-tactile workshops,3 participants engaged in certain activities that required the creation and use of polycomponential signs. One of these activities was a game where DeafBlind participants were organized into dyads and each dyad was given a bag full of objects--things like old cell phones, toy snakes, and tea strainers. One DeafBlind person would pull an object out, explore it tactually, and then describe it in detail to the other DeafBlind person. When they were done, they handed the object to their partner, who explored it tactually, and then evaluated the description in terms of how well it prepared them for the qualities of the object, or in the terms of the game, whether or not the description “matched” the thing.
This required a formal mechanism for characterizing the object in terms of its relevant and accessible qualities. Participants all started out using VASL constructions for this task. However, these forms often led to frustration, blank stares, confusion, and eventual requests for intervention on the part of the instructors. When Lee intervened, she resolved the problem by introducing constructions like the ones presented below, which I consider new TASL signs.
In contrast to the VASL constructions, TASL signs tend to elicit recognition and participation. This interactional effect can be attributed to two things. First, these signs represent tactile qualities of objects, rather than visual qualities; and second, the composite sign composed of characterizing and indexical elements is analyzed to the formational parameters of TASL rather than those of VASL. This results in a meaningful, perceptible sign that can be distinguished from other signs given tactile production, reception, and modes of access to the immediate environment.
The following series was taken from an interaction between Lee, Allen, and Lina, who were playing the game described above. Allen had been using VASL constructions to describe the object, and Lina could not understand. The object was a phone charger like the one in Figure 9.2. Because Allen and Lina are having trouble communicating, Lee intervenes. She
Figure 9.2: The Car Charger
begins by describing the body of the car charger. She clasps her index finger and thumb around the wrist of the addressee, while holding the addressee’s hand in place. She slides her hand toward the elbow of the addressee (Figures 9.3 and 9.4).
Geometrically, the sign is composed of two circular shapes and a relative spatial relation between them, which together yield a cylindrical shape. The spatial relation is established by holding the hand of the addressee in place, thereby signaling the ongoing relevance of the first circle and anchoring its location relative to the second circle.
From the perspective of the addressee, the cylindrical shape is, at this point, abstract. However, as the interaction continues, this region on the body of the addressee is used to ground relative spatial relations between the body of the car charger and other parts of the car charger such as the cord and the tip, where it is plugged in. The handshapes used to represent these various parts encode meanings that are transferable across contexts (round thing, thing that moves in and out when you push on it, etc.). Therefore, they can be analyzed as characterizing elements, which provide a “determinateness of expectation.”
Figure 9.3: Sketch of Sign Representing Body of Car Charger
On the other hand, relative spatial information about the various parts of the car charger is established in interaction to draw attention to specific features of the object and distinguish it from other objects. In other words, these indexical elements provide the "directivity of reference," which in Morris's view is the function of indexical signs. Together, characterizing and indexical components allow signer and addressee to individuate the body of the charger in terms of its relevant characteristics, and a "match" between the sign and its referent is achieved. This match is a result of integrating characterizing and deictic elements, or what I am calling "deictic integration."
Figure 9.4: Sign Representing the Body of the Car Charger
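One way to schematize the composite structure just described (my own notation, not a formalism from Morris or from this analysis):

    from dataclasses import dataclass, field

    @dataclass
    class Characterizing:
        handshape: str   # conventional form, e.g. a clasped thumb and index finger
        movement: str    # e.g. "slide toward elbow"

    @dataclass
    class Indexical:
        anchor: str      # region on the addressee's body, e.g. "wrist"
        held: bool       # whether contact is held, keeping the ground active

    @dataclass
    class PolycomponentialSign:
        characterizing: list = field(default_factory=list)
        indexical: list = field(default_factory=list)

    # The body of the car charger: two clasped circles (characterizing) anchored
    # at the addressee's wrist and elbow (indexical), with the wrist contact held
    # to ground the spatial relation between them.
    charger_body = PolycomponentialSign(
        characterizing=[Characterizing("thumb-index clasp", "slide toward elbow")],
        indexical=[Indexical("wrist", held=True), Indexical("elbow", held=False)],
    )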
In Figure 9.5, Lee continues by describing the shape of the cord. First, she manipulates the addressee’s hand into a partially open fist. Then she runs her pinky finger through the inside, tracing a tight, spiral pattern on the addressee’s palm (as in Figure 9.6). She continues with this spiral motion, out and away from the addressee’s arm (Figure 9.7). The i-handshape is a conventional VASL handshape used to characterize long, thin things. However, unlike VASL signs that incorporate this handshape, the motion is produced on the inside of the addressee’s hand.
Figure 9.5: Sign Representing Shape of Cord
I encourage the reader to place their pinky finger inside of their partially-closed fist, and in a spiral motion, move from the center to the outside of the fist. If you have a spiral cord, like the one shown in the picture above, pull it slowly through your partially closed hand or move your hand slowly over it. If you have done this, you will notice a tactile resemblance between the sign and its referent. However, in order for this resemblance to appear as such, you must turn your attention to the tactile qualities of the object and the tactile dimensions of the representation. This kind of shift in orientation is possible, but not habitual for visual people, and losing vision does not automatically cause it.
Figure 9.6: Sign Representing Cord. (a) Addressee's hand; (b) Signer's hand
Prior to the pro-tactile movement, DeafBlind people were visual people who could not see very well, if at all. As a result of the movement, embodied sensibilities were reconfigured and formerly visual people became tactile people. In order to effectively direct the attention of a tactile person to a specific characteristic of an object, tactile modes of access must be assumed. Only then, can “resemblance” function as such for both signer and addressee. The primary reason that this form is effective in conveying relevant aspects of its referent, is not that it is iconic, but rather, that it is embedded in a particular deictic field.
Next, Lee continues to hold the hand of the addressee in place. This anchors the previously described car charger body, allowing other aspects of the car charger to be described in relation to it. Constructing polycomponential signs in TASL requires an anaphoric deictic field, organized by tactile modes of access. One of the reasons that VASL polycomponential signs became difficult to perceive was that anaphoric relations were difficult to track against a visible backdrop. These moments of anchoring in a tactile field turn the previously objectified aspect of the charger into the ground against which other aspects are objectified.
You can see this process continue to unfold in the next move, when Lee describes the cord by articulating the spiral motion in an outward trajectory from a tactile point of contact on the elbow of the addressee. This establishes a spatial relationship between the cord and the body of the car charger. The relation is signaled by continuing to hold onto the addressee’s hand, thereby keeping the tactually accessible ground present in the description (Figure 9.7). Finally, she uses the VASL sign “plug-in,” indicating that the spiral shaped portion of the object she has just described is a cord for an electrical device.
Figure 9.7: Representation of Cord Location Relative to Body of Charger
In Figure 9.9, Lee describes the button at the tip of the charger (Figure 9.8) by grasping the index, middle, and ring finger of her addressee. She presses on the tip of the middle finger several times as in Figure 9.10. Imagine yourself exploring this object tactually. As you run your fingers over the body of the charger, and up toward its tip, you encounter a small piece of metal, which gives way to your touch. The most salient thing about this part of the charger, from a tactile perspective, is the fact that it moves when pressed on, while the rest of the charger remains stationary. The sign representing this metal button is, therefore, iconic. However, the assumption that the addressee will explore the object tactually has to do with conventional modes of access, which are organized by deictic, not iconic relations.
Finally, Lina is given the actual car charger to explore tactually. She explores the cord first, then the body of the car charger, and finally, its tip, which she presses on several times. Lee taps on her arm and then on her leg and asks her if the representation matches her experience of the object. Lina says no, so Lee asks her why not. Lina runs her fingers over the body
Figure 9.8: Button at tip of charger
Figure 9.9: The Car Charger Tip
Figure 9.10: Representation of button
of the charger and then pushes down on the button at the tip and says that Lee failed to describe the button. Lee insists that she did describe the button and repeats her previous description (Figure 9.10). Lina laughs and emphatically signs “oh-i-see,” meaning that she understands. But Lina draws Lee’s attention to another feature of the object--a small metal spring on the side of the body of the charger that holds the charger in place once it is plugged in (Figure 9.11). Lee says, “Oh! I didn’t notice that!” In order to describe this
Figure 9.11: Metal springs on car charger
portion of the car charger, Lee isolates the index and middle fingers of both interlocutors and then pushes and releases several times on the sides of the fingers, as in the sketch in Figure 9.12. I encourage you to produce this sign on your own hand or, even better, on someone else’s hand.4 You will notice a feeling that is tactually similar to pressing on small, metal springs. Once again, however, the assumption that the addressee will have tactile, rather than visual, knowledge of the object follows from a certain configuration of indexical relations, and this is a prerequisite for a relation of resemblance to appear. In Figure 9.13,
Figure 9.12: Representation of metal springs
Lee is duplicating the sign--one copy for each addressee.5 At this point, Lina, Lee, and Allen
all agree that the various parts of the description correspond to the various aspects of the object and their combination counts as a legitimate way of representing the tactile qualities of the car charger. This kind of negotiation was common in the pro-tactile workshops. The
Figure 9.13: Lee Duplicates a Representation of the Metal Springs
workshops were experimental and collaborative, and though the instructors had far more experience and were clearly leading the group, all participants contributed to clarifying and adjusting signs to integrate them more seamlessly with their shared experience. Novel signs were evaluated either explicitly (as in this case), or implicitly in interaction (e.g. addressees expressing confusion or irritation, requesting clarification, etc.).
This interactional process, which I call “sign calibration,” is leading to the integration of linguistic and deictic elements, or deictic integration. Deictic integration is, in turn, contributing to an emergent set of constraints for generating polycomponential signs, which diverge from those found in VASL. The remainder of this chapter will examine the nature of those constraints and their relation to corresponding constraints in VASL.
9.4 Constraints on Location in VASL
In VASL, there are restrictions on where signs can be produced. For example, Stokoe (1960) observed that the “zero tab” (or the space in front of the signer) is constrained by motor capacity as well as economy. While it is physically possible to articulate signs in other areas, a restricted area in front of the signer’s body allows for the greatest ease of articulation (2005[1960]:25). Klima and Bellugi sharpen this observation via a comparison with non-linguistic body movements in pantomime.
In free pantomime there are only physiological restrictions on the space used differentially in conveying a message. To mime opening a door, putting on a boot, or picking apples off a tree, a person may walk around, reach down to his feet, or extend his arms high above his head. By contrast, ASL signs in citation form are made within a highly restricted space defined by the top of the head, the waist, and the reach of the arms from side to side (with elbows bent) (1979:51).
The fact that signs are not produced in locations outside of this space, despite the physical possibility of doing so, shows that location is constrained, at the very least, by economy. In addition, there are arbitrary constraints that come into view in a cross-linguistic frame. For example, the back of the head and the underarm are never used in VASL, but in other signed languages they are (Mandel 1981:11).
9.4.1 Implications for Formational Constraints in TASL
The use of locations in TASL, which are never used in VASL,6 suggests a divergence in underlying constraints--some of which follow from conditions of production and reception in a tactile modality, and some from arbitrary and/or nonlinguistic orders. In Figure 9.14, I have highlighted regions on the addressee’s body where polycomponential signs are produced in TASL. Examples of some of these locations are represented in Figures 9.15, 9.16, 9.17, 9.18.
Figure 9.14: Locations on Addressee’s Body Where TASL Signs are Produced
Notice that articulation is not performed on the groin area, the area below the knees, the inner portion or backs of the thighs, the feet, or the front of the neck of the addressee. Some of these restrictions are attributable to principles of economy or motor-perceptual capacity. For example, it is hard to envision a bodily configuration in which the feet of the addressee would be readily accessible to the signer. Likewise, in a standing configuration, the backs of the thighs are hard to reach, and while sitting, they are inaccessible.
Figure 9.15: Examples of Locations on Addressee’s Arm
Figure 9.16: Examples of Locations on Addressee’s Head and Face
Figure 9.17: Examples of Locations on Addressee’s Shoulder and Neck
Figure 9.18: Example of a Location on Addressee’s Back
However, there are also many non-linguistic constraints. The groin of the addressee, for example, cannot be admitted into the linguistic system because it is socially unacceptable to touch this area of the body for routine communicative purposes. The same is true of the inner portions of the thighs. These kinds of constraints derive from particular, historically constituted social fields. For DeafBlind people in the pro-tactile workshops, all tactile contact with the body of the addressee, even in relatively uncontroversial locations such as the arm, required major adjustments in evaluative frames.
In addition, there are constraints on sensory orientation and modes of access that do not derive from the language or from the social field, but from the deictic field. As outlined in Chapter 1, the deictic field includes: (1) “the positions of communicative agents relative to the participant frameworks they occupy”; (2) “The position occupied by the object of reference”; and (3) “The multiple dimensions whereby agents have access to objects” (Hanks 2005b:192-3).
The physical relation of one body to another is organized by participant frames and frameworks. Participant frameworks become conventional, and this leads certain physical relations in interaction to become expectable, such as standard distance between speaker and addressee, relative symmetry in height, reciprocal sensory orientations, etc. In order to identify constraints on production and reception in a given language, observed instances of use must be performed in unmarked interactional contexts.7 For TASL, this kind of regularity in reciprocal, tactile interaction has only begun to emerge over the past 5-7 years. This, in combination with shifts in sensory orientation, has made the emergence of stable, tactile constraints on production and reception possible.
Figure 9.19: The Structure of the Deictic Field
In the deictic field (schematically represented in Figure 9.19), access to objects is grounded in the bodily configurations through which participant frames are realized.8 Therefore, objects are objectified against a background which includes the corporeal sphere occupied by speaker and addressee, as well as many other things such as “common sense,” shared knowledge, etc. There is a shift in perspective that is necessary for grasping this fact. Rather than viewing the body as a producer and receiver of signs, it must be viewed as part of the indexical ground of communicative activity. The body that appears under a deictic perspective interacts in crucial ways with the body that appears under a linguistic perspective, but it is not identical with it, and must be distinguished analytically. As a result of the pro-tactile movement,
relations between the body and objects in the immediate environment snapped to a new set of coordinates organized by tactile, rather than visual modes of access (See Chapter 5). This essentially non-linguistic transformation affects the linguistic system, since signs are among the objects that must be accessed via particular bodily configurations.
The transfer of signs from a visual to a tactile field among DeafBlind people in Seattle can be productively broken into two moments. The first consists of a kind of deictic transposition, which I call “signal transposition.” The second consists of a change in formational constraints on location triggered by this process.
In Figure 9.3, the signer uses a handshape that is similar to the f-handshape in VASL (See Figure 9.20) to characterize the body of the car charger as a small, round thing. However, rather than being produced against the visible backdrop of the signer’s body, it is produced against the tactile backdrop of the addressee’s body. This is what I am calling signal transposition.
Figure 9.20: The “F” Handshape in VASL
The tactile surface of the addressee’s body has different limitations and affordances than the space on and in front of the signer’s body. Therefore, signal transposition triggers changes in constraints on the production and reception of signs. For example, for the curve of the f-handshape to be perceptible tactually, it has to wrap around a curved surface. The curve of the addressee’s forearm lends itself to this function, since it is also curved. However, the index finger and thumb of the signer cannot close entirely around the arm of the addressee, so the handshape must be open, rather than closed, as it is in VASL. In this case, a kind of borrowing can be reconstructed, where a VASL handshape is fit to motor-perceptual constraints in a tactile channel.
However, this kind of link between the visual and tactile forms is not always recoverable. For example, there is no VASL handshape that corresponds in any obvious way to the TASL handshape in Figure 9.9. This has to do, in part, with the fact that tactile, rather than visual, dimensions of objects are being represented. However, it also has to do with motor-perceptual limitations of the tactile modality. The tip of the charger itself is small and round. VASL has ways of characterizing small round objects, which can involve the f-handshape in Figure 9.20. However, the tip of the addressee’s finger has a highly restricted surface area relative to the size of the signer’s hand. It is not clear how this handshape could be used in this location. Instead of using the VASL handshape, the signer presses on the tip of the
finger several times to show how the button at the tip of the charger moves when pressed. This sign is shaped by a tension between articulatory constraints on the signer’s hand and the limitations and affordances of the surface on which signs must be produced.
Analogous tensions undergird the sublexical organization of VASL. Battison (1978) observes that in VASL, the configurations of the hands with respect to one another and the relative positioning of the fingers within each of the hands imply a fairly compact spatial zone of activity. When signs are articulated by moving the whole hand from one location to another, a different spatial scale and correspondingly different motoric and perceptual requirements are involved. The internal features of handshapes maximally occupy the space of an extended 5-hand and a bit of space around it. In contrast, locations require differentiations in a much larger spatial zone that includes the space in front of the torso and the face. This discrepancy between the motor-perceptual activities required to produce and perceive handshapes and those required to produce and receive signs articulated at locations in the signing space calls for some kind of “compensation.”
Compensation is achieved in three ways. First, locational targets in larger spaces must be further apart. Second, the “visual backdrop of the body itself” serves to differentiate locations. Battison writes, “Locations in signing space are not differentiable by relative distance alone, but by their proximity or relations to the gross landmarks of the body--the head, chin, shoulders, waist, etc.” (1978:41). Third, different areas of signing space allow for different levels of articulatory complexity--from less complex to more complex, moving vertically from the waist to the head (ibid.). In support of this last claim, Battison shows that greater numbers of marked handshapes are produced as the location in signing space grows higher, approaching the head (ibid.:42-3).
Thus, it does appear that the vertical location component of signs is systematically restricted in a manner consistent with the need to keep visual elements perceptually distinct. Areas higher in the signing space permit more complex combinations of manual visual elements, both in terms of fineness of location distinctions and the complexity of individual handshapes (ibid.:43).
Addressees tend to fix their gaze not on the hands of the signer, but on the lower part of the signer’s face; therefore, this pattern may also follow from visual acuity. The closer the location is to the central field of vision of the addressee, the more complex and finely differentiated the handshapes can be. Further from this area of high visual acuity, more unmarked handshapes (simpler handshapes) and two-handed signs would be used to increase redundancy in the signal (Siple via Battison 1978:43).
Restrictions on new TASL signs diverge from those described for VASL because the signing space on the surface of the addressee’s body carries different affordances than the signing space in front of the signer’s body. First, it is not necessarily the case that locational targets in larger spaces must be further apart. It seems (so far) that tactile locations within a large body area, such as the back of the addressee, can be just as finely differentiated as on the addressee’s palm without causing perceptual difficulty. Second, it is not clear yet what regions of the body will become most salient in distinguishing locations from one another; however, they are not likely to be the same as those in VASL. For example, from a tactile perspective, the elbow joint is more perceptually salient than the chin, and therefore, is a better candidate for landmark status. Third, in TASL, the palms of the hands, the forearms, and the back of the addressee permit more complexity in handshapes and fineness of location distinctions than either the face of the signer or the face of the addressee. This suggests that the vertical arrangement of articulatory complexity described by Battison does not hold for TASL. Finally, and related to this, areas of greater tactile acuity are not identical to areas of greater visual acuity.
In VASL, the addressee rests their gaze on a particular region of the signer’s body. In contrast, the addressee’s body is not only available to the TASL signer given conventional bodily configurations, but is actively manipulated by her. For example, in Figure 9.21, the signer (right) is manipulating the arm of the addressee (left). First, she raises his arm, and holding his hand in place, she touches three locations on his body. In Figure 9.21a, she touches his shoulder near the outer edge of his collar bone. In Figure 9.21b, she touches the outside of his elbow. In Figure 9.21c, she touches the palm of his hand. This establishes a relative spatial relationship between three geographic locations she is representing. The signer in this
Figure 9.21: Signer Manipulates Addressee’s Arm to Produce Sign
example is Adrijana, one of the instructors of the workshops, who had been developing pro-tactile practices for about 4 years at the time. In Figures 9.22 and 9.23, a less experienced tactile signer (left) is learning to represent relative spatial locations in this way. Over the course of the interaction, he attempts to produce signs by manipulating the addressee’s hands and arms, but he encounters limitations in the mobility of the joints. In Figure 9.22a, he attempts to mark locations on the back of the addressee’s hand, and in doing so, flexes her wrist beyond what is comfortable and has to adjust. In Figure 9.22b, he has the opposite problem, where he encounters the limits of extension in the addressee’s wrist. After this attempt, he leans back, and he and his interlocutor laugh and comment on the awkward position they ended up in. In Figure 9.23, he encounters similar problems, but this time the problems include the shoulder joint as well.
Movements like these, which require hyperflexion or hyperextension of the joints, are not permitted in TASL. They are only found among people with very little or no exposure to pro-tactile practices, and are often followed by laughter and comments about how awkward or uncomfortable it is to produce such signs. These types of constraints are not a question of tactile acuity, but of mobility in the joints of the addressee and the spatial
Figure 9.22: Hyperflexion and Hyperextension of Addressee’s Wrist
Figure 9.23: Hyperflexion of Addressee’s Shoulder
resources that mobility affords for producing and receiving signs.
The fact that the boundaries of the arm, as well as its position relative to the body of the signer, are both resources for producing the sign raises an interesting problem. Is the arm of the addressee serving a strictly perceptual role? Or is there also an articulatory function involved?
In some cases, the answer to this question is less ambiguous. For example, in Figure 9.24, the signer (left) is describing the movement of a snake’s body. She grips Manuel’s arm just below the armpit, and holds onto his wrist. Then she moves each point of contact alternately to produce a snake-like motion in Manuel’s arm. In Figure 9.24a, she moves Manuel’s arm away from her body, and in Figure 9.24b, she moves it back again.
Figure 9.24: Addressee’s Arm as Articulator
This requires motor coordination between signer and addressee. The addressee must be responsive, like a dancer following their partner’s lead. Therefore, motoric constraints on polycomponential signs like these would have to be distributed over the dyad. In addition, there are three, rather than two, articulators involved.
In VASL, there are constraints on articulatory complexity in two-handed signs, which can be succinctly stated as follows: “Maximize symmetry and restrict complexity in the handshape features of the two hands”9 (Eccarius and Brentari 2007:1198).
TASL permits signs that require three articulators, each one with a distinct motor task. This constitutes an increase in complexity that exceeds constraints on two-handed VASL signs. At this time, there are not enough data to attempt a systematic analysis of constraints on complexity in three-handed signs in TASL. However, the fact that signs like this are being produced suggests that the rules for generating polycomponential signs in TASL are diverging in fundamental ways from those in VASL.
9.5 Effects of Deictic Integration on Formational Constraints in TASL
In this chapter, I have shown that new modes of access and orientation and new participant frameworks are leading to the emergence of a new set of formational constraints in TASL. This transformation is occurring in two moments. First, signs are being transposed from a visual to a tactile ground, a process I call signal transposition. This leads the signer to encounter new affordances and limitations on sign production, which, in turn, influence the way signs are distinguished from one another.
In VASL, locational targets must be further apart when they are located on or in front of large body areas, such as the torso. In TASL, this is not the case--locational targets on larger areas, such as the back of the addressee, can be just as close together as in smaller areas, such as the palm of the addressee. This may be related to the fact that the tactile backdrop of the addressee’s body is itself differentiated in ways that differ from the visual backdrop of the signer’s body. From a visual perspective, certain body areas, such as the chin, nose, eyes, etc., are visually salient, and therefore make good “landmarks” which can be used to help distinguish one location from another in a visual modality. From a tactile perspective, the elbow of the addressee is more perceptually salient than the nose or chin of the signer. Therefore, as the language develops, the tactile ground of signs will likely be split into contrastive regions that do not correspond to those found in VASL.
There are also constraints on the formational complexity of handshapes and the fineness of location distinctions in VASL, which do not correspond to emergent constraints in TASL. For example, the palms of the hands, the forearms, and the back of the addressee permit more complexity in handshapes and fineness of location distinctions than the face and head do in TASL. In contrast, complexity increases as you move vertically from the waist to the head of the signer in VASL. In addition, in TASL, the hands and arms of the addressee are manipulated. These manipulations are limited by the mobility in the joints of the signer and addressee, and by the ability of the dyad to coordinate movements. The system is new; however, these kinds of limitations point to emergent cognitive and motoric constraints on manual coordination in TASL, which differ from those found in VASL.
All of this is evidence that new formational constraints are emerging in the tactile system. Some of these constraints, such as limitations on mobility in the joints, may be common to all tactile signed languages, and therefore attributable to the modality itself. Others, such as the body areas within which signs are permitted, might vary across tactile signed languages, and therefore be attributable to social, interactional, or arbitrary constraints. In order to pursue these lines of inquiry, additional tactile languages, which are used in a reciprocal sensory channel, will need to be examined.
With respect to VASL, the most dramatic divergences are found not in the lexicon, but in polycomponential signs, which incorporate both characterizing and indexical elements and are, therefore, more sensitive to context. Constructions like these have been shown to be a new source of lexical items in nearly all signed languages studied to date (Aronoff et al. 2003, McDonald 1982, Engberg-Pedersen 1993, Klima and Bellugi 1979, Schembri 2000, Shepard-Kegl 1985, Zeshan 2003). Therefore, it is expectable that these changes will contribute to a more comprehensive restructuring of TASL at the formational level. These changes are all driven by a process of deictic integration, through which characterizing and deictic elements are coordinated with one another in tighter and more restricted ways over time.
Chapter 10 Conclusion
In this dissertation, I have shown that the grammars of Tactile American Sign Language (TASL) and Visual American Sign Language (VASL) are currently diverging as a result of changes in the social and deictic fields engaged by DeafBlind people in Seattle, Washington. I have argued that this grammatical divergence is a result of contextual integration, which involves the coordination of the linguistic system with the deictic and social fields in which it is instantiated.
I compare the emergence of TASL with three previously documented cases: homesign systems in Philadelphia and Chicago (Goldin-Meadow and Feldman 1977, Goldin-Meadow and Mylander 1983, Goldin-Meadow and Morford 1985), Nicaraguan Sign Language (A. Senghas 1999, A. Senghas and Coppola 2001, Kegl et al. 2001), and Al-Sayyid Bedouin Sign Language (Sandler et al. 2005, 2011, Forthcoming). I show that in all three cases, the emergence of a language-like system corresponds to a tightening of relations between linguistic, deictic, and social phenomena. In the homesign case, deictic and characterizing signs combine in increasingly predictable orders. The reason homesign does not develop into a full-fledged language is that it is not embedded in a viable social field.
In Nicaragua, language emergence is associated with the emergence of spatially modulated verbs. I have argued that spatial modulation in signed languages is the result of deictic integration. Furthermore, I show how the integration of linguistic and deictic systems in Nicaragua was preceded by the establishment of a social field with an internally asymmetric structure. Therefore, I identify the broader phenomenon of contextual integration as a driving force in the emergence of Nicaraguan Sign Language as well.
I have also argued that deictic integration plays an important role in the emergence of Al-Sayyid Bedouin Sign Language (ABSL). ABSL has recently developed a productive morphological process whereby one deictic and one characterizing sign are compounded to produce place names. As these connections have become increasingly conventionalized, the order of the compounded elements has become fixed; the deictic component is word-final. This consistent ordering of elements, in addition to changes and reductions in the movements of the signs, enacts the same kind of tightening of relations between deictic and linguistic phenomena that was noted in the NSL and homesign cases.
In addition, a reconfiguration of the social field among ABSL signers is threatening the viability of the language (Kisch 2012). Over the past 30 years, many changes have taken place, including the establishment of separate schools for deaf and hearing children, changes in marriage patterns, and shifts in the availability of employment (ibid.). These changes are all converging to make ABSL a less legitimate means of position-taking in a viable social field. I have argued that deictic integration is not enough. In order for a full-fledged language to emerge and be sustained, a broader process of contextual integration must transpire, through which linguistic, deictic, and social orders are coordinated with one another in tighter and more restricted ways over time. This means that “a language” is not strictly linguistic. Rather, it coheres in the relations of embedding between linguistic, deictic, and social phenomena.
This approach to language emergence is complementary to those that focus on the innate capacities of the human mind. While those approaches have focused on the role of abstraction in “liberating” language from its contexts of use, I have emphasized the role of integration, through which deictic and social relations are increasingly caught up in, and coordinated by, linguistic processes, and vice versa. Whether the goal is to understand context or to factor it out, practice theory is useful for understanding the emergence of new grammatical systems as influenced by, but distinguishable from, broader socio-historical and interactional processes.
Endnotes
Chapter 1
1 This orthographic representation emerged along with the pro-tactile movement, and has since come into widespread usage.
2 See section 8.2 on page 193 for more on the efficacy of tactile reception of VASL signs.
3 The pro-tactile movement is not an identity movement, nor is its focus language standardization. Rather, its focus is co-presence and the hope of communicating in ways that feel effortless and “natural.” It is also about building a home world that can truly be inhabited. The sighted world cannot be inhabited, but given a strong and intuitive grasp of the tactile world, analogic relations can be established, and the world of the sighted--the broader society in which DeafBlind people live--can be imagined and therefore maneuvered within. Without a home world, the worlds of others cannot begin to be grasped or changed (see chapter 4).
4 The conceptual framework that accounts for this process is not ‘integrationist’ in the sense of Toolan 1999, Harris 2002, or Love 2006. See Edwards (2012:65) for a more detailed discussion of the differences between the two frameworks.
5 See chapter 2 for discussion of three cases in which language-like systems, or full-fledged languages, have emerged, and the role of contextual integration in these processes.
6 Sidnell and Enfield use this term in precisely the opposite way. They mean that as interactants select certain lexico-grammatical resources to accomplish interactional goals, there are consequences for how the interaction unfolds (Sidnell and Enfield 2012:313). I mean that socio-historical changes unfold in a semi-autonomous field, governed by distinct principles of organization. Likewise, interaction is structured by principles which are unique, and therefore, the field in which language-users interact is also semi-autonomous. Finally, languages and the sub-systems they are composed of are also semi-autonomous. Nevertheless, when socio-historical processes affect the structure of the social field, there can be collateral effects for the structure of interaction and for the organization of the language itself.
7 Saussure identifies three aspects of language: langue, parole, and langage. Langue is the formal system, parole is language-in-use, and langage is the whole thing together. Although not unimportant, parole is ultimately left to other disciplines, and Saussure names langue as the proper object of linguistics (1972 [1915]:66). In the approach taken here, formal systems are distinguished from interactional and social processes. However, the semiotic status of a whole language cannot be ascertained from a linguistic, interactional, or social perspective; all three are necessary, and a theory of the relations that obtain between them is required.
8 I have examples of VASL signs produced in this way, however, I am not including frames in the dissertation because I need to protect the identity of these signers.
9 Green (2014) and Goodwin (2000) show many of the ways that radically non-reciprocal linguistic competence can be overcome (or not) via social and interactional means. I am arguing that similar procedures can act not only as a means of circumventing asymmetries, but also as a means of correcting them via augmentation of the linguistic system itself.
10 The authors thank Stephen Anderson, David Perlmutter, and Maria Polinsky for independently raising these questions in person.
11 RJ Senghas has made a similar point with respect to secondhand accounts of the Nicaragua case (2003:272). He notes that Chomsky, in an interview with the BBC, claimed that the Nicaragua case involved the development of a new language based on “no external input.” Senghas points out that this is observably untrue. What was missing was linguistic input, but both socio-cultural and non-linguistic semiotic resources were available to deaf Nicaraguans. Also see Russo and Volterra (2005) and Fusellier-Souza (2006). Kisch (2012) makes similar observations about research on ABSL.
12 This observation also applies to language maintenance and language shift. When a language cannot be used as a legitimate means of position-taking, it is likely to be replaced by one that can. This perspective can be understood in contradistinction to the idea that languages preserve or transmit culture (see Muehlmann 2013:146-69 for discussion).
13 The transmission of the habitus in the Deaf and DeafBlind communities is less straightforward, since most Deaf and DeafBlind people do not have Deaf or DeafBlind parents. The habitus is transmitted within the community, usually in later stages of childhood and beyond. Nevertheless, a Deaf habitus forms and can be recognized. For example, Bahan describes a scene where a father and daughter are sitting in a cafe people-watching. The father tells the daughter to look into the crowd outside and identify the Deaf person among them and she does so successfully, despite the fact that he was not signing. Bahan attributes her success to the fact that she and the man she identifies are both “people of the eye” (2008:83). In the present framework, it is attributable to a shared, visual habitus, which can be identified via habitual modes of orientation, navigation, and comportment.
14 See index in Bühler (2001[1934]:499) for specific page numbers.
15 For more on Bourdieu’s sources in connection with the field concept, (including structuralist thinkers, the Russian formalists, and others), see Hanks 2005a:72.
16 Dignity is therefore a “fieldable” value, while wealth is not. See following sections for more on fieldability.
17 There is a great deal of work on perspective in language, which I will not discuss here. However, see, e.g., Dancygier and Sweetser 2012 and Dudis 2004 for more in-depth discussion of this topic in signed and spoken languages.
18 Saussure says, “All conventional values have the characteristic of being distinct from the tangible element which serves as their vehicle” (1972 [1915]:116-17).
19 See Hanks 2005a:194.
20 Contextualization is an inferential process (i.e. Sperber and Wilson 1986, Levinson 1983), which involves “hypothesislike tentative assessments of communicative intent” (Gumperz 1992:230).
21 Keying involves a change in frame through which an activity is understood, for example, when playful, “biting-like behavior” turns to biting (Goffman 1974:41-4).
Chapter 2
1 See also Zeshan and de Vos (2012) for typological, anthropological, and sociolinguistic factors in the emergence (and in some cases decline) of new signed languages.
2 See Kisch (2012), Zeshan and de Vos (2012), Russo and Volterra (2005), and Fusellier-Souza (2006) for critical commentary.
3 This story was also used to frame an ethical debate about scientific studies of “Genie,” a girl who was deprived of all social and communicative contact for the first 13 years of her life (Rigler 1993, Rymer 1993).
4 These were their ages at the beginning of the study.
5 They explain that the caregivers used both speech and gesture in communicating with their children. Although “gesture and speech might form an integrated communication system” for hearing people, they analyzed the mothers’ communications from a visual perspective, since they took this to be the point of view of the deaf children (Goldin-Meadow and Mylander 1983).
6 Fillmore recognizes the irony in the fact that Benjamin Lee Whorf made the earliest, most forceful case for covert categories, or “cryptotypes” (see Whorf 1956:70-80) in support of linguistic relativity--precisely the opposite of their use in generative grammar, where they were the basis for universals (Fillmore 1968:3).
7 RJ Senghas has made a similar point with respect to secondhand accounts of the Nicaragua case (2003:272). He notes that Chomsky, in an interview with the BBC, claimed that the Nicaragua case involved the development of a new language based on “no external input.” Senghas points out that this is observably untrue. What was missing was linguistic input, but both socio-cultural and non-linguistic semiotic resources were available to deaf Nicaraguans. In firsthand accounts, the picture is much more complex.
8 See also R. J. Senghas, Polich 2005, Fusellier-Souza 2006, and Kisch 2012.
9 Polich emphasizes that “The model, however, is indebted to outside influences and outside precedents, and did not originate with Nicaraguan deaf members. Attitudes, especially from Sweden, Finland, and the United States introduced the philosophy; but starting in 1990, and especially after 1992, it was adopted by the leading members of ANSNIC, who started a campaign to include more sign language in the schools, and to increase use of Spanish/NSL interpreters for deaf persons in daily life. Without the reification of sign language brought to Nicaragua from Costa Rica, the United States, Sweden, and Spain, or without the financial aid and the anti-integrationist perspective of the SDR, it is possible that this model would have been much longer in the making” (ibid.:97). See R.J. Senghas (2003:275-277) for more on the global networks within which deaf Nicaraguans are embedded.
10 They also note a fourth “system,” which is a “pidgin” used between hearing and deaf signers--where “signers view themselves as speaking Spanish, and Spanish speakers view themselves as signing or using Mimicas” (ibid.:182). This phenomenon is recognizable given familiarity with the American Deaf community and is very interesting, but I take it to be on another level of communicative complexity in the sense that it combines the more basic systems. Therefore, I bracket discussions of it in my summary of this research.
11 Since then, similar classes of verbs have been identified in almost every signed language that has been documented (Mathur and Rathmann 2012:137).
12 It is difficult not to put almost every term used to describe spatial modulations in scare quotes since nearly all of them have attracted some kind of controversy. However, when recounting a particular view, I will use the terms put forth by the author of that view. The difficulty, for example, in using the term “affix” here will become clear below.
13 This category has been broken down into at least 5 sub-classes (see Supalla 1986, cited in Padden 1990:119). However, for the sake of brevity, they are not recounted here.
14 lifeprint.com
15 See Mathur and Rathmann 2010 for a more detailed discussion.
16 See Chapter 7 for a more detailed discussion.
17 This suggests something strikingly similar to Liddell’s analysis, despite the fact that Senghas compares spatial modulations to “grammatical endings appended to words in spoken languages,” which are presumably organized according to strictly linguistic principles, and Liddell sees spatial modulations as governed by the universal capacity to create conceptual representations of objects and relations in the world.
18 In some of the earlier work (e.g. Kegl et al. 2001), the various home sign systems that children came into school with were viewed as substrates, which, in the absence of an accessible superstrate, combined with one another to form something like a pidgin. Over time, the pidgin was “elaborated” as it underwent creolization. The word “elaboration” implies an increase in complexity, not a process of abstraction. However, in this work, elaboration is seen as the product of language acquisition. In this process, the innate structures of the language-ready mind act on imperfect, or impoverished input (the home sign systems) to produce something more complex and systematic. Therefore, there is no construct established for explaining the interaction of linguistic and non-linguistic phenomena, unless one considers the innate structures of the language-ready mind to be non-linguistic, which as was discussed in section 2.1.3, cannot be the case.
19 There was one report of a deaf man who had befriended another deaf man from a neighboring settlement in the 1960s. In addition, one of the deaf members of the first generation of signers had partial literacy in Arabic. However, aside from these very limited kinds of exposure, deaf signers were not exposed to any external signed or spoken languages (ibid.).
20 In the 1960s, a few deaf children were enrolled in a school for one year, where they acquired some basic Arabic literacy and were exposed to Jordanian Sign Language (ibid.).
21 Although, Kisch points out that many different social factors must be considered in constructing boundaries between generations. While others focus on biological lines of descent, Kisch argues for the importance of social networks, including education, and marriage and labor patterns (2012).
22 See chapter 8 for a brief introduction to the phonology of VASL.
23 See Brentari (1998), Perlmutter (1992), Sandler (1989), and Sandler and Lillo-Martin (2006) for proposed feature hierarchies in more established signed languages.
24 Both examples were given in precisely these terms in a lecture at the University of California, Berkeley by William Hanks on 2/18/09.
25 This insight draws on a synthesis of Peirce’s notion of indexicality and Spinoza’s concept of “memory” (1985 [1677]:465-467). Spinoza argues that bodies (in the most general, philosophical sense) are affected by one another (which the mind perceives) in the present, but associations build up in the present through past affections as well. If the human body has been affected by more than one body, and if the mind later imagines one of those bodies, the others will be recollected as well (ibid.:465). This is what memory is for Spinoza: “a certain connection of ideas involving the nature of things which are outside the human Body--a connection that is in the Mind according to the order and connection of the affections of the human Body” (465). This order that emerges out of the connections and affections of the human body is distinct from the order that emerges from the intellect. The intellect is the mode through which “the Mind perceives things through their first causes, and which is the same in all men” (ibid.:466). Because these two orders meet in the mind, our thoughts do not proceed from thing to thing based on the likeness between them, in themselves, but because of the association they have with each other according to the order of connections and affections of the body (ibid.). The mind perceives affections of the body, but it also perceives the ideas of those affections (ibid.:468). And so, “the Mind and the Body are one and the same Individual, which is conceived now under the attribute of thought, now under the attribute of extension” (467).
Chapter 3
1 This chapter draws on research that was conducted in several visits to Seattle: 2 months of fieldwork in the summer of 2006, 4 months of fieldwork in the spring of 2008, and 1 year of sustained dissertation fieldwork in 2010 and 2011. During each visit, I conducted interviews with DeafBlind people, people involved in their community and its development, and people who make decisions that affect DeafBlind people, such as city planners, advocates, and state officials. I also videorecorded interaction between DeafBlind people and visual interpreters as well as interaction between DeafBlind people. Lastly, I collected fieldnotes during each visit, sometimes written during an event I was observing and/or participating in, and sometimes written afterwards. Interviews and videorecordings of interaction were subsequently transcribed and analyzed. Nearly all of the DeafBlind people who were directly involved in my research were born Deaf and lost their vision slowly. Everyone who was involved in the pro-tactile workshops has Usher Syndrome, which is a genetic condition that causes congenital deafness and Retinitis Pigmentosa, which leads to a slow degeneration of the retina. The effect is a slow loss of vision from the periphery in. Rates of vision loss vary. However, the idea behind the pro-tactile movement is that anyone who cultivates tactile sensibilities will find a pro-tactile field of engagement easy to engage. Acquisition of the practices and of the language will feel natural and easy compared to the languages used by hearing and sighted people. Therefore, people who grew up hearing and lost both their hearing and sight--as is the case for people with Usher Syndrome Type III, or people who are injured in mid-life and become both deaf and blind--will not be excluded in any way from the pro-tactile movement or the tactile world it is generating.
2 On the topic of myths, taboos, and stereotypes about blind people, Frances A. Koestler (1976) describes the dual figuration of blind people in the popular imagination. On the one hand, they are figured as tragic and dependent, worthy of pity and charity. On the other, they are imbued with magical or extra-sensory powers (ibid.:7). She cites many examples, including a young woman who, it was claimed, could distinguish colors by smell (ibid.:5), or another who could distinguish them by touch (ibid.:6). Another woman could purportedly read the Bible, thanks to her “eyeless sight” (ibid.). These and many more cases were shown to be hoaxes or misunderstandings in the end, and, Koestler implies, have more to do with entertaining the public than with the lives of blind people. Koestler points out that “what most people continue to misunderstand, is that both acuteness of hearing and sensitivity to touch in blind people are not compensatory gifts of nature but the products of long, hard concentration and training” (ibid.:4). In other words, the sensory orientations of blind people are the outcome of practices which incorporate sensory dimensions. They are not reducible to a natural outcome of sensory capacity or change. Recognition of this fact is the starting point of this chapter. However, I am not only interested in showing that this is the case, but also in how particular practices were shaped by social and historical forces, and how these developments set the stage for the pro-tactile movement.
3 Giddens’ distinction between “social integration” and “system integration” is useful here. In both cases, the notion of integration implies a “reciprocity of practices” which can be understood as “involving regularized relations of relative autonomy and dependence between the parties concerned” (1979:76). Reciprocity does not require “cohesion” but rather, demands asymmetries of various kinds. Social integration applies at the level of face-to-face interaction and it concerns reciprocity between actors (ibid.:76-7). System integration applies on the level of social systems, institutions, and other collectivities and it concerns reciprocity between groups (ibid.:77). The aim of the pro-tactile movement was to establish reciprocity among actors in face-to-face interaction in order to establish system integration with the broader society. One of the mechanisms of social integration is the “reflexive monitoring of conduct” (Giddens 1979:77). As we will see, this is precisely what led to new forms of social integration in the Seattle DeafBlind community as part of the pro-tactile movement.
4 See chapter 1 for a discussion of habitus.
5 No sighted people were allowed, apart from the research crew, which included three videographers, one of whom was the ethnographer. During one class, a few select sighted people were invited to give DeafBlind people the chance to try out their pedagogy. Ultimately, the goal was to slowly invite sighted people back in, insofar as they were open to cultivating tactile sensibilities and learning to do things the “DeafBlind way.”
6 There are many historical developments, important events, people, and issues that I was made aware of during the course of my research. However, I am highly selective in what I include here. I only address those early events and dynamics that are important for understanding how communication conventions among DeafBlind people developed. I do not include anything about the history of Seabeck camp, for example, which deserves an entire chapter of its own in the overarching history of the DeafBlind community. I include very little about the development of DBSC between the time it was founded and the time the pro-tactile movement was initiated there. I would like to thank everyone who shared their memories of these times, and I plan to incorporate those memories into a separate historical project to be pursued at a later date.
7 This information was accessed in 2011.
8 This date was taken from a timeline compiled by an administrator currently working at the Lighthouse who was also involved in the earliest stages of the DeafBlind program.
9 I found the original hand-drawn matrix in a box of pictures and old newsletters and such at the Lighthouse, while I was conducting fieldwork. It was hand-written and faded and was charmingly informal for its important role in the history of the community.
10 The term was originally taken from AADB, but has diverged since then as it has developed in Seattle.
11 DVR, DDD, and DSB.
12 People contrast this time with the increasingly professionalized role that interpreters have now. Back then, they thought of themselves as political allies, fighting for civil rights, first, and interpreters second. Now, this would likely be seen as a conflict of interest and a breach of the code of ethics on the part of interpreters.
13 A well-known Deaf interpreter with native command of Visual ASL and a flair for eloquent, artistic renderings.
Chapter 4
1 See Chapter 1 for a discussion of habitus.
2 For example, in 2006, I conducted a series of interviews aimed at understanding what makes a good SSP, or visual interpreter. A DeafBlind person who had been involved for many years in training interpreters told me the following: “Really, you can’t train SSPs. [...] You can’t fix a bad attitude or a difficult personality. You can teach them what their attitude should be like, but if they can’t really internalize it, and make it part of who they are, then they will fail. There are habitual ways of being that are very difficult to change. [...] It has to do with whether the person sees themselves as above DeafBlind people or sees themselves as their equal. If they see themselves as superior to DeafBlind people, then it’s never going to work out to try to train them. But really, most of the SSPs who are really good, who have a good attitude are also successful elsewhere and they leave the community to pursue other opportunities. The ones who are iffy at best are the ones we see consistently. [...] I think the only way to recruit the good SSPs is to acquire enough money to pay them well. But then, I’m sure it’s not only money.”
3 I have heard the term “pod” applied within the community to capture the scope of communication norms. Small groups form, which are composed of sighted and DeafBlind people, and within those small kin or kin-like networks, communication conventions develop. For Adrijana, her “pod” was important at this stage, because the people in it knew how to communicate with her and had a shared vision for the kinds of communication practices that should spread. This was seen by some as “favoritism” since she was essentially hiring her friends. But for Adrijana, it was largely a communication issue. Tactile communicative practices had become conventional enough within her pod that affect could circulate. She saw this as an essential part of moving the organization forward and reaching the people it was supposed to serve.
4 Another situation in which DeafBlind people have communicated directly with one another has been in families where DeafBlind people had older siblings who also had Usher Syndrome, or among couples who were both DeafBlind. One sighted person talked about going to a pro-tactile workshop in the summer of 2011, and as she was learning some of it, thought, “Who does this? Joe and Ellen [a DeafBlind, tactile couple] and whoever they’re talking to do that all the time. Also Jack and Eileen [who were siblings and were both DeafBlind] used to do that all the time--if I told Jack something, he would tell Eileen at the same time. Not if I was talking to Eileen, but if I was talking to Jack and Jack wanted to include Eileen. They did that all the time--maybe Jack would do that when he had vision, and then when he lost his vision, he continued doing it.” In both of these cases it seems that when there were two DeafBlind people, one person would copy what a third participant was saying, thereby occupying the position of the sighted interpreter. This is not the same thing as signing with two dominant hands to two addressees at the same time. The latter became the convention for three-way communication in a pro-tactile context. When there were more than three people conversing, though, one person (the one to the right of the signer) would relay what was being said to the person to their right. Although communication practices like these--between DeafBlind siblings and spouses--were not identical to emergent pro-tactile conventions, they surely had an influence on them. Several of the participants in the pro-tactile classes that were held in 2010 and 2011 had siblings with Usher Syndrome. It is highly likely that they drew on their experiences in building the communicative repertoire that has since become more widely shared.
5 See chapter 3 on the history of the Lighthouse and the history of “sheltered workshop.”
Chapter 5
1 I also have been doing this kind of work for many years, and I incorporate my own intuitions about this work here.
2 See Chapter 1.
3 See chapter 1 and also Hanks 1990:137-187 via Goffman 1981, Levinson 1987, C. Goodwin 1981, M.H. Goodwin 1985.
Chapter 6
1 I recorded 120 hours of video data during these workshops. This video corpus was subsequently indexed, selectively transcribed, and thematically organized. This, in addition to detailed ethnographic field notes recorded in a variety of contexts, and the intuitions I have developed over many years of involvement in the Seattle DeafBlind community, form the empirical basis of the argument presented in this chapter.
2 O&M training has been in place long before the pro-tactile movement. However, the pro-tactile social field favors people who can orient to their immediate environment without support from sighted persons. Therefore, the kinds of changes that occur in people working with Marcus became more desirable, and contributed to the overall shift in the deictic field.
3 Marcus contracts with the Seattle Lighthouse for the Blind. Funding for his services comes from Metro King County and from grant funds secured by two employees of the Lighthouse (one of whom is DeafBlind). State agencies, such as the Department for Vocational Rehabilitation and the Department of Services for the Blind also occasionally contract with the Lighthouse, but this money comes with restrictions that don’t make sense for DeafBlind people, so Marcus avoids relying on it too heavily. Unlike other O&M instructors in Seattle, Marcus uses Visual American Sign Language to communicate with his clients. In these sessions, I walked with Marcus behind his students. As they practiced, Marcus narrated their actions, explaining what they were doing right, what they were doing wrong, why he was or was not going to intervene, etc. I took detailed notes as we walked (while holding an umbrella, so my paper didn’t get too wet). I drew little maps of what was happening in moments of trouble. When I went home afterwards, I typed up these notes, and drew the diagrams in a Word document.
4 These alternate constructs are discussed in more depth in the following chapter.
5 Goffman was working with research conducted by Gumperz and Cook-Gumperz over a period of several years, where an attempt was made to list the motivations and functions of instances of code-switching in a particular bilingual setting. The list included: direct or reported speech, selection of recipient, interjections, repetitions, personal directness or involvement, new and old information, emphasis, separation of topic and subject, and discourse type (ibid.:127). In the process, they discovered “code-switching-like” behavior that didn’t involve the switching of actual codes. This is the initial point of departure for Goffman and it leads him to the broader category of “footing,” which describes shifts in alignments between the speaker, his “projected self,” and his utterances--whether he is play acting, serious, unsure of the truth of his statement or not, and so on.
6 which includes configurations like the one pictured in Figure 6.1 as a category member (but also variants in which participants were standing).
7 The rule was explained in terms of 4-way stop-signs. The person in contact with the right hand of the signer was responsible for copying their utterance for the fourth participant. This only began to be fluidly accomplished by a few of the workshop participants at the very end of the workshops.
8 Participants in this frame are wearing blindfolds. This was common during the workshops. It was a way of cultivating tactile sensibilities by blocking out disruptive, and often useless, visual stimuli.
Chapter 7
1 Eye-gaze, lips, and other body parts can also function this way in signed languages, just as they can in spoken languages (Enfield 2001, Sherzer 1973, Kendon 2004, Wilkins 2003, also see Meier and Lillo-Martin 2010:347-353).
2 See Section 6.2 in Chapter 6.
3 This claim has been generalized across signed languages. However, Berenz (2002) claims that if eye gaze is taken into account, in LSB, there is a three-way distinction between first, second, and third person forms (cited in Pfau 2011:154).
4 See Pfau 2011 and Kita 2013 for more on pointing.
5 Also see Cormier 2002 and de Vos 2012 for interesting discussions about the integration of pointing signs into the grammar of signed languages.
6 By Deaf Interpreter, I mean a Deaf person with a native command of VASL, who works as an interpreter. Not an “interpreter for the Deaf.”
7 A reception signal, for Bühler, is the inverse of an “action signal” such as an imperative.
8 See Dancygier and Sweetser (2012) for more on viewpoint in language in multiple modalities.
9 Schutz’s reciprocity of perspectives can be summed up as follows: “I take it for granted--and assume my fellow man does the same--that if I change places with him so that his ‘here’ becomes mine, I would be at the same distance from things and see them in the same typicality as he actually does; moreover, the same things would be in my reach which are actually in his. (All this vice versa).” (Schutz 1970:183).
10 The featural analysis is a more recent contribution to this long-standing debate; however, Mathur and Rathmann (2012) also find enough similarities in their approach and Padden’s original (1983) analysis to group them together under the “featural” heading.
11 Mathur elsewhere appeals to “referential space” (2000:75). That term would be more consistent with the perspective put forth here.
12 See p.143 for a breakdown.
13 This is modeled on Jackendoff’s architecture of grammar.
14 For example, as TASL develops further, it will be interesting to see whether phonological adjustment rules can be posited, and what their relation is to those found in VASL.
15 Signs that retrieve values exclusively from the deictic field, as opposed to combining grammatical and deictic elements, are “gestures.” But gesturing is only one kind of semiosis that retrieves values from the deictic field and the explanatory power of the deictic field extends far beyond gesture.
16 See Section 6.2 in Chapter 6.
17 I have outlined the pointing finger to make it more visible.
18 The emphasis comes from the strength of movement, which is not visible in the frame grabs, but is visible in the video clip from which the frame grabs were taken.
19 This sign could mean “measure,” “inch,” or “size.” I have glossed it as “inch” because Nina specifies this meaning by fingerspelling i-n-c-h later in the interaction.
20 Nina and Lee’s descriptions were shown to two users of ASL who live in California and have no contact with the Seattle DeafBlind community. Neither of them understood Lee’s description and both of them understood Nina’s description (which were shown to them in that order). The first treated Lee’s description as a degraded version of visual ASL and told me that Nina’s description was obviously clearer and that Lee’s description “needed work.” The second person said that she couldn’t understand Lee’s description, and in particular found all of the signs articulated on the hand of the addressee unfamiliar and unintelligible. Even with the benefit of understanding some of the signs Lee used, she couldn’t tell what was going on in the interaction or what Lee was trying to get across. Then I showed her Nina’s description, and she understood with no difficulty that what Lee had been describing was a measuring tape.
21 Signal transposition, while not standard in basic participant frameworks, is imaginable if, for example, two Deaf people are trying to communicate in the dark. I have been told that children in Deaf residential schools sometimes signed on each other’s bodies, or used tactile reception, after the lights had been turned off at night. However, this form is not imaginable under any circumstances, even in non-standard participant frameworks.
Chapter 8
1 This research included the Seattle DeafBlind community, but it also included other sites, such as Boston and Washington, D.C.
2 This is a sketch of a sketch. The original sketch was published in Klima and Bellugi (1979).
3 See chapter 6 and also Hanks 1990:137-187 via Goffman 1981, Levinson 1987, and Goodwin 1981.
4 I also use the term “basic participant frameworks,” which I treat as interchangeable with the term “participant frames.”
5 It is unclear whether tactile reception would have been comparably accurate prior to the pro-tactile movement in Seattle. There are many differences between Reed et al.’s research subjects and the members of the Seattle DeafBlind community who participated in this research. However, it would be interesting, taking these differences into account, to test whether or not accuracy is significantly higher now that a new, tactile language has begun to emerge.
6 The status of gesture as “supplementary” is contentious in current frameworks, and I do not mean to support Sapir’s position on this point.
7 Insofar as the fingerspelled word has not been borrowed into VASL. Also see Mulrooney (2002) for a more detailed discussion.
8 Stokoe compares facial expressions to suprasegmental features of spoken languages such as stress and pitch. He considers these “metaspectual” parts of the language important, but he does not attend to them further.
9 All VASL examples from this section were taken, with permission, from an online ASL dictionary: www.lifeprint.com.
10 Also see Battison 1978:37 for further evidence.
11 In total, 69 Type I signs produced by three different signers comprise this set.
12 At one point, an instructor signs culture in a three-person configuration, and she does so by alternating her dominant and non-dominant hands, repeating the sign sequentially rather than producing both C-handshapes simultaneously. Both because the addressee has access to the non-dominant hand and because there is a temporal lag between the production of that sign and the next, class may be distinguishable from culture. But this is the kind of complicated inference that would be demanded less by a truly tactile language. Later in this same stretch of interaction, the same signer starts to sign culture a couple of times and replaces it with other signs instead of completing the sign. For example, she compares the DeafBlind way of doing something with the way Deaf sighted people would do it: she signs deaf, then starts to sign culture, but signs “at Gallaudet” instead.
13 In this set, 141 Type I signs, produced by four people, were analyzed.
14 Out of 51 tokens, 12 were not duplicated. Four of these were borderline between Type I and Type II signs, like interpret and how. Although the dominant hand is active and the non-dominant hand is passive in these signs, the movement of the active hand affects movement in the passive hand in a way that is probably perceptible tactually. Other than this difference, the two articulators are mirrors of one another.
15 Eight of the 12 signs that were not duplicated were the sign right, and two were the sign can’t. These signs have been duplicated both by alternation and by dropping in other instances.
16 See Chapter 9 for a detailed account of these constraints.
Chapter 9
1 See Chapter 7.
2 However, iconicity may be very important for language acquisition, or for other processes.
3 See Chapter 4.
4 Make sure to ask for permission first.
5 Up until this point, she has been alternating between addressees, producing a description for Lina while Allen waited or listened in as best he could, and then the reverse.
6 Locations in the TASL examples above include the addressee’s palm, the addressee’s wrist, the addressee’s arm, the inside of the addressee’s elbow, the tip of the addressee’s middle finger, and the outer edge of the middle phalanx on the addressee’s index and middle fingers.
7 See section 8.2 on page 193 for more on this.
8 This is a reproduction of a figure from Hanks 2005a.
9 See Chapter 8 for more on this.
Bibliography
Aronoff, Mark, Meir, Irit, Padden, Carol and Sandler, Wendy (2003). Classifier Constructions and Morphology in Two Sign Languages. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum and Associates.
Aronoff, Mark, Meir, Irit, Padden, Carol and Sandler, Wendy (2004). In Geert Booij and Jaap van Marle (eds.), Yearbook of Morphology. The Netherlands: Kluwer.
Aronoff, Mark, Meir, Irit, Padden, Carol and Sandler, Wendy (2008). Holophrasis, compositionality and protolanguage. Special Issue of Interaction Studies, 133-149.
Bahan, Benjamin (2008). Upon the Formation of a Visual Variety of the Human Race. In H-Dirksen L. Bauman (ed.), Open Your Eyes: Deaf Studies Talking. Minneapolis: University of Minnesota Press.
Barthes, Roland (1984). The Rustle of Language. Berkeley: University of California Press.
Battison, Robbin (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bloom, Lois (1970). Language development: Form and function. Cambridge, MA: MIT Press.
Bourdieu, Pierre (1990 [1980]). The Logic of Practice. Stanford: Stanford University Press.
Brentari, Diane (1998). A Prosodic Model of Sign Language Phonology. Cambridge, Massachusetts: MIT Press.
Brentari, Diane, Coppola, Marie, Mazzoni, Laura and Goldin-Meadow, Susan (2012). When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory 30, 1-31.
Bühler, Karl (2001 [1934]). Theory of Language: the representational function of language. Amsterdam/Philadelphia: John Benjamins.
Bynon, Theodora (1977). Historical Linguistics. Cambridge: Cambridge University Press.
Channon, Rachel (2004). The Symmetry and Dominance Conditions Reconsidered. Chicago Linguistic Society, 44-57. Chicago.
Chomsky, Noam (1965). Aspects of the Theory of Syntax. Cambridge: MIT Press.
Clark, John Lee (2014). Pro-Tactile: Bursting the Bubble. In Where I Stand: On the Signing Community and My DeafBlind Experience. Minneapolis: Handtype Press.
Cleve, John Vickery Van (2007). The Academic Integration of Deaf Children: A Historical Perspective. In John Vickery Van Cleve (ed.), The Deaf History Reader, 116-135. Washington, DC: Gallaudet University Press.
Coleman, Linda and Kay, Paul (1981). Prototype Semantics: The English Word Lie. Language 57, 26-44.
Collins, Steven and Petronio, Karen (1998). What Happens in Tactile ASL? In Ceil Lucas (ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities, 18-37. Washington, D.C.: Gallaudet University Press.
Collins, Steven Douglas (2004). Adverbial Morphemes in Tactile American Sign Language. Interdisciplinary Studies: Graduate College of Union Institute and University.
Comrie, Bernard (1989 [1981]). Language Universals and Linguistic Typology. Chicago: The University of Chicago Press.
Coppola, Marie and Senghas, Ann (2010). Getting to the point: How a simple gesture became a linguistic element in Nicaraguan signing. In Donna J. Napoli and Gaurav Mathur (eds.), Deaf Around the World. Oxford: Oxford University Press.
Cormier, Kearsy Annette (2002). Grammaticization of Indexic Signs: How American Sign Language Expresses Numerosity. Linguistics 204. Austin: The University of Texas at Austin.
Crystal, D. (1987). The Cambridge Encyclopedia of Language. Cambridge: Cambridge University Press.
Dancygier, Barbara and Sweetser, Eve (2012). Viewpoint in Language: A Multimodal Perspective. New York: Cambridge University Press.
Danesi, Marcel (1993). Vico, Metaphor, and the Origins of Language. Bloomington: Indiana University Press.
Descartes, Rene (1985 [1647]). The Passions of the Soul, Part One. In The Philosophical Writings of Descartes. Cambridge: Cambridge University Press.
Dorian, N.C. (1981). Language Death: The Life Cycle of a Scottish Gaelic Dialect. Philadelphia: University of Pennsylvania Press.
Dudis, Paul G. (2004). Body Partitioning and Real Space Blends. Cognitive Linguistics 15, 223-238.
Eccarius, Petra and Brentari, Diane (2007). Symmetry and Dominance: A cross-linguistic study of signs and classifier constructions. Lingua 117, 1169-1201.
Edwards, Terra (2012). Sensing the Rhythms of Everyday Life: Temporal integration and tactile translation in the Seattle Deaf-Blind Community. Language in Society 41.
Enfield, Nick (2001). Lip Pointing? A Discussion of Form and Function with Reference to Data from Laos. Gesture 1, 185-212.
Enfield, Nick J. (2009). Composite Utterances. In The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
Engberg-Pedersen, Elisabeth (1993). Space in Danish Sign Language: the semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum Press.
Fauconnier, Gilles and Turner, Mark (1998). Conceptual Integration Networks. Cognitive Science 22, 133-187.
Feldman, Heidi, Goldin-Meadow, Susan and Gleitman, L. (1978). Beyond Herodotus: The creation of a language by linguistically deprived deaf children. In A. Lock (ed.), Action, Symbol, and Gesture: the emergence of language. New York: Academic Press.
Fillmore, Charles (1975). An Alternative to Checklist Theories of Meaning. Berkeley Linguistics Society, 123-131. Berkeley: eLanguage.
Fillmore, Charles (1976). Frame Semantics and the Nature of Language. Annals of the New York Academy of Sciences 280, 20-32.
Fillmore, Charles J. (1968). The Case for Case. In Emmon Bach and Robert T. Harms (eds.), Universals in Linguistic Theory, 1-90. New York: Holt, Rinehart and Winston.
Friedman, Lynn (1977). Formational properties of ASL. In Lynn Friedman (ed.), On the Other Hand. NY: Academic Press.
Fusellier-Souza, I. (2006). Emergence and development of sign languages: from a semiogenetic point of view. Sign Language Studies 7, 30-56.
Gal, Susan and Irvine, Judith T. (1995). The Boundaries of Languages and Disciplines: How Ideologies Construct Difference. Social Research 62, 976-1001.
Giddens, Anthony (1979). Central Problems in Social Theory: Action, Structure and Contradiction in Social Analysis. Berkeley and Los Angeles: University of California Press.
Goffman, Erving (1964). The Neglected Situation. American Anthropologist 66, 133-136.
Goffman, Erving (1974). Frame Analysis: An Essay on the Organization of Experience. Boston: Northeastern University Press.
Goffman, Erving (1981). Footing. In Forms of Talk. Oxford, UK: Basil Blackwell.
Goldin-Meadow, Susan (2010). Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua. Human Development 53, 303-311.
Goldin-Meadow, Susan and Feldman, Heidi (1977). The Development of Language-Like Communication Without a Language Model. Science 197, 22-24.
Goldin-Meadow, Susan and Morford, Marolyn (1985). Gesture in Early Child Language: Studies in Deaf and Hearing Children. Merrill-Palmer Quarterly 31, 145-176.
Goldin-Meadow, Susan and Mylander, Carolyn (1983). Gestural Communication in Deaf Children: Noneffect of Parental Input on Language Development. Science 221, 372-374.
Goodwin, Charles (1981). Conversational Organization: Interaction Between Speakers and Hearers. New York: Academic Press.
Goodwin, Charles (2000). Gesture, Aphasia, and Interaction. In David McNeill (ed.), Language and Gesture. Cambridge: Cambridge University Press.
Goodwin, Marjorie (1985). Byplay: The Framing of Collaborative Collusion. Annual Meeting of the American Anthropological Association. Washington, D.C.
Green, Elizabeth Mara (2014). The Nature of Signs: Nepal’s Deaf Society, Local Sign, and the Production of Communicative Sociality. Ph.D. Thesis. The University of California, Berkeley.
Grinevald, Colette (2000). A morphosyntactic typology of classifiers. In G. Senft (ed.), Systems of nominal classification. Cambridge: Cambridge University Press.
Groce, Nora Ellen (1985). Everyone Here Spoke Sign Language: Hereditary Deafness on Martha's Vineyard. Cambridge, MA: Harvard University Press.
Gumperz, John J. (1992). Contextualization and Understanding. In Alessandro Duranti and Charles Goodwin (eds.), Rethinking Context, 229-252. Cambridge: Cambridge University Press.
Haiman, John (1985). Introduction. In Natural Syntax: Iconicity and Erosion, 1-18. Cambridge: Cambridge University Press.
Hanks, William F. (1990). Referential Practice: Language and Lived Space among the Maya. Chicago: The University of Chicago Press.
Hanks, William F. (1996). Language and Communicative Practice. Boulder: Westview Press.
Hanks, William F. (2005a). Pierre Bourdieu and the Practices of Language. Annual Review of Anthropology 34.
Hanks, William F. (2005b). Explorations in the Deictic Field. Current Anthropology 46, 191-220.
Hanks, William F. (2009). Fieldwork on Deixis. Journal of Pragmatics 41, 10-24.
Hanks, William F. (2013). Counterparts: Co-presence and ritual intersubjectivity. Language and Communication 33, 263-277.
Harman, Gilbert (ed.) (1982). On Noam Chomsky. Amherst: University of Massachusetts Press.
Harris, Roy (2002). The Language Myth in Western Culture. Richmond, Surrey: Curzon Press.
Hockett, Charles F. (1960). The Origin of Speech. Scientific American.
Hulst, Harry van der (1996). On the Other Hand. Lingua 98, 121-143.
Jackendoff, Ray (1990). Semantic Structures. Cambridge: MIT Press.
Jakobson, Roman (1971 [1939]). Signe Zéro. In The Collected Writings of Roman Jakobson, 211-219.
Keating, Elizabeth and Mirus, Gene (2003). Examining Interactions Across Language Modalities: Deaf Children and Hearing Peers at School. Anthropology and Education Quarterly 34, 115-135.
Kegl, Judy, Senghas, Ann and Coppola, Marie (2001). Creation through Contact: sign language emergence and sign language change in Nicaragua. In Michel DeGraff (ed.), Language Creation and Language Change: creolization, diachrony, and development. London: MIT Press.
Kendon, Adam (2004). Gesture: Visible Action as Utterance. New York: Cambridge University Press.
Kisch, Shifra (2008). “Deaf Discourse”: The Social Construction of Deafness in a Bedouin Community. Medical Anthropology: Cross-Cultural Studies in Health and Illness 27, 283-313.
Kisch, Shifra (2012). Demarcating generations of signers in the dynamic sociolinguistic landscape of a shared sign-language: The case of the Al-Sayyid Bedouin. In Ulrike Zeshan and Connie de Vos (eds.), Sign Languages in Village Communities. Berlin: de Gruyter.
Klima, Edward S. and Bellugi, Ursula (1979). The Signs of Language. London: Harvard University Press.
Koestler, Frances A. (1976). The Unseen Minority: A social history of blindness in the United States. New York: McKay.
Kuschel, Rolf (1973). The Silent Inventor: The Creation of a Sign Language by the Only Deaf-Mute on a Polynesian Island. Sign Language Studies 3, 1-27.
Labov, William (1972). The Study of Language in Its Social Context. In Sociolinguistic Patterns. Philadelphia: University of Pennsylvania Press.
Lakoff, George (1987). Women, Fire, and Dangerous Things. Chicago: University of Chicago Press.
Lakoff, George and Johnson, Mark (1980). Metaphors We Live By. Chicago and London: The University of Chicago Press.
Lane, Harlan, Hoffmeister, Robert and Bahan, Ben (1996). A Journey into the Deaf World. San Diego: Dawn Sign Press.
Levinson, Stephen C. (1983). Pragmatics. Cambridge: Cambridge University Press.
Levinson, Stephen C. (1987). Putting Linguistics on a Proper Footing: Explorations in Goffman's Concepts of Participation. In P. Drew and A. Wootton (eds.), Goffman: An Interdisciplinary Appreciation, 161-227. Oxford: Polity Press.
Liddell, Scott K. (2000). Blended Spaces and Deixis in Sign Language Discourse. In David McNeill (ed.), Language and Gesture. Cambridge: Cambridge University Press.
Liddell, Scott K. (2003). Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Love, Nigel (2006). Language and history: integrationist perspectives. London: Routledge.
Mandel, Mark Alan (1981). Phonotactics and Morphophonology in American Sign Language. Linguistics 323. Berkeley: The University of California, Berkeley.
Mathur, Gaurav (2000). Verb Agreement as Alignment in Signed Languages. Dissertation. Massachusetts Institute of Technology.
Mathur, Gaurav and Rathmann, Christian (2010). Verb agreement in sign language morphology. In D. Brentari (ed.), Sign Languages: A Cambridge Language Survey, 173-196. Cambridge: Cambridge University Press.
Mathur, Gaurav and Rathmann, Christian (2012). The features of verb agreement in signed languages. In R. Pfau, M. Steinbach and B. Woll (eds.), Handbooks of Linguistics and Communication Sciences on Sign Languages, 136-157. Berlin: Mouton de Gruyter.
Mayberry, Rachel I. (1992). The cognitive development of deaf children: Recent insights. In S.J. Segalowitz and I. Rapin (eds.), Handbook of Neuropsychology. Amsterdam: Elsevier.
McCawley, James D. (1976). Syntax and Semantics 7: Notes from the linguistic underground. New York: Academic Press.
McDonald, B. (1982). Aspects of the American Sign Language Predicate System. Buffalo: University of Buffalo.
Meier, Richard P. (1990). Person Deixis in American Sign Language. In Susan D. Fischer and Patricia Siple (eds.), Theoretical Issues in Sign Language Research. Chicago: The University of Chicago Press.
Meier, Richard P. and Lillo-Martin, Diane (2010). Does Spatial Make It Special? On the Grammar of Pointing Signs in American Sign Language. In Donna B. Gerdts, John C. Moore and Maria Polinsky (eds.), Hypothesis A/Hypothesis B: Linguistic Explorations in Honor of David M. Perlmutter. London: MIT Press.
Meier, Richard P. and Lillo-Martin, Diane (2012). Response: The apparent reorganization of gesture in the evolution of verb agreement in signed languages. Theoretical Linguistics 38.
Meir, Irit (2002). A cross-modality perspective on verb agreement. Natural Language and Linguistic Theory 20, 413-450.
Milroy, James (2001). Language ideologies and the consequences of standardization. Journal of Sociolinguistics 5, 530-555.
Morgan, Gary and Woll, Bencie (2007). Understanding sign language classifiers through a polycomponential approach. Lingua 117, 1159-1168.
Morgan, Hope E. and Mayberry, Rachel I. (2012). Complexity in two-handed signs in Kenyan Sign Language. Sign Language & Linguistics 15, 147-174.
Morris, Charles (1971 [1938]). Foundations of the Theory of Signs. Chicago: University of Chicago Press.
Muehlmann, Shaylih (2013). Where the River Ends: Contested Indigeneity in the Mexican Colorado Delta. Durham: Duke University Press.
Mulrooney, Kristin J. (2002). Variation in ASL fingerspelling. In Ceil Lucas (ed.), Turn-taking, fingerspelling, and contact in signed languages. Washington, D.C.: Gallaudet University Press.
Napoli, Donna Jo and Wu, Jeff (2003). Morpheme structure constraints on two-handed signs in American Sign Language: notions of symmetry. Sign Language & Linguistics 6, 123-205.
Newport, Elissa (2001 [1999]). Reduced Input in the Acquisition of Signed Languages: Contributions to the Study of Creolization. In Michel DeGraff (ed.), Language Creation and Language Change: Creolization, Diachrony, Development, 161-178. Cambridge, Massachusetts: MIT Press.
Nonaka, Angela M. (2007). Emergence of an Indigenous Sign Language and a Speech/Sign Community in Ban Khor, Thailand. Los Angeles: University of California, Los Angeles.
Nuccio, Jelica and Smith, Theresa B. (2010). Providing and Receiving Support Services: Comprehensive Training for Deaf-Blind Persons and Their Support Service Providers. Robert I. Roth (ed.). Seattle, WA.
Nyst, Victoria (2007). A descriptive analysis of Adamorobe Sign Language (Ghana). PhD Dissertation. University of Amsterdam.
Padden, Carol (1990). The Relation Between Space and Grammar in ASL Verb Morphology. In C. Lucas (ed.), Proceedings of the Second International Conference on Theoretical Issues in Sign Language Research. Washington, D.C.: Gallaudet University Press.
Padden, Carol A. (1983). Interaction of Morphology and Syntax in American Sign Language. Ph.D. Thesis, Linguistics. San Diego: The University of California, San Diego.
Padden, Carol A. and Perlmutter, David M. (1987). American Sign Language and the architecture of phonological theory. Natural Language and Linguistic Theory 5, 335-375.
Peirce, Charles Sanders (1955/1940 [1893-1910]). Logic as Semiotic: The Theory of Signs. In Justus Buchler (ed.), Philosophical Writings of Peirce. New York: Dover.
Perlmutter, David M. (1992). Sonority and Syllable Structure in American Sign Language. Linguistic Inquiry 23.
Petronio, Karen and Dively, Valeria (2006). YES, #NO, Visibility, and Variation in ASL and Tactile ASL. Sign Language Studies 7.
Pfau, Roland (2011). A point well taken: On the typology and diachrony of pointing. In Donna J. Napoli and Gaurav Mathur (eds.), Deaf Around the World. Oxford: Oxford University Press.
Polich, Laura (2005). The Emergence of the Deaf Community in Nicaragua. Washington, D.C.: Gallaudet University Press.
Quinto-Pozos, David (2002). Deictic Points in the Visual-Gestural and Tactile-Gestural Modalities. In Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages, 442-467. Cambridge: Cambridge University Press.
Quinto-Pozos, David (2007). Why Does Constructed Action Seem Obligatory? An Analysis of Classifiers and the Lack of Articulator-Referent Correspondence. Sign Language Studies 7, 458-506.
Rathmann, Christian and Mathur, Gaurav (2002). Is verb agreement the same crossmodally? In Richard P. Meier, Kearsy Cormier and David Quinto-Pozos (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press.
Reed, Charlotte M., Delhorne, Lorraine A., Durlach, Nathaniel I. and Fischer, Susan D. (1990). A Study of the Tactual and Visual Reception of Fingerspelling. Journal of Speech, Language, and Hearing Research 33, 786-797.
Reed, Charlotte M., Delhorne, Lorraine A., Durlach, Nathaniel I. and Fischer, Susan D. (1995). A study of the tactual reception of Sign Language. Journal of Speech and Hearing Research 38.
Rigler, David (1993). Letter to the Editor. The New York Times.
Rochester, Junius (2004). Seattle's Best-Kept Secret: A History of the Lighthouse for the Blind. Seattle: Tommie Press.
Russo, Tommaso and Volterra, Virginia (2005). Comment on “Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua.” Science 309.
Rymer, Russ (1993). Genie: A Scientific Tragedy. New York: HarperCollins.
Sadock, Jerrold M. (1985). Autolexical Syntax: a proposal for the treatment of noun incorporation and similar phenomena. Natural Language and Linguistic Theory 3, 379-439.
Sandler, Wendy (1989). Markedness in American Sign Language handshapes: a componential analysis. In H.G. van der Hulst and J. van de Weijer (eds.), HIL Phonology Conference. Leiden: Leiden University Press.
Sandler, Wendy (1993). Hand in hand: The roles of the nondominant hand in Sign Language Phonology. The Linguistic Review 10, 337-390.
Sandler, Wendy, Aronoff, Mark, Meir, Irit and Padden, Carol (2011). The Gradual Emergence of Phonological Form in a New Language. Natural Language and Linguistic Theory, 503-543.
Sandler, Wendy, Aronoff, Mark, Padden, Carol and Meir, Irit (Forthcoming). Language Emergence: Al-Sayyid Bedouin Sign Language. In Nick Enfield, Paul Kockelman and Jack Sidnell (eds.), Cambridge Handbook of Linguistic Anthropology. Cambridge: Cambridge University Press.
Sandler, Wendy and Lillo-Martin, Diane (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sandler, Wendy, Meir, Irit, Padden, Carol and Aronoff, Mark (2005). The Emergence of Grammar: Systematic Structure in a New Language. Proceedings of the National Academy of Sciences of the United States of America 102, 2661-2665.
Sapir, Edward (1949 [1934]). The Grammarian and His Language. In David Mandelbaum (ed.), Selected Writings of Edward Sapir in Language, Culture, and Personality, 564-568. Berkeley: University of California Press.
Sapir, Edward (1995 [1927]). The Unconscious Patterning of Behavior in Society. In Ben Blount (ed.), Language, Culture, and Society, 29-42. Long Grove, Illinois: Waveland.
Saussure, Ferdinand de (1972 [1915]). Course in General Linguistics. New York: McGraw Hill.
Schembri, Adam (2003). Rethinking ‘classifiers’ in signed languages. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages, 3-34. Mahwah, NJ: Erlbaum.
Schembri, Adam, Jones, Caroline and Burnham, Denis (2005). Comparing Action Gestures and Classifier Verbs of Motion: Evidence from Australian Sign Language, Taiwan Sign Language, and Nonsigners' Gestures without Speech. Journal of Deaf Studies and Deaf Education 10, 272-290.
Schick, Brenda (1990). Classifier Predicates in American Sign Language. International Journal of Sign Linguistics 1, 15-40.
Schutz, Alfred (1970). On Phenomenology and Social Relations. Chicago and London: The University of Chicago Press.
Scott, Robert A. (1969). The Making of Blind Men: A Study of Adult Socialization. New York: Russell Sage Foundation.
Senghas, Ann (2000 [1999]). The Development of Early Spatial Morphology in Nicaraguan Sign Language. In S.C. Howell, S.A. Fish and T. Keith-Lucas (eds.), The Proceedings of the Boston University Conference on Language Development. Boston: Cascadilla Press.
Senghas, Ann (2010). The Emergence of Two Functions for Spatial Devices in Nicaraguan Sign Language. Human Development 53, 287-302.
Senghas, Ann and Coppola, Marie (2001). Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar. Psychological Science 12.
Senghas, Richard (2003). New Ways to be Deaf in Nicaragua: Changes in Language, Personhood, and Community. In L. Monaghan, K. Nakamura, C. Schmaling and G.H. Turner (eds.), Many Ways to be Deaf: International, Linguistic, and Sociocultural Variation, 260-282. Washington D.C.: Gallaudet University Press.
Shepard-Kegl, Judy (1985). Locative relations in American Sign Language: Word formation, syntax, and discourse. Cambridge: MIT.
Sherzer, Joel (1973). Verbal and Nonverbal Deixis: The Pointed Lip Gesture among the San Blas Cuna. Language in Society 2, 117-131.
Sidnell, Jack and Enfield, Nick J. (2012). Language Diversity and Social Action. Current Anthropology 53.
Silverstein, Michael (1996). Monoglot "Standard" in America: Standardization and Metaphors of Linguistic Hegemony. In D. Brenneis (ed.), The Matrix of Language: Contemporary Linguistic Anthropology. Boulder, CO: Westview.
Slobin, Dan I., Hoiting, Nini, Kuntze, Marlon, Lindert, Reyna, Weinberg, Amy, Pyers, Jennie, Anthony, Michelle, Biederman, Yael and Thumann, Helen (2003). A cognitive/functional perspective on the acquisition of “classifiers”. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Sign Languages, 271-296. Mahwah, NJ: Erlbaum.
Sperber, Dan and Wilson, D. (1986). Relevance. Cambridge: Harvard University Press.
Spinoza, Baruch (1985 [1677]). Descartes' Principles of Philosophy. In Edwin Curley (ed.), The Collected Works of Spinoza. Princeton, NJ: Princeton University Press.
Stokoe, William, Casterline, Dorothy and Croneberg, Carl (1965). A Dictionary of American Sign Language on Linguistic Principles. Silver Spring, Maryland: Linstok Press.
Stokoe, William C. (2005 [1960]). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Journal of Deaf Studies and Deaf Education 10.
Supalla, Ted (1982). Structure and acquisition of verbs of motion and location in ASL. Unpublished Doctoral Dissertation. San Diego: University of California, San Diego.
Supalla, Ted (1986). The classifier system in American Sign Language. In C. Craig (ed.), Noun classes and categorization, 181-214. Amsterdam: John Benjamins.
Taub, Sarah (2001). Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Toolan, Michael (1999). Integrationist linguistics in the context of 20th century theories of language: Some connections and projections. Language and Communication 19, 97-108.
Trudgill, Peter (2008). Colonial dialect contact in the history of European languages: On the irrelevance of identity to new-dialect formation. Language in Society 37, 241-280.
Urciuoli, Bonnie (1995). Language and Borders. Annual Review of Anthropology 24, 525-546.
Vos, Connie de (2012). Sign-Spatiality in Kata Kolok: how a village sign language of Bali inscribes its signing space. PhD Thesis. Nijmegen: Radboud University.
Washabaugh, William (1991). Providence Island Sign: A Context-Dependent Language. Anthropological Linguistics 20.
Wilkins, David (2003). Why Pointing with the Index Finger Is Not a Universal (in Sociocultural and Semiotic Terms). In Sotaro Kita (ed.), Pointing: Where Language, Culture, and Cognition Meet, 117-215. Mahwah, NJ: Lawrence Erlbaum Associates.
Yuasa, Etsuyo and Sadock, Jerry M. (2002). Pseudo-subordination: a mismatch between syntax and semantics. Journal of Linguistics 38, 87-111.
Zeshan, Ulrike (2003). ‘Classificatory’ Constructions in Indo-Pakistani Sign Language: Grammaticalization and Lexicalization Processes. In Karen Emmorey (ed.), Perspectives on Classifier Constructions in Signed Languages, 113-141. London: Erlbaum.
Zeshan, Ulrike and Vos, Connie de (eds.) (2012). Sign Languages in Village Communities. Boston/Berlin: de Gruyter.