The difference intersubjective grammar makes in protactile DeafBlind communities
Terra Edwards
University of Chicago
Department of Comparative Human Development
1. INTRODUCTION
Over the past several decades, linguists have established an empirically grounded case for the existence of “engagement systems”—a type of grammatical system that has the special role of facilitating intersubjective engagement (Evans et al., 2018; Landaburu, 2007; Hyland, 2005; Desclès, 2009, among others). This work has yielded a typological framework for analyzing grammatical mechanisms that track, compare, and engage “the attentional and epistemic states of interlocutors” (Evans et al., 2018:110). In other words, engagement systems encode a kind of “grammaticalized intersubjectivity” (Evans et al., 2018:113). Evans et al. (2018:110) specify three main types of engagement systems based on their primary function: (1) reference to the immediate environment (e.g. deictics); (2) reference to the discourse environment (e.g. definite vs. indefinite, ‘the’ vs. ‘a’ in English); and (3) ‘event-depicting’ propositions. An example, given by Landaburu (2007) and reported in Evans et al. (2018:113), involves a pair of auxiliaries in Andoke, an isolate language of the Colombian Amazon; the contrast is illustrated in (1) below. According to Evans et al. (2018), the auxiliaries “are made up of two parts: the first element encodes the dimension of engagement—the relative access of speaker and hearer—and the second element marks subject agreement (i.e. who is undertaking the activity; in this case, the day or the sun itself, which is encoded in a third person singular inanimate subject).” There are two additional engagement terms, and all together the system forms a 2 x 2 matrix (addressee knowledge and speaker knowledge); “[n]o descriptive sentence can be constructed without employing one element from the engagement set” (p. 114).
(1) a. “The day is dawning (as we can both see).”
    b. “The day is dawning (as I witness, but which you were not aware of).”
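The logic of such a system can be sketched schematically. The following is a minimal illustration, in Python, of a 2 x 2 engagement matrix of the kind described above; the auxiliary labels (AUX_A–AUX_D) are placeholders rather than actual Andoke forms, and the content of the two cells not exemplified in (1) is inferred from the 2 x 2 description.

```python
# A schematic model of a 2 x 2 engagement matrix of the kind Evans et al.
# (2018) describe for Andoke. AUX_A..AUX_D are placeholder labels, not
# actual Andoke auxiliaries.
ENGAGEMENT_MATRIX = {
    # (speaker has access, addressee has access) -> engagement marker
    (True, True): "AUX_A",    # (1a): "as we can both see"
    (True, False): "AUX_B",   # (1b): "as I witness, but you were not aware of"
    (False, True): "AUX_C",   # inferred: addressee-only access
    (False, False): "AUX_D",  # inferred: neither party has direct access
}

def mark_engagement(speaker_access: bool, addressee_access: bool) -> str:
    """No descriptive sentence can be built without one element of the set."""
    return ENGAGEMENT_MATRIX[(speaker_access, addressee_access)]

# Example (1a): "The day is dawning (as we can both see)."
assert mark_engagement(True, True) == "AUX_A"
```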
It is clear, given the English translation in example (1), that epistemological alignment can be facilitated by language, whether or not there are special grammatical systems dedicated to that function. It has also been amply demonstrated that semiotic resources of all kinds can be integrated with language to achieve similar ends (DuBois, 2007; Dudis et al., 2020; Enfield and Sidnell, 2014; Fenlon et al., 2019; Green, 2014a; Goldin-Meadow and Brentari, 2017; Goodwin et al., 2007; Haviland, 2014; Heritage, 2012; Kusters, 2017; Kusters, 2020; Mesh and Hou, 2020; Murphy, 2005; Núñez and Sweetser, 2006; Streeck, 2015; Shaw, 2019). Given this, Rumsey et al. (2014) and Evans et al. (2018) have asked: What difference does grammar, as such, really make when there are so many other resources available?
This article addresses that question by analyzing interactions between DeafBlind people in Seattle, Washington at a time when shifting cultural and historical pressures were giving rise to new grammatical systems that target intersubjective coordination (Edwards and Brentari, 2021). Many members of these communities were born sighted, acquired American Sign Language (ASL) as children, and became blind slowly over the course of many years. As that process unfolded, visual communication became increasingly untenable. Historically, this problem was addressed via increased dependence on sighted interpreters. However, in the early 2010s, two DeafBlind leaders in Seattle initiated the “protactile” movement, which was built on the idea that all human activity can be re-routed through tactile channels. Vision and hearing are not necessary. Since then, there has been a shift away from visual “access” via interpreters or otherwise (Clark, 2021) and an explicit push, instead, toward maximizing the potential of tactile channels— not only for language and communication (Clark and Nuccio, 2020), but for life in general (Clark, 2019; Granda and Nuccio, 2018; McMillen, 2015).
It is well-known that linguistic signs can be routed and re-routed through diverse infrastructural, material, technological, and sensory channels (Barker and Nakassis, 2020; Friedner and Helmreich, 2012; Gershon, 2017; Harkness, 2014; Hull, 2012; Inoue, 2004; Keating and Mirus, 2003; Kockelman, 2010; Larkin, 2013; Lemon, 2018; Russell, 2020; Shankar and Cavanaugh, 2012). And going in reverse, it has also been shown that the (re)-channeling of linguistic signs affects the internal organization of the grammar, so that over time, languages come to anticipate the dimensions of the environment that have intersubjective affordances for physically, historically, culturally, and geographically situated speakers (Bühler, 2001; Cooperrider et al., 2016; Diessel and Coventry, 2020; Edwards, 2014; Evans, 2003; Forker, 2020; Hanks, 1990; Sicoli, 2016). Studying intersubjective engagement at a time when a new deictic system was beginning to emerge generates opportunities to understand how the re-channeling of linguistic signs can affect the internal organization of grammatical systems.
This article also contributes to a growing body of work on interaction and communication among DeafBlind people in communities in and outside of the United States (Collins and Petronio, 1998; Mesch, 2001; Mesch, 2013; Quinto-Pozos, 2002; Collins, 2004; Petronio and Dively, 2006; Mesch et al., 2015; Checchetto et al., 2018; Iwasaki et al., 2018). This work has shown that pragmatic aims such as asking a question, identifying a referent, or signaling a bid for a turn can be accomplished by DeafBlind people via linguistic and non-linguistic means (see Willoughby et al., 2018 for an overview). Because visual signed languages, such as ASL, are not fully perceptible via touch (Reed et al., 1995), the majority of pragmatic mechanisms that have been documented are non-linguistic or modifications of a visual language. For example, in ASL, Quinto-Pozos (2002) reports an avoidance of, and restricted range of functions for, pointing signs. Iwasaki et al. (2018) describe how DeafBlind signers of Auslan manage turns at talk without the benefit of nonmanual features such as eye gaze, eyebrow movements, and facial expressions that sighted Auslan signers depend on in performing corresponding communicative functions. Petronio and Dively (2006) also report a higher frequency of the words “yes” and “no” in discourse, which they attribute to a lack of access to non-manual expressions that usually do that pragmatic work, such as head nods and eyebrow movements. In protactile DeafBlind communities in the U.S., a new tactile language has begun to emerge under pressures exerted by the protactile movement (Edwards, 2014). Protactile language is perceptible through touch (Edwards and Brentari, 2020), and it also retrieves values from an environment organized along tactile lines (Edwards, 2017; Edwards and Brentari, 2021). Rather than finding non-linguistic ways to compensate for lack of access to language, protactile people have a new language at their disposal.
One of the first things to emerge in this language was a new engagement system. In what follows, I analyze interactions where this system is applied and ask how it differs from pragmatic or physical strategies that could be used instead.
*In this article, I begin by analyzing data that were collected in the early stages of the protactile movement. In 2010, I attended and videorecorded a series of protactile workshops, led by two DeafBlind instructors for 11 DeafBlind participants, for a total of 10 weeks. Later, in 2016, I asked three protactile DeafBlind people, in groups of two, to give one another directions to various locations nearby, and videorecorded both the act of direction-giving and the addressee’s attempts to find the target, in order to elicit deictics and test their efficacy. In 2019, I repeated a sub-set of these tasks with 4 dyads, composed of 8 DeafBlind people who did not know protactile language, and analyzed the recordings as I had analyzed the protactile data. For the purposes of this paper, I reviewed these data and isolated interactions where epistemic alignments were attempted by way of deictic reference. In analyzing those interactions, I drew not only on what was present in the videorecordings, and what is known from prior linguistic analysis (Edwards, 2014; Edwards, 2017; Edwards and Brentari, 2021), but also on prior historical research, semi-structured ethnographic interviews in protactile communities (70 interviews in total), and more than 30 months of participant observation. This combination of methods allowed me to understand not only how grammatical systems were applied moment-to-moment to facilitate intersubjective engagement in the interaction (or not), but also how those activities accrued social and political significance and effectiveness for participants in a particular historical moment.*
In Section 2, I begin with a brief summary of the relevant linguistic structures and how they differ from corresponding structures in visual languages, as previously reported in (Edwards and Brentari, 2020). I then move on, in Section 3, to analyze interactional sequences where those linguistic structures are instantiated by protactile people to facilitate convergence on referents in the immediate environment. In Section 4, I show how similar referential tasks are accomplished by non-protactile DeafBlind people without the benefit of special grammatical resources.
The question motivating these analyses is: If intersubjective work can be accomplished without grammatical resources, why should scholars interested in intersubjectivity care about grammar at all? Isn’t it just one of many available semiotic resources? I argue that the only way to answer that question in a satisfactory manner is to foreground local, historically and socio-politically embedded processes. Given this framing, the question becomes: For whom, and in what circumstances, do engagement systems make a difference, or not? In Section 5, I argue that engagement systems play a crucial role in a particular historical moment in the Seattle DeafBlind community. I conclude in Section 6 by proposing an interdisciplinary approach to typologies of engagement systems that can account not only for how they are structured, but also why they matter for the people who use them.
2. ENGAGEMENT SYSTEMS IN PROTACTILE COMMUNITIES
New grammatical systems that target intersubjective engagement are emerging in communities of protactile, DeafBlind signers in the U.S., including a conventional pointing or “deictic” system, used to identify the locations of referents (“locatives”) and to individuate referents against a horizon of alternate possibilities (“demonstratives”) (Edwards and Brentari, 2021). While research on the emergent deictic system is ongoing, attested locative values include “path” vs. “discrete” and demonstrative values include “foregrounded” vs. “backgrounded” (Edwards, 2015). In order to understand how these meanings are expressed and distinguished from one another, it is helpful to review some key findings reported in Edwards and Brentari (2020) regarding phonological patterns in protactile language.
Unlike ASL, where signs are produced with the two articulators of the signer, protactile language has four potential articulators: the hands and arms of Signer 1 and the hands and arms of Signer 2 (co-animator). The incorporation of the listener’s body into the articulatory process has many consequences for the internal structure of the language, which begin with a crucial observation by Granda and Nuccio (2018) that in ASL, signs are produced on, and in front of, the body of (one) signer, or in “air space”. In air space, the relative locations of signs are perceived against the backdrop of the signer’s body. Receiving ASL through touch, one has access to the hand of the signer, but not the visual backdrop that is necessary for making relevant distinctions. For example, the ASL signs SECRET and SELF are differentiated based on their relative proximity to aspects of the signer’s body. SECRET is produced at the chin of the signer (Fig. 1a) and SELF is produced at the chest (Fig. 1b).
[[Figure 1. Pictures showing the differentiation of the ASL sign SECRET and the sign SELF (Hochgesang et al., 2018)]]
In contact space, signs are easily and consistently perceived by the addressee against the backdrop of their own body. For example, in Fig. 3d Signer 1 (left) combines a protactile locative (“press”) with a conventional ASL verb meaning “to put” in order to explain where he had put a block on the table. Together, they mean “put-here”. Just prior to this in the unfolding interaction, he had established the location of the table by tracing a square on the upturned palm of Signer 2 and fingerspelling “table”. Referring back to that space, “put-here” would be understood to mean, “put here near the corner of the table”. As discussed below, the relative spatial relations in the description are clear because they are perceived by Signer 2 against the proprioceptive backdrop of their own body. Recruiting the addressee’s body as part of the articulatory apparatus unlocks the proprioceptive channel, thereby generating more material for the linguistic system to operate on. It also, however, generates a problem for the language, since the articulators of Signer 1 and Signer 2 must somehow be coordinated in an efficient and effective manner.
Edwards and Brentari (2020) argue that early in the emergence of protactile phonology, the language resolved this problem by establishing conventional ways of inviting Signer 2 to contribute to the co-articulation of signs. They show that the conventionalization of these mechanisms involved assigning specific linguistic tasks to four articulators (“A1”–“A4”, as in Fig. 2), in the same way that the two hands in visual signed languages (“H1” and “H2”) are assigned consistent and distinct tasks (Battison, 1978). Signs can also be produced using a single articulator on the body of the addressee; however, analyzing those signs would not reveal how the four articulators are coordinated. Therefore, we looked specifically at four-handed constructions used to express complex relational meanings, which we call “proprioceptive constructions,” or “PCs”.
PCs always include four types of functional units, which are produced in a particular temporal sequence (Edwards and Brentari, 2020). The first type of unit initiates the PC, telling Signer 2 that they will need to take an active articulatory role in what comes next. In Fig. 2a-b, Signer 1 (left) initiates the PC by using her dominant hand (A1) to tap on the back of Signer 2’s non-dominant hand (A4), prompting him to select a specific form. The “initiate” produced by Signer 1 (author) alerts Signer 2 (animator) that their active participation in the articulatory process is required and gives specific instructions for the next step in articulation of the PC.
*See Edwards and Brentari (2020) for attested categories of Initiate and all other superordinate PC categories.*
Once the PC has been initiated, “contact space” has been activated. The next step is to generate a meaningful and phonologically constrained space, where further information can be conveyed. That space, which is actively produced by Signer 2, is called the “proprioceptive object,” or “PO”. In Fig. 2c, Signer 2 produces PO-PLANE with his dominant hand (A2), which he was instructed to do in Fig. 2b.
The PO is, from one perspective, an important part of the phonological organization of the language (i.e. how articulation and perception are constrained in systematic ways). From another perspective, it is important for understanding the emergent deictic system because it constitutes a meaningfully structured space to, and through which, deictics can refer. For example, once a PO-PLANE like the one in Fig. 2c has been produced, Signer 1 can locate relations between referents in the immediate environment in a diagrammatic fashion on that plane, as well as pathways to the referent from “here”.
The third task in producing a PC is to maintain the active contact space generated by the PO using a category we call Prompt-to-Continue (PTC). PTCs tell Signer 2, “Leave this hand here. There is more to come”—or in the case of PTC-PUSH, “relax this hand, we are done with it.” For example, in Fig. 2d, after Signer 2 has produced the requested PO (using A2), Signer 1 grips the PO (using A3) and holds onto it for the remainder of the PC. This gripping action is an example of a PTC unit. PTCs are significant for understanding protactile deictics because they maintain the meaningfully structured space through which attention is modulated.
The fourth and final task in producing a PC is to draw attention to and characterize referents that correspond to aspects of the PO by producing combinations of movement and contact that convey information about the size, shape, location, or movement of an entity. These units are called “Movement Contact Types,” or “MCs”. For example, in Fig. 2e, Signer 1 (right) uses A1 (her right hand) to trace a line from the palm of Signer 2’s right hand (A2) to the inside of the elbow. Fig. 2e shows the end of an MC-SLIDE describing a long, rectangular object.
*Within the context of the PC, the terms prompt and select should be understood as implying categorical perception. I do not mean, for example, that Signer 1 prompts Signer 2 to select any unit they choose in the co-construction of meaning. The distinction between “conveyor” of information and “receiver” of information, or “speaker” and “addressee,” is as clear in PT as it is in any language. What these terms are meant to capture is the fact that PT signers do not assume that “handshapes” produced by Signer 1 will be perceptible to Signer 2. Rather, handshapes are treated as indeterminate clues for what Signer 2 should “select” from an inventory of conventional “PO” categories. For people who have a command of protactile grammar, such inventories are readily accessible, while those who do not have a command of the grammar would be reduced to a more particularized process of mimicry, based on incomplete and partially perceptible input. One protactile expert compared the experience of trying to co-produce PCs with a non-PT DeafBlind signer to trying to sing in a large auditorium with a broken microphone. You do your part in initiating the construction, but the signal dies in transit due to faulty equipment, i.e. no PO is selected. Others have explained that the rhythm of articulation is slowed so much when this occurs that they can’t think, and whatever they were about to say is not worth the effort when people are mimicking rather than selecting. Observations like these from PT language-users can be taken as ethnographic evidence that Signer 2 is expected to have independent linguistic resources for their side of the articulatory process. In particular, they should have an inventory of POs cognitively accessible, which can be selected according to the prompts they receive from Signer 1. Edwards and Brentari (2020) have identified three clear patterns in how PCs are articulated: First, there is a constraint on order. The functional units described above must unfold in sequence: Initiate, PO, PTC, MC. Second, there is a redundancy rule: Information introduced in the MC must incorporate and contextualize information introduced by the PO. The MC cannot be used to introduce new information. Finally, each unit is assigned consistently to one of the four articulators (A1–A4). In other words, faced with the complex task of coordinating four articulators, PT signers now know what to do with their hands, when. The protactile deictics I focus on in this article are produced within a PC structure, and are therefore subject to these constraints. We are also beginning to identify some important patterns in how constraints on articulation may affect productivity. For example, preliminary analysis suggests that initiates involve different levels of articulatory complexity, which correspond inversely to the number of POs they can elicit. The most articulatorily simple option, I-TOUCH, can only initiate a new PC within the context of an already-established PC. On the opposite end of the spectrum, the most articulatorily complex option, I-PROMPT, can initiate all attested POs, and seems to be crucial for the process of creating new POs. Between these two extremes, there are several additional values. For example, INITIATE-HOLD, which is only slightly more articulatorily complex than I-TOUCH, can (unlike I-TOUCH) elicit POs that are not already established in a prior PC, but those POs are limited to two types (PO-PLANE and PO-CYLINDER). Further analysis is needed to confirm these preliminary observations.*
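To summarize the articulatory constraints just described, the following sketch encodes a candidate PC as a sequence of (unit type, articulator) pairs and checks it for well-formedness. This is a minimal illustration in Python, assuming the unit order and articulator assignments described above; it is my own schematic rendering, not a formalism proposed by Edwards and Brentari (2020), and the semantic redundancy rule on MCs is not modeled.

```python
# A toy well-formedness check over a candidate PC, encoding two of the
# three patterns reported by Edwards and Brentari (2020): the fixed order
# of functional units and the consistent assignment of units to
# articulators. Articulator labels follow the prose in this section:
# A1/A3 = Signer 1's dominant/non-dominant hands; A2/A4 = Signer 2's.
REQUIRED_ORDER = ["INITIATE", "PO", "PTC", "MC"]

ARTICULATOR_FOR = {
    "INITIATE": "A1",  # Signer 1 initiates the PC (Fig. 2a-b)
    "PO": "A2",        # Signer 2 produces the proprioceptive object (Fig. 2c)
    "PTC": "A3",       # Signer 1 maintains the contact space (Fig. 2d)
    "MC": "A1",        # Signer 1 produces movement-contact units (Fig. 2e)
}

def is_well_formed(pc: list[tuple[str, str]]) -> bool:
    """pc is a list of (unit_type, articulator) pairs, e.g. ("PO", "A2")."""
    unit_types = [unit for unit, _ in pc]
    if unit_types != REQUIRED_ORDER:
        return False  # violates the order constraint
    return all(ARTICULATOR_FOR[unit] == art for unit, art in pc)

# The PC in Fig. 2: initiate, PO-PLANE, PTC-GRIP, MC-SLIDE.
fig2_pc = [("INITIATE", "A1"), ("PO", "A2"), ("PTC", "A3"), ("MC", "A1")]
assert is_well_formed(fig2_pc)
assert not is_well_formed(fig2_pc[::-1])  # reversed order is ill-formed
```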
In the next section, I show how the protactile deictic system, which is constrained by these patterns, is employed by protactile people to direct one another to locations in the immediate environment. The response of the addressee is analyzed in order to discern how the directions given were effective. In Section 4, I compare patterns observed in these direction-giving sequences to patterns observed among non-protactile DeafBlind people engaged in a comparable task.
[[Figure 2. Proprioceptive Construction (“PC”). (a) initiate-prompt-tap: Signer 1 uses their A3 to tap the back of the A4 hand on Signer 2. (b) initiate-PO: Signer 1 uses their A1 to make a palm-up flat handshape. (c) PO-plane: Signer 2 copies the palm-up flat handshape with their A2. (d) PTC-grip: Signer 1 uses their A3 to grip the fingers of the A2 on Signer 2. (e) MC-slide: Signer 1 uses their A1 to trace a line from the palm of Signer 2’s A2 to the inside of the elbow.]]
3. PROTACTILE DIRECTION-GIVING
In 2016, in order to understand whether and how protactile deictics were effective for converging on objects of reference, I asked three DeafBlind people, who were active participants in the local protactile community in Washington, D.C., to play a game of hide and seek with one another (two at a time). One person in each dyad would take a toy block and hide it somewhere in the room, or somewhere in the linguistics department at Gallaudet University, where we were conducting the study. They would then return to their partner and give them directions to the block. The film crew videorecorded the directions given and then the route taken by the addressee as they (in all cases) looked for, and found, the block. In this section, I begin by carefully analyzing one of these interactions to exemplify observed patterns.
*All proper names used in examples are pseudonyms.*
The participants in the interaction analyzed below, Oliver and Dominic, are both DeafBlind. In 2016, when the videos were recorded, they were living in Washington, D.C. and were frequent participants in local protactile events. They had also recently taken a 5-week protactile workshop, where they were deepening their skills and knowledge of protactile practices with DeafBlind teachers from Seattle. In this example, I asked Oliver (left) to place a soft plastic toy block anywhere on the table behind him and then explain to Dominic (right) how to find it. Oliver starts by initiating a PC and prompting Dominic to select PO-PLANE to represent a schematic map-like surface. He holds that hand in place (PTC-GRIP). Then he says “you”, by pressing the pad of his finger into Dominic’s chest, and “me”, by doing the same to his own chest. Dominic can perceive this sign because his non-dominant hand (A4) is, by default and in this case, placed on top of Oliver’s dominant hand (A1), tracking its movements and receiving some handshape information (Clark and Nuccio, 2020). In Fig. 3a, he presses two fingers into a location on the plane (MC-PRESS + MC-PRESS). This establishes the zero-point, or “origo”, from which directions can proceed, as in, “We are here.”
Next, Oliver traces a square on Dominic’s palm (Fig. 3b), taps on the square (MC-TAP, Fig. 3c), and fingerspells “table”. Together this sequence means roughly, “This [is the] table”, where MC-TAP functions as a demonstrative within the PC. The PC as a whole functions like a diagram of the immediate environment. In other words, it is through the PC that these complex deictic expressions are articulated. In the context of the unfolding interaction, this expression establishes a range of possible locations within a tactually discoverable boundary (i.e. somewhere on the square-shaped table). Next, in order to narrow the range of possible locations further, Oliver traces the square again and taps on the upper right corner of the square. He then traces the corner he has just tapped on again (without the rest of the square). Then, he starts at the right edge of the corner, traces inward just a little, and taps several times in rapid succession. Finally, Oliver says, “that where”, then touches his own chest to mean “me”, and last, he says, “put-there” (Fig. 3d). Together, this sequence can be translated, “I put [the block] just to the left of the upper righthand corner of the table”. In response, Dominic walks toward the table and follows the right edge of the table with his hand until he reaches the corner. Then he moves his fingers just to the left of the corner and locates the block. The instructions were produced and received efficiently and Dominic quickly located the referent. This is due not only to the increasingly systematic patterns in articulation and perception in PCs, but also to the fact that the directions articulated to an environment with tactile structure. No vision or hearing is required to locate the corner of a table, and the location of a block can easily be identified against that structured backdrop. This sequence shows that having a grammatical system that targets intersubjectivity can facilitate convergence on a referent in a tactile environment, for people who habitually orient to their environment in tactile ways.
[[Figure 3. (a) “We are here”: Oliver uses his A1 to establish the zero-point with two fingers on the palm of Dominic’s A2. (b) “Table is here”: Oliver uses his finger to trace a square on the palm of Dominic’s A2. (c) “This [table]”: Oliver taps the square he traced with his A1. (d) “I put here”: Oliver uses his A1 on Dominic’s A2 to tell Dominic where on the table he put the block. Altogether, Oliver stated, “I put the block here near this corner of the table.”]]
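Oliver’s directions can also be laid out schematically as an ordered sequence of functional units. The listing below is my own step-by-step encoding of the interaction described above, in Python, using the PC unit labels from Section 2; the LEX and FS labels (for lexical signs and fingerspelling) are conveniences of mine, not attested notation.

```python
# A schematic transcription of Oliver's directions to Dominic (Fig. 3),
# as (unit, articulator, gloss) triples. The three-column format and the
# LEX/FS labels are my own illustration, not a published notation.
oliver_directions = [
    ("INITIATE", "A1", "prompt Dominic to select PO-PLANE"),
    ("PO-PLANE", "A2", "Dominic produces a flat, map-like surface"),
    ("PTC-GRIP", "A3", "Oliver holds the PO in place"),
    ("LEX",      "A1", "'you': press finger pad into Dominic's chest"),
    ("LEX",      "A1", "'me': press finger pad into own chest"),
    ("MC-PRESS", "A1", "origo: two fingers pressed into the plane, 'we are here'"),
    ("MC-TRACE", "A1", "trace a square on the palm: the table"),
    ("MC-TAP",   "A1", "tap the square: 'this [is the] table'"),
    ("FS",       "A1", "fingerspell T-A-B-L-E"),
    ("MC-TRACE", "A1", "re-trace the square and tap its upper right corner"),
    ("MC-TRACE", "A1", "re-trace the corner alone, then trace slightly inward"),
    ("MC-TAP",   "A1", "rapid taps: target location"),
    ("LEX",      "A1", "'that where' + 'me' + 'put-there' (Fig. 3d)"),
]

# Print the transcription as an aligned three-column table.
for unit, articulator, gloss in oliver_directions:
    print(f"{unit:<9} {articulator}  {gloss}")
```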
This type of engagement system enables the addressee to navigate an environment autonomously, without physical guidance from another person. Prior to the protactile movement, there was no way to tell a DeafBlind person where the bathroom was, or how to get to the kitchen. This meant that someone (usually a sighted person) had to act as a guide, which increased dependence and reduced autonomy (Granda and Nuccio, 2018). It also led to a kind of passive orientation to the physical environment, which restricted tactile exploration and ultimately, tactile modes of existence. DeafBlind protactile theorist Clark (2017) describes this sense of unnecessary restriction as it applies broadly in social life.
Despite the many barriers we encounter in society, we can gain much awareness about the world around us. But when we go exploring or when we just exist, sighted and hearing people rush in to intervene. Can they help us? Please don’t touch. They will be happy to describe it to us. They will guide us. No, they will get it for us. It’s much easier that way.
Clark is focusing here on normative, ideological commitments to “distance”, or what he calls “distantism”. Against this backdrop, protactile engagement systems that direct attention in the immediate environment crystalize and enshrine routine acts of resistance among DeafBlind people. Each time protactile directions are given, interpreted, and acted on, the contours of a tactile world that can be known without sighted intervention are subversively re-inscribed.
4. POINTING IN AIR SPACE: IT’S A STRETCH
In this section, I ask how a referential task similar to the one represented above is accomplished by DeafBlind people who have not acquired protactile language, and are therefore doing intersubjective work without the benefit of the specialized grammatical resources described in Section 2. The participants in this portion of the study were evaluated by DeafBlind protactile experts on our research team as being “tactile ASL signers,” meaning they communicate by receiving ASL through touch, and they have not acquired protactile language. They also self-reported that they were not “protactile”. In these interactions, reference was in many cases resolved, despite the absence of a functional engagement system, by bypassing language altogether.
For example, in the following interaction, two DeafBlind men, Tom (left) and Eli (right) are standing on a mat with edges that are detectable through the sole of a shoe. They are facing one another, with their hands in contact. There is a small, round, hip-height table, where three pens have been placed. A DeafBlind member of the research team placed the pens there, directed Tom to them, and asked him to explain to Eli what the pens feel like, and where the pens are so that Eli can find them. Once the researcher is gone, Tom begins to move his feet around and he says, hesitantly, to Eli, “Sit?”. Eli responds by pointing (in air space) to the mat below his feet. Tom then shuffles his right foot over several times, tracing the edge of the mat, while leaving his left foot planted. The table where the pens are hits Tom’s hip several times as he moves. Once Tom has completed the exploration, he says, in ASL “Oh I see”, suggesting that he is now oriented. These activities seem to establish a structured tactile space for Tom, within which the interaction can unfold.
However, Eli has stayed put in a single location and has not explored his environment along with Tom. Therefore, it is unclear if this environmental structure is shared between them. Eli turns his palms up as if to say, “Well? Ready?” and Tom begins his description of the pens. He isn’t able to get the description out entirely, and there are repeated false starts along the way. The overall impression one gets from watching his description is that he can’t think of how to describe the pens. Then, as shown in Fig. 4a, Tom tells Eli where the pens are by pointing in air space in the direction of the pens. He pauses and moves his feet back and forth a few times, appearing to grimace. He seems unsure of how to direct Eli’s attention to the pens. After this long, grimacing pause (Fig. 4b), he says, “touch” in ASL, and then guides Eli’s hand to the pens (Fig. 4c). In the end, they converge on the location of the pens; however, they do so by literally stretching the ASL demonstrative through air space, all the way to the referent. At this point, the pointing function of the sign has been replaced by a guiding function.
Regarding engagement systems, we return here to our initial question: So what? If the same kind of intersubjective coordination can be accomplished without the benefit of protactile grammatical structures, why should those of us interested in intersubjectivity care about grammar, as such, at all? Isn’t it just one of many available semiotic resources? What I want to propose is that a satisfactory approach to that question must foreground historical and socio-politically embedded instances of language-use, to ask instead: For whom, and in what circumstances, does intersubjective grammar make a difference? In the following section, I argue that in a particular historical moment in the Seattle DeafBlind community, an emerging deictic system played a crucial role in converting DeafBlind people to a new way of being in the world, and in this sense was significant.
[[Figure 4. Ineffective visual pointing leads to “guiding”. (a) visual pointing: Tom uses both hands to point to the pens on the table while Eli looks at Tom’s face. (b) long pause: Tom looks off to the side with a grimace and holds his hands together in an anticipatory pose while Eli looks at Tom’s face. (c) guiding: Tom guides Eli to the pens on the table with their A1 and A4 joined.]]
5. CONVERSION MOMENTS
The data I analyze in this section were collected at a protactile workshop held in 2010 and 2011 in Seattle, just as the movement was beginning to gain ground. The DeafBlind instructors were actively trying to convert members of their community to the new protactile way of being. In the unfolding of interaction, deixis was often at the center of this conversion process. For example, in Fig. 5, Walter, a DeafBlind man who had recently moved to Seattle and was new to protactile practices, is drinking a soda on a break from the workshop. He runs into Adrijana, one of the instructors, and strikes up a conversation. While chit-chatting, he mentions that Victor, one of the sighted videographers, is “over there in the middle of the room, filming”. He produces the ASL signs: VICTOR, MIDDLE, and then he points, as in Fig. 5a. In Fig. 5b, Adrijana looks in the general direction that Walter is pointing in, pauses, and squints. However, she fails to locate Victor. In Fig. 5c, Walter responds to her silence by pointing again, this time with his arm extending further toward the referent. Next, Adrijana says, “You see Victor? I don’t see anything.” (not pictured). In Fig. 5, Walter is pointing in air space, which means that Adrijana can only perceive the finger itself, and the small trajectory created by moving the finger toward the location of the referent. The demonstrative point in Fig. 5 is not easy for Adrijana to perceive, as evidenced by her need to feel around the pointing handshape with both hands (not pictured), as well as her response in the subsequent interactional sequence (discussed below).
[[Figure 5. Failure to resolve reference leads to modified visual pointing. (a) Walter points using airspace to an area behind Adrijana while Adrijana uses both her hands to touch Walter’s pointing handshape. (b) Walter looks down while Adrijana squints toward the direction where Walter pointed. (c) Walter points again with his arm extended further out while Adrijana still squints in the general direction of the point.]]
Beyond this, though, there is a problem regarding the channels in the environment to which the point is meant to articulate. Sighted people navigate environments via various kinds of channels—systems of roads, sidewalks, tunnels, sight-lines, and so on. To direct attention within an environment, a sense of these channels, where they go, and how you know when you’ve found one, must be shared by both parties in the interaction. Walter’s point was not only ambiguous as a sign against an imperceptible background. It was also articulated to an environment organized by visual channels. Sight-lines that go from one side of the room to the other proceed through an environment which, without visual access, lacks structure entirely (Fig. 6a). For a DeafBlind person, pointing out into unstructured air space works insofar as there are sighted people (such as interpreters) around who can patch it all together. For two DeafBlind people in their own environment, though, a different approach is needed.
For these reasons, a protactile person is not likely to set off into unstructured space. Instead, they would proceed around the edge of the room, following the orienting line where the wall meets the floor, or the “shoreline” (Fig. 6b). Following shoreline after shoreline, a certain feeling for tactile relations develops, which extends out beyond the individual, as part of their experience of the world. In order to direct attention in a way that articulates to tactile channels, the pointing sign itself has to be perceptible, but so do the channels in the environment to which the pointing sign articulates. In a successful act of demonstrative reference, the intersubjective, or “shared”, world and the means of representing that world should align. Air space is inadmissible for both, because for protactile people, it lacks affordances for communication, both in and about the world.
[[Figure 6. Visual vs. tactile navigation channels. (a) visual navigation: There is a room with two doors facing one another, one on the far right wall and the other on the far left. One person stands in front of the door on the left and another stands in front of the door on the right. They are able to use a “sight-line” to see each other. If the person on the left wanted to walk to the person on the right, they could follow the sight-line. (b) tactile navigation: The same room is shown with the same people; however, this time a shoreline is shown. If the person on the left wanted to walk to the person on the right, they would need to follow the walls until they reached the person on the right.]]
After Walter and Adrijana fail to individuate and locate Victor, Adrijana tells Walter, “Hang on.” She puts down something she was holding and takes hold of Walter’s hand so it is facing palm up. While holding his hand in place, she says “you” by pressing a finger into his chest (Fig. 7a), then “me” by pressing the same finger into her own chest (Fig. 7b). Then, in Fig. 7c, she presses her finger into a location on Walter’s upturned palm. Together, this means, “We are here,” and establishes an origo, or zero-point, in relation to which Victor can be localized. In this case, the signer, Adrijana, does not know where Victor is. She is just giving Walter an example of how pointing in contact space works. Note that in order to interpret Adrijana’s utterance effectively and produce one like it, Walter would end up not only producing and receiving signs in a more tactile way, but also uncovering tactile navigation pathways in his environment. The two go hand in hand. In moments like these, both parties to the interaction must be in the tactile world to which the deictic values articulate (i.e. conventional MCs such as press, trace, tap, and grip, produced within a rule-governed PC structure). This kind of prospective orientation of one’s way of being in the world to the categories and relations encoded in engagement systems is what I call elsewhere Being for Speaking (Edwards, in preparation).
[[Figure 7. Protactile Pointing. (a) you: Adrijana uses her A1 to press her finger into Walter’s chest. (b) me: Adrijana uses her A1 to press her finger into her own chest. (c) here: Adrijana holds Walter’s A2 palm-up with her A3 and uses her A1 to press on Walter’s palm. (d) Victor: Adrijana signs Victor’s name sign, a V handshape on the chin. (e) there: Adrijana uses her A1 to press on another spot on Walter’s A2 palm.]]
My point of departure is the idea that our thoughts are prospectively oriented toward acts of speaking, which is what Dan Slobin calls “thinking for speaking”. Slobin was building on Roman Jakobson, who drew attention to the fact that grammar has requirements for which aspects of experience must be expressed. Therefore, Slobin (1996:71) writes: “[w]hatever else language may do in human thought and action, it surely directs us to attend—while speaking—to the dimensions of experience that are enshrined in grammatical categories”. Being for speaking builds on that idea, drawing attention to acts of speaking that lead people not only to attend to one thing or another, but to be one thing or another.
*Slobin’s “Thinking for Speaking” is part of a much larger and heterogeneous body of work that falls under the heading of “linguistic relativity”. This body of work brings with it many debates that I do not want to wade into in this article, and therefore have not cited. Still, the reader may wonder about the relationship of this work to recent interventions that have looked explicitly at the relationship between linguistic resources and “social action”, such as Sidnell and Enfield (2012). While a more thorough discussion is outside the scope of this article, one crucial difference between my proposal and theirs (which has significant conceptual and methodological consequences) is that for Sidnell and Enfield (2012), interactional moves and instances of language-use can constitute, in and of themselves, a kind of “social action”, even if they tend toward reproduction of the current social order. In the argument presented here, interactional moves and instances of language-use are taken as signs of social action insofar as their effects have some demonstrable impact on the current social order, i.e. the privileging of vision over touch, and therefore visual over tactile modes of existence. They do not count, in and of themselves, as social action.*
In the Seattle DeafBlind community, there were two key historical moments which brought about new options for being DeafBlind (see Edwards, 2014). In the 1970s, a company that merged social services and manufacturing, called the Seattle Lighthouse for the Blind, established a DeafBlind employment program. This drew DeafBlind people from across the country. Meanwhile, an interpreter training program was established, which prepared students to work with DeafBlind people. These two institutions converged in ways that made ubiquitous mediation possible. In meetings, at social events, on vacation, at the grocery store, and elsewhere, each DeafBlind individual was paired with an interpreter. This situation yielded ways of being DeafBlind which were based on the kinds of interpreting accommodations each person required, and by the end of the 1990s, a range of possibilities had settled into one primary opposition: You could be “tunnel vision”, which meant that you communicated visually, but the message was modified as necessary to accommodate a shrinking field of vision. Or you could be “tactile”, which meant that you would receive ASL via touch. Note that this is not a transition to “contact space” or “protactile language”. It is a way to accommodate deteriorating vision by receiving messages meant to be seen, through touch (not unlike lip-reading to perceive a spoken language visually).
As the protactile movement took root, choices for how one could be DeafBlind shifted. Rather than choosing between being tactile or tunnel vision, both of which involved compensatory strategies for accessing visual phenomena, the new choice was between being protactile or not-protactile. The interaction represented in Fig. 7 unfolded at the beginning of that shift, when Adrijana and other DeafBlind leaders were doing all they could to convert members of their community to protactile ways of being DeafBlind. Therefore, this seemingly simple correction in Walter’s referential behavior is actually a request to embrace a new way of being in the world. This connection between deictic reference and protactile conversion was made more or less explicit during those workshops on many occasions. For example, in another encounter during the same workshop series, Adrijana says:
Adrijana: I’m going to explain protactile philosophy to you. I’m not going to preach. It’s going to be a discussion between the two of us. So let’s say that I come up to you, and I start explaining: ‘There’s a table over there, and there’s a door further over there.’ Do you understand me?
DB Participant: Yes.
Adrijana: No you don’t.
DB Participant: You said that there is a wall over there [points] and a door over there [points] right?
Adrijana: No, the door is over there [points].
DB Participant: Well, whatever.
Adrijana: Yeah, but that’s exactly it. It’s important. When people point like that to direct you, and you’re standing in the middle of the room, you’re totally lost. Right? [student nods]. You’re sitting here, and it might seem clear for a minute, but when you stand up and try to find the things I just located for you, the directions won’t seem to match the environment and you’ll be confused. Deaf [sighted] people do that—they point to places, but that’s not clear.
DB Participant: Well, yeah. That’s visual information.
Adrijana: Right. But it has to be adapted to be protactile. So instead of pointing, we have to teach them to do this...
To direct her student to the door, Adrijana produced a pointing sign foreign to ASL. Instead of extending a finger out into space along a visual trajectory, she took the student’s hand and turned it over so the palm was facing up. Just as in Fig. 7, she held it in place with her left hand from underneath. Then, with her right hand, she located herself and her interlocutor by pressing a finger into the upturned palm to mean “here”. Then, she touched her finger first to her interlocutor’s chest (meaning “you”) and then she touched her own chest to mean, “me”. This sequence can be glossed, “here, you, me,” and the translation would be, “We are here”. This is a representation of the origo. From there, Adrijana established the relative location of the door. First, she pressed the thumb of her left hand into the location she had associated with “here”, and kept it pressed down. Then, she traced a path from “here” to the door. Finally, she pressed once in the location associated with the door, to mean “the door is here in relation to us”.
In order to receive and interpret Adrijana’s directions, her student is once again faced with a choice: Will she stick with the old ways of being DeafBlind, or will she give herself over to the new ways? When people are becoming blind slowly, they adapt, little by little. However, every time an addressee is put in a position to resolve reference in this way, a kind of pressure is exerted. The slow, gradual process of becoming blind becomes a switch. Either the structure of the environment snaps to tactile coordinates or it snaps to visual coordinates, and each of those grids comes with a whole system of norms and values, internalized as a way of moving through, and directing attention to, the world. At some level, people choose how they want to be. But if someone gives you directions to the door in protactile language, you have to commit, in that moment, to being protactile in order to interpret the instructions, hence, “being for speaking”, where one’s way of being in the world is oriented to the categories and relations encoded in the language being spoken. It is important to note that the options available to DeafBlind people in any one speech situation or interaction are sociohistorical products. However, when DeafBlind leaders wanted to convert people from one way of being to another, intersubjective grammar, instantiated in particular kinds of institutional and pedagogical interactions, played a key role.
6. THE DIFFERENCE INTERSUBJECTIVE GRAMMAR MAKES
In this article, I have shown how the intersubjective alignments required to individuate objects of reference can be accomplished by DeafBlind people via both linguistic and non-linguistic means. Protactile DeafBlind people identified the location of an object on a nearby table using an emerging deictic system, while non-protactile DeafBlind people did the same by physically guiding their interlocutor to its location. However, I have shown that given the broader socio-historical context of the Seattle DeafBlind community, having intersubjective grammar made a difference—namely, it reinforced new ways of being DeafBlind and imposed a choice between old and new ways of being at a critical political juncture. Protactile leaders were aware of this, which is why, in moments when they wanted to convert a member of their community to the new way of being, they often employed and thematized deictic reference.
This is one way engagement systems can matter. However, the significance of engagement systems will of course vary across contexts. In order to account for this, typological work should (continue to) be conducted in the context of, or paired with, deep ethnographic inquiry that includes analyses of interaction and language-use, embedded in local patterns of activity over an extended time period. Such a project carries with it certain methodological entailments, since you can’t just ask people, “How does this pair of auxiliaries reinforce your way of being in the world?”, or “What sociohistorical forces were most important in shaping your emergent deictic system?” Instead, the analyst must work backward from observable effects of attachments like these in everyday life and interaction. While any ethnographic context would likely generate relevant insights, communities where new languages are emerging, or partially conventionalized communication systems are in use, promise to be particularly productive (e.g. Abner et al., 2019; Brentari and Goldin-Meadow, 2017; Coppola and Senghas, 2010; de Vos and Pfau, 2015; Lutzenberger et al., 2021; Richie et al., 2014; Meir et al., 2010).
For example, in The Nature of Signs (2014), Mara Green examines interactions in Nepal between hearing speakers of Nepali, Deaf signers of the highly conventionalized national sign language (Nepali Sign Language, or “NSL”), and users of “natural sign”, which is a partially-conventionalized communicative repertoire (p. 1). Over the course of 25 months of fieldwork, Green tracks instances of “partial understanding, mis-understanding and non-understanding” (p. 140), and observes that the commonality across those instances is the absence, in natural sign, of grammatical resources for expressing mood. The following includes two such cases (p. 141):
[M]ultiple times when Shrila told Bhola and me that her daughter-in-law had thrown Shrila’s notebook in the toilet and then showed up the following day with said notebook. I wonder if the daughter-in-law had actually threatened to throw the notebook away (e.g. by holding the notebook near the toilet), or had signed that she would throw it away (e.g. by pointing to the toilet and to the notebook and making a throwing sign), and if in fact Shrila was actually telling us that, and we misinterpreted her. Similarly, Bhola and I once understood a deaf signer (who I’m not naming due to the sensitive nature of the comment) to be claiming that a close relative was pregnant by a man who was not her husband. Later she criticized the woman’s inappropriate behaviors without mentioning the pregnancy. In retrospect, I wonder whether the signer was saying that the relative could get pregnant or that she (the signer) was worried about such an eventuality or that she (the signer) had told the relative that she (the relative) might get pregnant.
Green reports many similar cases where intersubjective alignments falter, and argues that “what unites these examples is the relationship of the action (above, hitting, getting arrested, and below, throwing away a notebook, getting pregnant, accepting food, vomiting) to ‘reality’ in the linguistic sense. Actions,” she writes, “do not merely occur. They may be threatened, portended, possible, impossible, likely, desired, feared, or about to happen.” (p. 141). One of several possibilities Green puts forth to explain the observed pattern is the absence of conventional grammatical resources for making distinctions like these via markers of mood or modality (p. 141). Importantly, failures like these to achieve mutual understanding lead to asymmetrical and unfavorable outcomes for the person associated with natural sign. This is because other people in the situation do not attribute misunderstandings to the sparseness of natural sign as a system. Instead, they assign negative social attributes directly to the natural signer, rendering them “unintelligible”, “unreliable”, or even casting them as “liars” (Green, 2014b:11).
This analysis highlights precisely what is at stake in having or not having such resources. In Nepal, during the historical moment when Green conducted fieldwork, intersubjective grammar mattered for natural signers because its absence was blamed on them, thereby aligning them with forms of personhood that were devalued, stigmatized and viewed (by some) as not worth trying to communicate with.
* Grammatical mood is not a prototypical example of an engagement system, as defined by Evans et al. (2018). However, along with evidentiality, miratives, and focus, mood and modality are identified as “neighboring” systems, and the boundaries are not always clear cut. The definition they use for “engagement” is “a grammatical system for encoding the relative accessibility of an entity or state of affairs to the speaker and addressee” (2018:118). In the examples given by Green (2014b), it is a matter of “access”, where the thing being accessed is a relation between (1) a description of an event and (2) the “reality” of the event, or the way in which the speaker experiences it (i.e. as a memory, inference, prediction, or actuality). Because that relation does not correspond for speaker and addressee, conflicting expectations arise regarding the consequences of the reported event. Something similar happens on the “entity” level (as opposed to the “event” level) when a person points to depict an act of pointing on the narrative plane. The referent must in that case be retrieved from an imagined space, which the space in front of the speaker in the speech situation stands in for (this happens frequently in ASL discourse). However, the addressee may mistakenly interpret the point as an instruction to search for a referent in the immediate environment (something I have observed in ASL discourse). Reference in a case like that is not resolved because the point is interpreted as a prompt to search in the actual environment while the intended referent was in the imagined environment.*
Many analyses of new, emerging, or partially conventionalized languages take into account social factors. However, those factors are often treated as static qualities attributable to the language community as a whole, or to sub-groups within it. For example, the typological features of the language are correlated with the ratio of hearing to deaf signers, the presence or absence of an institutional context, the size of the community, the presence or absence of intergenerational transmission, and so on (e.g. see Le Guen et al., 2020:9 for a discussion). In contrast, the argument presented here (like Green’s) treats sociality as a process, within which the effectiveness or significance of particular grammatical systems becomes ethnographically graspable as those systems emerge (or not) in historical time. I hope that, in analyzing the effectiveness and significance of a new deictic system in protactile language, I have demonstrated the utility of such an approach for understanding how intersubjective grammar matters, for whom, and in what circumstances.
ACKNOWLEDGEMENTS
Funding for ethnographic aspects of this research was provided by the Wenner-Gren Foundation (Grants #8110 and #9146). Funding for linguistic aspects of this work was provided by the National Science Foundation (BCS-1651100). Support for the writing phase was provided by the Saint Louis University Research Institute and the Andrew W. Mellon Fund. Thank you to Nick Evans, Alan Rumsey, and the participants of the Centre of Excellence for the Dynamics of Language at Australian National University, who commented on earlier versions of this work; to Jelica B. Nuccio and John Lee Clark for their ongoing and influential engagement; and to the DeafBlind people who participated in this research. Finally, thank you to two anonymous reviewers, who made helpful suggestions for this and future work.
References
Abner, N., Flaherty, M., Stangl, K., Coppola, M., Brentari, D., Goldin-Meadow, S., 2019. The Noun-Verb Distinction in Established and Emergent Sign Systems. Language 95, 230–267.
Barker, M., Nakassis, C.V., 2020. Images: An Introduction. Semiotic Rev. 9.
Battison, R., 1978. Lexical Borrowing in American Sign Language. Linstock Press, Silver Spring, MD.
Brentari, D., Goldin-Meadow, S., 2017. Language Emergence. Annual Rev. Linguist. 3, 363–388.
Bühler, K., 2001. Theory of Language: The Representational Function of Language. John Benjamins, Amsterdam, Philadelphia, PA.
Checchetto, A., Geraci, C., Cecchetto, C., Zucchi, S., 2018. The language instinct in extreme circumstances: The transition to tactile Italian Sign Language (LISt) by Deafblind signers. Glossa: A J. General Linguist. 3, 1–28. https://doi.org/10.5334/gjgl.357.
Clark, J.L., 2017. Distantism. URL: https://johnleeclark.tumblr.com/.
Clark, J.L., 2019. Tactile Art. Poetry Magazine (Online). URL: https://www.poetryfoundation.org/poetrymagazine/articles/150914/tactile-art.
Clark, J.L., Nuccio, J.B., 2020. Protactile Linguistics: Discussing recent research findings. J. Am. Sign Languages Literat. URL: https://journalofasl.com/protactile-linguistics/.
Collins, S., 2004. Adverbial Morphemes in Tactile American Sign Language. Ph.D. thesis. Graduate College of Union Institute and University.
Collins, S., Petronio, K., 1998. What Happens in Tactile ASL? In: Lucas, C. (Ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Gallaudet University Press, Washington, DC, pp. 18–37.
Cooperrider, K., Slotta, J., Núñez, R., 2016. Uphill and Downhill in a Flat World: The Conceptual Topography of the Yupno House. Cognitive Sci. 41, 768–799.
Coppola, M., Senghas, A., 2010. Deixis in an emerging sign language. In: Brentari, D. (Ed.), Sign Languages: A Cambridge Language Survey. Cambridge University Press, Cambridge, pp. 543–569.
Desclès, J.P., 2009. Prise en charge, engagement et désengagement. Langue française 162, 29–53. https://doi.org/10.3917/lf.162.0029.
de Vos, C., Pfau, R., 2015. Sign Language Typology: The Contribution of Rural Sign Languages. Annual Rev. Linguist. 1, 265–288. https://doi.org/10.1146/annurev-linguist-030514-124958.
Diessel, H., Coventry, K.R., 2020. Demonstratives in Spatial Language and Social Interaction: An Interdisciplinary Review. Front. Psychol. 11, 555265.
DuBois, J., 2007. The Stance Triangle. In: Englebretson, R. (Ed.), Stancetaking in Discourse. Benjamins, Amsterdam, pp. 139–182.
Dudis, P., Hochgesang, J.A., Shaw, E., Villanueva, M., 2020. Introduction to Motivated Look at Indicating Verbs in ASL (MoLo) Project. URL: https://osf.io/h8gk4/.
Edwards, T., 2014. Language Emergence in the Seattle DeafBlind Community. Ph.D. thesis. University of California, Berkeley.
Edwards, T., 2015. Bridging the Gap Between DeafBlind Minds: Interactional and social foundations of intention attribution in the Seattle DeafBlind community. Front. Psychol. (Language Sci. Section) 6.
Edwards, T., 2017. Sign Creation in the Seattle DeafBlind Community: A Triumphant Story about the Regeneration of Obviousness. Gesture 16 (2), 304–327.
Edwards, T., Brentari, D., 2020. Feeling Phonology: The emergence of tactile phonological patterns in protactile communities in the United States. Language 96 (4), 819–840.
Edwards, T., Brentari, D., 2021. The Grammatical Incorporation of Demonstratives in an Emerging Tactile Language. Front. Psychol. 11, 579992.
Enfield, N., Sidnell, J., 2014. Language Presupposes an Enchronic Infrastructure for Social Interaction. In: Dor, D., Knight, C., Lewis, J. (Eds.), Social Origins of Language: Studies in the Evolution of Language. Oxford University Press, Oxford, pp. 92–104.
Evans, N., 2003. Context, Culture, and Structuration in the Languages of Australia. Annual Rev. Anthropol. 32, 13–40. https://doi.org/10.1146/annurev.anthro.32.061002.093137.
Evans, N., Bergqvist, H., Roque, L.S., 2018. The grammar of engagement I: framework and initial exemplification. Language Cognition 10, 110–140.
Fenlon, J., Cooperrider, K., Keane, J., Brentari, D., Goldin-Meadow, S., 2019. Comparing sign language and gesture: insights from pointing. Glossa: A J. General Linguist. 4, 1–26. https://doi.org/10.5334/gjgl.499/.
Forker, D., 2020. Elevation as a grammatical and semantic category of demonstratives. Front. Psychol. https://doi.org/10.3389/fpsyg.2020.01712.
Friedner, M., Helmreich, S., 2012. Sound Studies Meets Deaf Studies. Senses Soc. 7, 72–86.
Gershon, I., 2017. Language and the Newness of Media. Annual Rev. Anthropol. 46, 15–31. https://doi.org/10.1146/annurev-anthro-102116-041300.
Goldin-Meadow, S., Brentari, D., 2017. Gesture, sign and language: The coming of age of sign language and gesture studies. Behav. Brain Sci. 40. https://doi.org/10.1017/S0140525X15001247.
Goodwin, C., 2007. Environmentally Coupled Gestures. In: Duncan, S., Cassell, J., Levy, E. (Eds.), Gesture and the Dynamic Dimensions of Language. John Benjamins, Amsterdam, Philadelphia.
Granda, A., Nuccio, J., 2018. Protactile Principles. Tactile Communications. URL: https://www.tactilecommunications.org/ProTactilePrinciples.
Green, E.M., 2014a. Building the Tower of Babel: International Sign, linguistic commensuration, and moral orientation. Language Soc. 43, 445–465.
Green, E.M., 2014b. The Nature of Signs: Nepal’s Deaf Society, Local Sign, and the Production of Communicative Sociality. Ph.D. thesis. University of California, Berkeley.
Hanks, W.F., 1990. Referential Practice: language and lived space among the Maya. University of Chicago Press, Chicago.
Harkness, N., 2014. Songs of Seoul: An Ethnography of Voice and Voicing in Christian South Korea. University of California Press, Berkeley.
Haviland, J.B., 2014. The emerging grammar of nouns in a first generation sign language: specification, iconicity, and syntax. Gesture 13, 309–353.
Heritage, J., 2012. Epistemics in action: action formation and territories of knowledge. Res. Language Social Interact. 45, 1–29.
Hochgesang, J., Crasborn, O., Lillo-Martin, D., 2018. ASL Signbank. URL: https://aslsignbank.haskins.yale.edu/.
Hull, M., 2012. Government of Paper: The Materiality of Bureaucracy in Urban Pakistan. University of California Press, Berkeley.
Hyland, K., 2005. Stance and engagement: a model of interaction in academic discourse. Discourse Stud. 7, 173–192. https://doi.org/10.1177/1461445605050365.
Inoue, M., 2004. What Does Language Remember?: Indexical Inversion and the Naturalized History of Japanese Women. J. Linguistic Anthropol. 14, 39–56.
Iwasaki, S., Bartlett, M., Manns, H., Willoughby, L., 2018. The challenges of multimodality and multisensorality: Methodological issues in analyzing tactile signed interaction. J. Pragmatics 143, 215–227. https://doi.org/10.1016/j.pragma.2018.05.003.
Keating, E., Mirus, G., 2003. American Sign Language in virtual space: interactions between deaf users of computer-mediated video communication and the impact of technology on language practices. Language Soc. 32, 693–714.
Kockelman, P., 2010. Enemies, Parasites, and Noise: How to take up residence in a system without becoming a term in it. J. Linguistic Anthropol. 20, 406–421.
Kusters, A., 2017. Gesture-based customer interactions: deaf and hearing Mumbaikars’ multimodal and metrolingual practices. Int. J. Multilingual. 14, 283–302. https://doi.org/10.1080/14790718.2017.1315811.
Kusters, A., 2020. The tipping point: On the use of signs from American Sign Language in International Sign. Language Commun. 75, 51–68.
Landaburu, J., 2007. La modalisation du savoir en langue andoke (Amazonie colombienne). In: L’énonciation médiatisée II. Le traitement épistémologique de l’information. Peeters, Louvain, pp. 23–47.
Larkin, B., 2013. The Politics and Poetics of Infrastructure. Annual Rev. Anthropol. 42, 327–343.
Le Guen, O., Safar, J., Coppola, M. (Eds.), 2020. Emerging Sign Languages of the Americas. Sign Language Typology series, vol. 9. De Gruyter Mouton, Boston.
Lemon, A., 2018. Technologies for Intuition. University of California Press, Oakland.
Lutzenberger, H., de Vos, C., Crasborn, O., Fikkert, P., 2021. Formal variation in the Kata Kolok lexicon. Glossa: A J. General Linguist. 6. https://doi.org/10.16995/glossa.5880.
McMillen, S.K., 2015. Is Protactile Habitable at Gallaudet University: What does it take? Ph.D. thesis. Gallaudet University, Washington, D.C.
Meir, I., Sandler, W., Padden, C., Aronoff, M., 2010. Emerging sign languages. In: Oxford Handbook of Deaf Studies, Language, and Education, vol. 2, pp. 267–280.
Mesch, J., 2001. Tactile Sign Language: Turn Taking and Questions in Signed Conversations of Deaf-blind People. Signum, Hamburg.
Mesch, J., 2013. Tactile signing with one-handed perception. Sign Language Stud. 13, 238–263. https://doi.org/10.1353/sls.2013.0005.
Mesch, J., Raanes, E., Ferrara, L., 2015. Co-forming real space blends in tactile signed language dialogues. Cognitive Linguist. 26. https://doi.org/10.1515/cog-2014-0066.
Mesh, K., Hou, L., 2020. Negation in San Juan Quiahije Chatino Sign Language. Gesture 17, 330–374. https://doi.org/10.1075/gest.18017.mes.
Murphy, K., 2005. Collaborative imagining: the interactive uses of gestures, talk, and graphic representation in architectural practice. Semiotica 156, 113–145.
Núñez, R., Sweetser, E., 2006. With the future behind them: convergent evidence from language and gesture in the cross-linguistic comparison of spatial construals of time. Cognitive Sci. 30, 401–450.
Petronio, K., Dively, V., 2006. YES, #NO, Visibility, and Variation in ASL and Tactile ASL. Sign Language Stud. 7.
Quinto-Pozos, D., 2002. Deictic Points in the Visual-Gestural and Tactile-Gestural Modalities. In: Meier, R.P., Cormier, K., Quinto-Pozos, D. (Eds.), Modality and Structure in Signed and Spoken Languages. Cambridge University Press, Cambridge, pp. 442–467.
Reed, C.M., Delhorne, L.A., Durlach, N.I., Fischer, S.D., 1995. A study of the tactual reception of Sign Language. J. Speech Hear. Res. 38.
Richie, R., Yang, C., Coppola, M., 2014. Modeling the Emergence of Lexicons in Homesign Systems. Topics Cognitive Sci. 6, 183–195. https://doi.org/10.1111/tops.12076.
Rumsey, A., 2014. Language and Human Sociality. In: Enfield, N., Kockelman, P., Sidnell, J. (Eds.), The Cambridge Handbook of Linguistic Anthropology. Cambridge University Press, Cambridge.
Russell, K., 2020. Facing Another: The Attenuation of Contact as Space in Dhofar, Oman. Signs Soc. 8, 290–318.
Shankar, S., Cavanaugh, J.R., 2012. Language and Materiality in Global Capitalism. Annual Rev. Anthropol. 41, 355–369. https://doi.org/10.1146/annurev-anthro-092611-145811.
Shaw, E., 2019. Gesture in Multiparty Interaction. Gallaudet University Press, Washington, DC.
Sicoli, M., 2016. Repair organization in Chinantec whistled speech. Language 92, 411–432.
Sidnell, J., Enfield, N.J., 2012. Language Diversity and Social Action: A Third Locus of Linguistic Relativity. Current Anthropol. 53, 302–333.
Slobin, D.I., 1996. From ‘Thought and Language’ to ‘Thinking for Speaking’. In: Gumperz, J.J., Levinson, S.C. (Eds.), Rethinking Linguistic Relativity. Cambridge University Press, Cambridge, pp. 70–96.
Streeck, J., 2015. Embodiment in Human Communication. Annual Rev. Anthropol. 44, 419–438. https://doi.org/10.1146/annurev-anthro-102214-014045.
Willoughby, L., Iwasaki, S., Bartlett, M., Manns, H., 2018. Tactile sign languages. In: Östman, J.O., Verschueren, J. (Eds.), Handbook of Pragmatics, vol. 21. Benjamins, pp. 239–258.
It is well-known that linguistic signs can be routed and re-routed through diverse infrastructural, material, technological, and sensory channels (Barker and Nakassis, 2020; Friedner and Helmreich, 2012; Gershon, 2017; Harkness, 2014; Hull, 2012; Inoue, 2004; Keating and Mirus, 2003; Kockelman, 2010; Larkin, 2013; Lemon, 2018; Russell, 2020; Shankar and Cavanaugh, 2012). Going in reverse, it has also been shown that the (re-)channeling of linguistic signs affects the internal organization of the grammar, so that over time, languages come to anticipate the dimensions of the environment that have intersubjective affordances for physically, historically, culturally, and geographically situated speakers (Bühler, 2001; Cooperrider et al., 2016; Diessel and Coventry, 2020; Edwards, 2014; Evans, 2003; Forker, 2020; Hanks, 1990; Sicoli, 2016). Studying intersubjective engagement at a time when a new deictic system was beginning to emerge generates opportunities to understand how the re-channeling of linguistic signs can affect the internal organization of grammatical systems.
This article also contributes to a growing body of work on interaction and communication among DeafBlind people in communities in and outside of the United States (Collins and Petronio, 1998; Mesch, 2001; Mesch, 2013; Quinto-Pozos, 2002; Collins, 2004; Petronio and Dively, 2006; Mesch et al., 2015; Checchetto et al., 2018; Iwasaki et al., 2018). This work has shown that pragmatic aims such as asking a question, identifying a referent, or signaling a bid for a turn can be accomplished by DeafBlind people via linguistic and non-linguistic means (see Willoughby et al., 2018 for an overview). Because visual signed languages, such as ASL, are not fully perceptible via touch (Reed et al., 1995), the majority of pragmatic mechanisms that have been documented are non-linguistic or are modifications of a visual language. For example, Quinto-Pozos (2002) reports an avoidance of, and a restricted range of functions for, pointing signs in tactile ASL. Iwasaki et al. (2018) describe how DeafBlind signers of Auslan manage turns at talk without the benefit of non-manual features such as eye gaze, eyebrow movements, and facial expressions, which sighted Auslan signers depend on in performing corresponding communicative functions. Petronio and Dively (2006) also report a higher frequency of the words “yes” and “no” in discourse, which they attribute to a lack of access to the non-manual expressions that usually do that pragmatic work, such as head nods and eyebrow movements. In protactile DeafBlind communities in the U.S., a new tactile language has begun to emerge under pressures exerted by the protactile movement (Edwards, 2014). Protactile language is perceptible through touch (Edwards and Brentari, 2020), and it also retrieves values from an environment organized along tactile lines (Edwards, 2017; Edwards and Brentari, 2021). Rather than finding non-linguistic ways to compensate for lack of access to language, protactile people have a new language at their disposal.
One of the first things to emerge in this language was a new engagement system. In what follows, I analyze interactions where this system is applied and ask how it differs from pragmatic or physical strategies that could be used instead.
*In this article, I begin by analyzing data collected in the early stages of the protactile movement. In 2010, I attended and videorecorded a series of protactile workshops, led by two DeafBlind instructors for 11 DeafBlind participants over a total of 10 weeks. Later, in 2016, in order to elicit deictics and test their efficacy, I asked three protactile DeafBlind people, in groups of two, to give one another directions to various nearby locations, and I videorecorded both the act of direction-giving and the addressee’s attempts to find the target. In 2019, I repeated a sub-set of these tasks with 4 dyads, composed of 8 DeafBlind people who did not know protactile language, and analyzed them as I analyzed the protactile data. For the purposes of this paper, I reviewed these data and isolated interactions where epistemic alignments were attempted by way of deictic reference. In analyzing those interactions, I drew not only on what was present in the videorecordings and what is known from prior linguistic analysis (Edwards, 2014; Edwards, 2017; Edwards and Brentari, 2021), but also on prior historical research, semi-structured ethnographic interviews in protactile communities (70 interviews in total), and more than 30 months of participant observation. This combination of methods allowed me to understand not only how grammatical systems were applied moment-to-moment to facilitate intersubjective engagement in interaction (or not), but also how those activities accrued social and political significance and effectiveness for participants in a particular historical moment.*
In Section 2, I begin with a brief summary of the relevant linguistic structures and how they differ from corresponding structures in visual languages, as previously reported in Edwards and Brentari (2020). I then move on, in Section 3, to analyze interactional sequences where those linguistic structures are instantiated by protactile people to facilitate convergence on referents in the immediate environment. In Section 4, I show how similar referential tasks are accomplished by non-protactile DeafBlind people without the benefit of special grammatical resources.
The question motivating these analyses is: If intersubjective work can be accomplished without grammatical resources, why should scholars interested in intersubjectivity care about grammar at all? Isn’t it just one of many available semiotic resources? I argue that the only way to answer that question in a satisfactory manner is to foreground local, historically and socio-politically embedded processes. Given this framing, the question becomes: For whom, and in what circumstances, do engagement systems make a difference, or not? In Section 5, I argue that engagement systems play a crucial role in a particular historical moment in the Seattle DeafBlind community. I conclude in Section 6 by proposing an interdisciplinary approach to typologies of engagement systems that can account not only for how they are structured, but also why they matter for the people who use them.
2. ENGAGEMENT SYSTEMS IN PROTACTILE COMMUNITIES
New grammatical systems that target intersubjective engagement are emerging in communities of protactile, DeafBlind signers in the U.S., including a conventional pointing or “deictic” system, used to identify the locations of referents (“locatives”) and to individuate referents against a horizon of alternate possibilities (“demonstratives”) (Edwards and Brentari, 2021). While research on the emergent deictic system is ongoing, attested locative values include “path” vs. “discrete”, and demonstrative values include “foregrounded” vs. “backgrounded” (Edwards, 2015). In order to understand how these meanings are expressed and distinguished from one another, it is helpful to review some key findings regarding phonological patterns in protactile language, reported in Edwards and Brentari (2020).
Unlike ASL, where signs are produced with the two articulators of the signer, protactile language has four potential articulators: the hands and arms of Signer 1 and the hands and arms of Signer 2 (co-animator). The incorporation of the listener’s body into the articulatory process has many consequences for the internal structure of the language, which begin with a crucial observation by Granda and Nuccio (2018): in ASL, signs are produced on, and in front of, the body of (one) signer, or in “air space”. In air space, the relative locations of signs are perceived against the backdrop of the signer’s body. Receiving ASL through touch, one has access to the hand of the signer, but not to the visual backdrop that is necessary for making relevant distinctions. For example, the ASL signs SECRET and SELF are differentiated based on their relative proximity to aspects of the signer’s body. SECRET is produced at the chin of the signer (Fig. 1a) and SELF is produced at the chest (Fig. 1b).
[[Figure 1. Pictures showing the differentiation of the ASL sign SECRET and the sign SELF (Hochgesang et al., 2018)]]
In contact space, signs are easily and consistently perceived by the addressee against the backdrop of their own body. For example, in Fig. 3d, Signer 1 (left) combines a protactile locative (“press”) with a conventional ASL verb meaning “to put” in order to explain where he had put a block on the table. Together, they mean “put-here”. Just prior to this in the unfolding interaction, he had established the location of the table by tracing a square on the upturned palm of Signer 2 and fingerspelling “table”. Referring back to that space, “put-here” would be understood to mean, “put here near the corner of the table”. As discussed below, the relative spatial relations in the description are clear because they are perceived by Signer 2 against the proprioceptive backdrop of their own body. Recruiting the addressee’s body as part of the articulatory apparatus unlocks the proprioceptive channel, thereby generating more material for the linguistic system to operate on. It also, however, generates a problem for the language, since the articulators of Signer 1 and Signer 2 must somehow be coordinated in an efficient and effective manner.
Edwards and Brentari (2020) argue that early in the emergence of protactile phonology, the language resolved this problem by establishing conventional ways of inviting Signer 2 to contribute to the co-articulation of signs. They show that the conventionalization of these mechanisms involved assigning specific linguistic tasks to four articulators (“A1”–“A4”, as in Fig. 2), in the same way that the two hands in visual signed languages (“H1” and “H2”) are assigned consistent and distinct tasks (Battison, 1978). Signs can also be produced using a single articulator on the body of the addressee; however, analyzing those signs would not reveal how the four articulators are coordinated. Therefore, we looked specifically at four-handed constructions used to express complex relational meanings, which we call “proprioceptive constructions,” or “PCs”.
PCs always include four types of functional units, which are produced in a particular temporal sequence (Edwards and Brentari, 2020). The first type of unit initiates the PC, telling Signer 2 that they will need to take an active articulatory role in what comes next. In Fig. 2a-b, Signer 1 (left) initiates the PC by tapping on the back of Signer 2’s non-dominant hand (A4) and presenting a palm-up flat handshape with her dominant hand (A1), prompting him to select a specific form. The “initiate” produced by Signer 1 (author) alerts Signer 2 (animator) that their active participation in the articulatory process is required and gives specific instructions for the next step in the articulation of the PC.
*See Edwards and Brentari (2020) for attested categories of Initiate and all other superordinate PC categories.*
Once the PC has been initiated, “contact space” has been activated. The next step is to generate a meaningful and phonologically constrained space, where further information can be conveyed. That space, which is actively produced by Signer 2, is called the “proprioceptive object,” or “PO”. In Fig. 2c, Signer 2 produces PO-PLANE with his dominant hand (A2), which he was instructed to do in Fig. 2b.
The PO is, from one perspective, an important part of the phonological organization of the language (i.e. how articulation and perception are constrained in systematic ways). From another perspective, it is important for understanding the emergent deictic system because it constitutes a meaningfully structured space, to, and through which, deictics can refer. For example, once a PO-PLANE like the one in Fig. 2c has been produced, Signer 1 can locate relations between referents in the immediate environment in a diagrammatic fashion on that plane, as well as pathways to the referent from “here”.
The third task in producing a PC is to maintain the active contact space generated by the PO, using a category we call Prompt-to-Continue (PTC). PTCs tell Signer 2, “Leave this hand here. There is more to come”—or in the case of PTC-PUSH, “relax this hand, we are done with it.” For example, in Fig. 2d, after Signer 2 has produced the requested PO (using A2), Signer 1 grips the PO (using A3) and holds onto it for the remainder of the PC. This gripping action is an example of a PTC unit. PTCs are significant for understanding protactile deictics because they maintain the meaningfully structured space through which attention is modulated.
The fourth and final task in producing a PC is to draw attention to and characterize referents that correspond to aspects of the PO by producing combinations of movement and contact that convey information about the size, shape, location, or movement of an entity. These units are called “Movement Contact Types,” or “MCs”. For example, in Fig. 2e, Signer 1 (right) uses A1 (her right hand) to trace a line from the palm of Signer 2’s right hand (A2) to the inside of the elbow. Fig. 2e shows the end of an MC-SLIDE describing a long, rectangular object.
*Within the context of the PC, the terms prompt and select should be understood as implying categorical perception. I do not mean, for example, that Signer 1 prompts Signer 2 to select any unit they choose in the co-construction of meaning. The distinction between “conveyor” of information and “receiver” of information, or “speaker” and “addressee,” is as clear in PT as it is in any language. What these terms are meant to capture is the fact that PT signers do not assume that “handshapes” produced by Signer 1 will be perceptible to Signer 2. Rather, handshapes are treated as indeterminate clues for what Signer 2 should “select” from an inventory of conventional “PO” categories. For people who have a command of protactile grammar, such inventories are readily accessible, while those who do not have a command of the grammar would be reduced to a more particularized process of mimicry, based on incomplete and partially perceptible input. One protactile expert compared the experience of trying to co-produce PCs with a non-PT DeafBlind signer to trying to sing in a large auditorium with a broken microphone. You do your part in initiating the construction, but the signal dies in transit due to faulty equipment, i.e. no PO is selected. Others have explained that the rhythm of articulation is slowed so much when this occurs that they can’t think, and that whatever they were about to say is not worth the effort when people are mimicking rather than selecting. Observations like these from PT language-users can be taken as ethnographic evidence that Signer 2 is expected to have independent linguistic resources for their side of the articulatory process. In particular, they should have an inventory of POs cognitively accessible, which can be selected according to the prompts they receive from Signer 1. Edwards and Brentari (2020) have identified three clear patterns in how PCs are articulated. First, there is a constraint on order: the functional units described above must unfold in sequence: Initiate, PO, PTC, MC. Second, there is a redundancy rule: information introduced in the MC must incorporate and contextualize information introduced by the PO; the MC cannot be used to introduce new information. Finally, each unit is assigned consistently to one of the four articulators (A1–A4). In other words, faced with the complex task of coordinating four articulators, PT signers now know what to do with their hands, when. The protactile deictics I focus on in this article are produced within a PC structure, and are therefore subject to these constraints. We are also beginning to identify some important patterns in how constraints on articulation may affect productivity. For example, preliminary analysis suggests that initiates involve different levels of articulatory complexity, which correspond inversely to the number of POs they can elicit. The most articulatorily simple option, I-TOUCH, can only initiate a new PC within the context of an already-established PC. On the opposite end of the spectrum, the most articulatorily complex option, I-PROMPT, can initiate all attested POs, and seems to be crucial for the process of creating new POs. Between these two extremes, there are several additional values. For example, INITIATE-HOLD, which is only slightly more articulatorily complex than I-TOUCH, can (unlike I-TOUCH) elicit POs that are not already established in a prior PC, but those POs are limited to two types (PO-PLANE and PO-CYLINDER). Further analysis is needed to confirm these preliminary observations.*
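Because the PC template amounts to a small, rule-governed protocol, its constraints can be stated in pseudo-formal terms. The sketch below is my own illustration, not a formal model from Edwards and Brentari (2020): the unit inventories, the articulator values in the example, and the licensing table are simplified assumptions based only on the forms named in this section, and the redundancy rule (a semantic constraint) is not modeled.

```python
# A sketch of the PC well-formedness patterns described above (my own
# illustration, not a formal model from Edwards and Brentari, 2020).
# Unit forms are taken from the text; articulator values are illustrative.

from dataclasses import dataclass

# Constraint on order: functional units unfold in this fixed sequence.
ORDER = ["INITIATE", "PO", "PTC", "MC"]

# Preliminary licensing pattern: articulatorily simpler initiates
# elicit fewer POs ("ALL" stands in for "all attested POs").
LICENSES = {
    "I-TOUCH": set(),                              # only within an established PC
    "INITIATE-HOLD": {"PO-PLANE", "PO-CYLINDER"},  # limited to two PO types
    "I-PROMPT": "ALL",                             # can elicit any attested PO
}

@dataclass
class Unit:
    utype: str        # one of ORDER
    form: str         # e.g. "PTC-GRIP"
    articulator: str  # "A1".."A4"

def in_order(pc):
    """Constraint on order: Initiate, PO, PTC, MC, in that sequence."""
    return [u.utype for u in pc] == ORDER

def licensed(pc, inside_established_pc=False):
    """Can this PC's initiate elicit its PO? (Preliminary pattern only;
    the redundancy rule relating MC and PO is semantic and not modeled.)"""
    allowed = LICENSES.get(pc[0].form, set())
    if allowed == "ALL":
        return True
    if pc[0].form == "I-TOUCH":
        return inside_established_pc  # simplification of the footnote's claim
    return pc[1].form in allowed

def consistent_articulators(pcs):
    """Each unit type should be produced by one consistent articulator
    across PCs (the third pattern reported above)."""
    seen = {}
    for pc in pcs:
        for u in pc:
            if seen.setdefault(u.utype, u.articulator) != u.articulator:
                return False
    return True

# A PC along the lines of Fig. 2:
pc = [Unit("INITIATE", "INITIATE-HOLD", "A1"),
      Unit("PO", "PO-PLANE", "A2"),
      Unit("PTC", "PTC-GRIP", "A3"),
      Unit("MC", "MC-SLIDE", "A1")]

print(in_order(pc), licensed(pc), consistent_articulators([pc]))  # True True True
```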
In the next section, I show how the protactile deictic system, which is constrained by these patterns, is employed by protactile people to direct one another to locations in the immediate environment. The response of the addressee is analyzed in order to discern how the directions given were effective. In Section 4, I compare patterns observed in these direction-giving sequences to patterns observed among non-protactile DeafBlind people engaged in a comparable task.
[[Figure 2. Proprioceptive Construction (“PC”). (a) initiate-prompt-tap: Signer 1 uses their A3 to tap the back of the A4 hand on Signer 2. (b) initiate-PO: Signer 1 uses their A1 to make a palm-up flat handshape. (c) PO-plane: Signer 2 copies the palm-up flat handshape with their A2. (d) PTC-grip: Signer 1 uses their A3 to grip the fingers of the A2 on Signer 2. (e) MC-slide: Signer 1 uses their A1 to trace a line from the palm of Signer 2’s A2, to the inside of the elbow.]]
3. PROTACTILE DIRECTION-GIVING
In 2016, in order to understand whether and how protactile deictics were effective for converging on objects of reference, I asked three DeafBlind people, who were active participants in the local protactile community in Washington, D.C., to play a game of hide-and-seek with one another (two at a time). One person in each dyad would take a toy block and hide it somewhere in the room, or somewhere in the linguistics department at Gallaudet University, where we were conducting the study. They would then return to their partner and give them directions to the block. The film crew videorecorded the directions given and then the route taken by the addressee as they (in all cases) looked for, and found, the block. In this section, I begin by carefully analyzing one of these interactions to exemplify observed patterns.
*All proper names used in examples are pseudonyms.*
The participants in the interaction analyzed below, Oliver and Dominic, are both DeafBlind. In 2016, when the videos were recorded, they were living in Washington, D.C. and were frequent participants in local protactile events. They had also recently taken a 5-week protactile workshop, where they were deepening their skills and knowledge of protactile practices with DeafBlind teachers from Seattle. In this example, I asked Oliver (left) to place a soft plastic toy block anywhere on the table behind him and then explain to Dominic (right) how to find it. Oliver starts by initiating a PC and prompting Dominic to select PO-PLANE to represent a schematic map-like surface. He holds that hand in place (PTC-GRIP). Then he says “you” by pressing the pad of his finger into Dominic’s chest, and “me” by doing the same to his own chest. Dominic can perceive this sign because his non-dominant hand (A4) is, by default and in this case, placed on top of Oliver’s dominant hand (A1), tracking its movements and receiving some handshape information (Clark and Nuccio, 2020). In Fig. 3a, Oliver presses two fingers into a location on the plane (MC-PRESS + MC-PRESS). This establishes the zero-point, or “origo”, from which directions can proceed, as in, “We are here.”
Next, Oliver traces a square on Dominic’s palm (Fig. 3b), taps on the square (MC-TAP, Fig. 3c), and fingerspells “table”. Together, this sequence means roughly, “This [is the] table”, where MC-TAP functions as a demonstrative within the PC. The PC as a whole functions like a diagram of the immediate environment. In other words, it is through the PC that these complex deictic expressions are articulated. In the context of the unfolding interaction, this expression establishes a range of possible locations within a tactually discoverable boundary (i.e. somewhere on the square-shaped table). Next, in order to narrow the range of possible locations further, Oliver traces the square again and taps on the upper right corner of the square. He then traces the corner he has just tapped on again (without the rest of the square). Then, he starts at the right edge of the corner, traces inward just a little, and taps several times in rapid succession. Finally, Oliver says “that where”, touches his own chest to mean “me”, and, last, says “put-there” (Fig. 3d). Together, this sequence can be translated, “I put [the block] just to the left of the upper righthand corner of the table”. In response, Dominic walks toward the table and follows the right edge of the table with his hand until he reaches the corner. Then he moves his fingers just to the left of the corner and locates the block. The instructions were produced and received efficiently, and Dominic quickly located the referent. This is due not only to the increasingly systematic patterns in articulation and perception in PCs, but also to the fact that the directions articulated to an environment with tactile structure. No vision or hearing is required to locate the corner of a table, and the location of a block can easily be identified against that structured backdrop. This sequence shows that having a grammatical system that targets intersubjectivity can facilitate convergence on a referent in a tactile environment, for people who habitually orient to their environment in tactile ways.
[[Figure 3. (a) “We are here”: Oliver uses his A1 to establish the zero-point with two fingers on the palm of Dominic’s A2. (b) “Table is here”: Oliver uses his finger to trace a square on the palm of Dominic’s A2. (c) “This [table]”: Oliver taps the square he traced with his A1. (d) “I put here”: Oliver uses his A1 on Dominic’s A2 to tell Dominic where on the table he put the block. All together, Oliver stated, “I put the block here near this corner of the table.”]]
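The structure of this sequence can also be summarized at a glance as a schematic, unit-by-unit transcription. The labels and glosses below are my own simplified rendering of the units just described, not an established protactile transcription convention.

```python
# Oliver's directions from Fig. 3, rendered unit by unit (my own
# simplified gloss, not an established transcription convention).
utterance = [
    ("INITIATE + PO-PLANE + PTC-GRIP", "activate contact space; hold map-like plane"),
    ("press(you) + press(me) + MC-PRESS", "'We are here' (establishes the origo)"),
    ("MC-TRACE(square) + MC-TAP + fs-TABLE", "'This [is the] table'"),
    ("MC-TRACE(corner) + MC-TAP(corner)", "narrows search to the upper right corner"),
    ("MC-TAP(rapid, just left of corner)", "narrows search further"),
    ("that-where + me + put-there", "'I put [the block] there'"),
]
for units, gloss in utterance:
    print(f"{units:40} {gloss}")
```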
This type of engagement system enables the addressee to navigate an environment autonomously, without physical guidance from another person. Prior to the protactile movement, there was no way to tell a DeafBlind person where the bathroom was, or how to get to the kitchen. This meant that someone (usually a sighted person) had to act as a guide, which increased dependence and reduced autonomy (Granda and Nuccio, 2018). It also led to a kind of passive orientation to the physical environment, which restricted tactile exploration and ultimately, tactile modes of existence. DeafBlind protactile theorist Clark (2017) describes this sense of unnecessary restriction as it applies broadly in social life:
Despite the many barriers we encounter in society, we can gain much awareness about the world around us. But when we go exploring or when we just exist, sighted and hearing people rush in to intervene. Can they help us? Please don’t touch. They will be happy to describe it to us. They will guide us. No, they will get it for us. It’s much easier that way.
Clark is focusing here on normative, ideological commitments to “distance”, or what he calls “distantism”. Against this backdrop, protactile engagement systems that direct attention in the immediate environment crystallize and enshrine routine acts of resistance among DeafBlind people. Each time protactile directions are given, interpreted, and acted on, the contours of a tactile world that can be known without sighted intervention are subversively re-inscribed.
4. POINTING IN AIR SPACE: IT’S A STRETCH
In this section, I ask how a referential task similar to the one represented above is accomplished by DeafBlind people who have not acquired protactile language, and are therefore doing intersubjective work without the benefit of the specialized grammatical resources described in Section 2. The participants in this portion of the study were evaluated by DeafBlind protactile experts on our research team as being “tactile ASL signers,” meaning they communicate by receiving ASL through touch, and they have not acquired protactile language. They also self-reported that they were not “protactile”. In these interactions, reference was in many cases resolved, despite the absence of a functional engagement system, by bypassing language altogether.
For example, in the following interaction, two DeafBlind men, Tom (left) and Eli (right) are standing on a mat with edges that are detectable through the sole of a shoe. They are facing one another, with their hands in contact. There is a small, round, hip-height table, where three pens have been placed. A DeafBlind member of the research team placed the pens there, directed Tom to them, and asked him to explain to Eli what the pens feel like, and where the pens are so that Eli can find them. Once the researcher is gone, Tom begins to move his feet around and he says, hesitantly, to Eli, “Sit?”. Eli responds by pointing (in air space) to the mat below his feet. Tom then shuffles his right foot over several times, tracing the edge of the mat, while leaving his left foot planted. The table where the pens are hits Tom’s hip several times as he moves. Once Tom has completed the exploration, he says, in ASL “Oh I see”, suggesting that he is now oriented. These activities seem to establish a structured tactile space for Tom, within which the interaction can unfold.
However, Eli has stayed put in a single location and has not explored his environment along with Tom. Therefore, it is unclear whether this environmental structure is shared between them. Eli turns his palms up as if to say, “Well? Ready?” and Tom begins his description of the pens. He isn’t able to get the description out entirely, and there are repeated false starts along the way. The overall impression one gets from watching his description is that he can’t think of how to describe the pens. Then, as shown in Fig. 4a, Tom tells Eli where the pens are by pointing in air space in the direction of the pens. He pauses and moves his feet back and forth a few times, appearing to grimace. He seems unsure of how to direct Eli’s attention to the pens. After this long, grimacing pause (Fig. 4b), he says “touch” in ASL, and then guides Eli’s hand to the pens (Fig. 4c). In the end, they converge on the location of the pens; however, they do so by literally stretching the ASL demonstrative through air space, all the way to the referent. At this point, the pointing function of the sign has been replaced by a guiding function. Regarding engagement systems, we return here to our initial question: So what? If the same kind of intersubjective coordination can be accomplished without the benefit of protactile grammatical structures, why should those of us interested in intersubjectivity care about grammar, as such, at all? Isn’t it just one of many available semiotic resources? What I want to propose is that a satisfactory approach to that question must foreground historical and socio-politically embedded instances of language-use, and ask instead: For whom, and in what circumstances, does intersubjective grammar make a difference? In the following section, I argue that in a particular historical moment in the Seattle DeafBlind community, an emerging deictic system played a crucial role in converting DeafBlind people to a new way of being in the world, and in this sense was significant.
[[Figure 4. Ineffective visual pointing leads to “guiding”. (a) visual pointing: Tom uses both hands to point to the pens on the table while Eli looks at Tom’s face. (b) long pause: Tom looks off to the side with a grimace and holds his hands together in an anticipatory pose while Eli looks at Tom’s face. (c) guiding: Tom guides Eli to the pens on the table with their A1 and A4 joined.]]
5. CONVERSION MOMENTS
The data I analyze in this section were collected at a protactile workshop held in 2010 and 2011 in Seattle, just as the movement was beginning to gain ground. The DeafBlind instructors were actively trying to convert members of their community to the new protactile way of being. In the unfolding of interaction, deixis was often at the center of this conversion process. For example, in Fig. 5, Walter, a DeafBlind man who had recently moved to Seattle and was new to protactile practices, is drinking a soda on a break from the workshop. He runs into Adrijana, one of the instructors, and strikes up a conversation. While chit-chatting, he mentions that Victor, one of the sighted videographers, is “over there in the middle of the room, filming”. He produces the ASL signs VICTOR, MIDDLE, and then he points, as in Fig. 5a. In Fig. 5b, Adrijana looks in the general direction that Walter is pointing in, pauses, and squints. However, she fails to locate Victor. In Fig. 5c, Walter responds to her silence by pointing again, this time with his arm extending further toward the referent. Next, Adrijana says, “You see Victor? I don’t see anything.” (not pictured). In Fig. 5, Walter is pointing in air space, which means that Adrijana can only perceive the finger itself, and the small trajectory created by moving the finger toward the location of the referent. The demonstrative point in Fig. 5 is not easy for Adrijana to perceive, as evidenced by her need to feel around the pointing handshape with both hands (not pictured), as well as her response in the subsequent interactional sequence (discussed below).
[[Figure 5. Failure to resolve reference leads to modified visual pointing. (a) Walter points using air space to an area behind Adrijana while Adrijana uses both her hands to touch Walter’s pointing handshape. (b) Walter looks down while Adrijana squints toward the direction where Walter pointed. (c) Walter points again with his arm extended further out while Adrijana still squints in the general direction of the point.]]
Beyond this, though, there is a problem regarding the channels in the environment to which the point is meant to articulate. Sighted people navigate environments via various kinds of channels—systems of roads, sidewalks, tunnels, sight-lines, and so on. To direct attention within an environment, a sense of these channels, where they go, and how you know when you’ve found one, must be shared by both parties in the interaction. Walter’s point was not only ambiguous as a sign against an imperceptible background. It was also articulated to an environment organized by visual channels. Sight-lines that go from one side of the room to the other proceed through an environment which, without visual access, lacks structure entirely (Fig. 6a). For a DeafBlind person, pointing out into unstructured air space works insofar as there are sighted people (such as interpreters) around who can patch it all together. For two DeafBlind people in their own environment, though, a different approach is needed.
For these reasons, a protactile person is not likely to set off into unstructured space. Instead, they would proceed around the edge of the room, following the orienting line where the wall meets the floor, or the “shoreline” (Fig. 6b). Following shoreline after shoreline, a certain feeling for tactile relations develops, which extends out beyond the individual, as part of their experience of the world. In order to direct attention in a way that articulates to tactile channels, the pointing sign itself has to be perceptible, but so do the channels in the environment to which the pointing sign articulates. In a successful act of demonstrative reference, the intersubjective, or “shared”, world and the means of representing that world should align. Air space is inadmissible for both because, for protactile people, it lacks affordances for communication, both in and about the world.
[[Figure 6. Visual vs. tactile navigation channels. (a) visual navigation: There is a room with two doors facing one another, one on the far right wall and the other on the far left. One person stands in front of the door on the left and another stands in front of the door on the right. They are able to use a “sight-line” to see each other. If the person on the left wanted to walk to the person on the right, they could follow the sight-line. (b) tactile navigation: The same room is shown with the same people; however, this time a shoreline is shown. If the person on the left wanted to walk to the person on the right, they would need to follow the walls until they reached the person on the right.]]
After Walter and Adrijana fail to individuate and locate Victor, Adrijana tells Walter, “Hang on.” She puts down something she was holding and takes hold of Walter’s hand so it is facing palm up. While holding his hand in place, she says “you” by pressing a finger into his chest (Fig. 7a), then “me” by pressing the same finger into her own chest (Fig. 7b). Then, in Fig. 7c, she presses her finger into a location on Walter’s upturned palm. Together, this means, “We are here,” and establishes an origo, or zero-point, in relation to which Victor can be localized. In this case, the signer, Adrijana, does not actually know where Victor is; she is just giving Walter an example of how pointing in contact space works. Note that in order to interpret Adrijana’s utterance effectively and produce one like it, Walter would end up not only producing and receiving signs in a more tactile way, but also uncovering tactile navigation pathways in his environment. The two go hand in hand. In moments like these, both parties to the interaction must be in the tactile world to which the deictic values articulate (i.e. conventional MCs such as press, trace, tap, and grip, produced within a rule-governed PC structure). This kind of prospective orientation of one’s way of being in the world to the categories and relations encoded in engagement systems is what I elsewhere call Being for Speaking (Edwards, in preparation).
[[Figure 7. Protactile Pointing. (a) you: Adrijana uses her A1 to press her finger into Walter’s chest. (b) me: Adrijana uses her A1 to press her finger into her own chest. (c) here: Adrijana holds Walter’s A2 palm-up with her A3, she uses her A1 to press on Walter’s palm. (d) Victor: Adrijana signs Victor’s name sign, a V handshape on the chin. (e) there: Adrijana uses her A1 to press on another spot of Walter’s A2 palm.]]
My point of departure is the idea that our thoughts are prospectively oriented toward acts of speaking, which is what Dan Slobin calls “thinking for speaking”. Slobin was building on Roman Jakobson, who drew attention to the fact that grammar has requirements for which aspects of experience must be expressed. Therefore, Slobin writes: “[w]hatever else language may do in human thought and action, it surely directs us to attend—while speaking—to the dimensions of experience that are enshrined in grammatical categories” (Slobin, 1996:71). Being for speaking builds on that idea, drawing attention to acts of speaking that lead people not only to attend to one thing or another, but to be one thing or another.
*Slobin’s “Thinking for Speaking” is part of a much larger and heterogeneous body of work that falls under the heading of “linguistic relativity”. This body of work brings with it many debates that I do not want to wade into in this article, and therefore have not cited. Still, the reader may wonder about the relationship of this work to recent interventions that have looked explicitly at the relationship between linguistic resources and “social action”, such as Sidnell and Enfield (2012). While a more thorough discussion is outside the scope of this article, one crucial difference between my proposal and theirs (which has significant conceptual and methodological consequences) is that for Sidnell and Enfield (2012), interactional moves and instances of language-use can constitute, in and of themselves, a kind of “social action”, even if they tend toward reproduction of the current social order. In the argument presented here, interactional moves and instances of language-use are taken as signs of social action insofar as their effects have some demonstrable impact on the current social order, i.e. the privileging of vision over touch, and therefore visual over tactile modes of existence. They do not count, in and of themselves, as social action.*
In the Seattle DeafBlind community, there were two key historical moments which brought about new options for being DeafBlind (see Edwards, 2014). In the 1970s, a company that merged social services and manufacturing, called the Seattle Lighthouse for the Blind, established a DeafBlind employment program. This drew DeafBlind people from across the country. Meanwhile, an interpreter training program was established, which prepared students to work with DeafBlind people. These two institutions converged in ways that made ubiquitous mediation possible. In meetings, at social events, on vacation, at the grocery store, and elsewhere, each DeafBlind individual was paired with an interpreter. This situation yielded ways of being DeafBlind based on the kinds of interpreting accommodations each person required, and by the end of the 1990s, a range of possibilities had settled into one primary opposition: You could be “tunnel vision”, which meant that you communicated visually, but the message was modified as necessary to accommodate a shrinking field of vision. Or you could be “tactile”, which meant that you would receive ASL via touch. Note that this is not a transition to “contact space” or “protactile language”. It is a way to accommodate deteriorating vision by receiving messages meant to be seen through touch (not unlike lip-reading to perceive a spoken language visually).
As the protactile movement took root, choices for how one could be DeafBlind shifted. Rather than choosing between being tactile or tunnel vision, both of which involved compensatory strategies for accessing visual phenomena, the new choice was between being protactile or not-protactile. The interaction represented in Fig. 7 unfolded at the beginning of that shift, when Adrijana and other DeafBlind leaders were doing all they could to convert members of their community to protactile ways of being DeafBlind. Therefore, this seemingly simple correction of Walter’s referential behavior is actually a request to embrace a new way of being in the world. This connection between deictic reference and protactile conversion was made more or less explicit during those workshops on many occasions. For example, in another encounter during the same workshop series, Adrijana says:
Adrijana: I’m going to explain protactile philosophy to you. I’m not going to preach. It’s going to be a discussion between the two of us. So let’s say that I come up to you, and I start explaining: ‘There’s a table over there, and there’s a door further over there.’ Do you understand me?
DB Participant: Yes.
Adrijana: No you don’t.
DB Participant: You said that there is a wall over there [points] and a door over there [points], right?
Adrijana: No, the door is over there [points].
DB Participant: Well, whatever.
Adrijana: Yeah, but that’s exactly it. It’s important. When people point like that to direct you, and you’re standing in the middle of the room, you’re totally lost. Right? [student nods]. You’re sitting here, and it might seem clear for a minute, but when you stand up and try to find the things I just located for you, the directions won’t seem to match the environment and you’ll be confused. Deaf [sighted] people do that—they point to places, but that’s not clear.
DB Participant: Well, yeah. That’s visual information.
Adrijana: Right. But it has to be adapted to be protactile. So instead of pointing, we have to teach them to do this...
To direct her student to the door, Adrijana produced a pointing sign foreign to ASL. Instead of extending a finger out into space along a visual trajectory, she took the student’s hand, and turned it over so the palm was facing up. Just as in Fig. 7, she held it in place with her left hand from underneath. Then, with her right hand, she located herself and her interlocutor by pressing a finger into the upturned palm to mean “here”. Then, she touched her finger first to her interlocutor’s chest (meaning “you”) and then she touched her own chest to mean, “me”. This sequence can be glossed, “here, you, me,” and the translation would be, “We are here”. This is a representation of the origo. From there, Adrijana establishes the relative location of the door. First, she presses the thumb of her left hand into the location she has associated with “here”, and keeps it pressed down. Then, she traces a path from “here” to the door. Finally, she presses once in the location associated with the door, to mean “the door is here in relation to us”.
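For comparison with the block-finding sequence in Section 3, Adrijana’s directions to the door can be rendered in the same schematic notation (again, my own simplified gloss, not an established convention); note the path locative running from the origo to the endpoint.

```python
# Adrijana's directions to the door, rendered unit by unit (my own
# simplified gloss, not an established transcription convention).
utterance = [
    ("hold student's palm up + PO", "activate contact space on the student's palm"),
    ("MC-PRESS(here) + press(you) + press(me)", "'We are here' (origo)"),
    ("press-and-hold(thumb at origo)", "keep the zero-point active"),
    ("MC-TRACE(path from origo)", "path locative: route from 'here' to the door"),
    ("MC-PRESS(endpoint)", "'the door is here in relation to us'"),
]
for units, gloss in utterance:
    print(f"{units:42} {gloss}")
```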
In order to receive and interpret Adrijana’s directions, her student is once again faced with a choice: Will she stick with the old ways of being DeafBlind, or will she give herself over to the new ways? When people are becoming blind slowly, they adapt, little by little. However, every time an addressee is put in a position to resolve reference in this way, a kind of pressure is exerted. The slow, gradual process of becoming blind becomes a switch. Either the structure of the environment snaps to tactile coordinates or it snaps to visual coordinates, and each of those grids comes with a whole system of norms and values, internalized as a way of moving through, and directing attention to, the world. At some level, people choose how they want to be. But if someone gives you directions to the door in protactile language, you have to commit, in that moment, to being protactile in order to interpret the instructions—hence, “being for speaking”, where one’s way of being in the world is oriented to the categories and relations encoded in the language being spoken. It is important to note that the options available to DeafBlind people in any one speech situation or interaction are sociohistorical products. However, when DeafBlind leaders wanted to convert people from one way of being to another, intersubjective grammar, instantiated in particular kinds of institutional and pedagogical interactions, played a key role.
6. THE DIFFERENCE INTERSUBJECTIVE GRAMMAR MAKES
In this article, I have shown how the intersubjective alignments required to individuate objects of reference can be accomplished by DeafBlind people via both linguistic and non-linguistic means. Protactile DeafBlind people identified the location of an object on a nearby table using an emerging deictic system, while non-protactile DeafBlind people did the same by physically guiding their interlocutor to its location. However, I have shown that, given the broader socio-historical context of the Seattle DeafBlind community, having intersubjective grammar made a difference: it reinforced new ways of being DeafBlind and imposed a choice between old and new ways of being at a critical political juncture. Protactile leaders were aware of this, which is why, in moments when they wanted to convert a member of their community to the new way of being, they often employed and thematized deictic reference.
This is one way engagement systems can matter. However, the significance of engagement systems will of course vary across contexts. In order to account for this, typological work should (continue to) be conducted in the context of, or paired with, deep ethnographic inquiry that includes analyses of interaction and language use, embedded in local patterns of activity over an extended period of time. Such a project carries with it certain methodological entailments, since you can’t simply ask people, “How does this pair of auxiliaries reinforce your way of being in the world?”, or “What sociohistorical forces were most important in shaping your emergent deictic system?” Instead, the analyst must work backward from the observable effects of attachments like these in everyday life and interaction. While any ethnographic context would likely generate relevant insights, communities where new languages are emerging, or where partially conventionalized communication systems are in use, promise to be particularly productive (e.g. Abner et al., 2019; Brentari and Goldin-Meadow, 2017; Coppola and Senghas, 2010; de Vos and Pfau, 2015; Lutzenberger et al., 2021; Richie et al., 2014; Meir et al., 2010).
For example, in The Nature of Signs (2014), Mara Green examines interactions in Nepal between hearing speakers of Nepali, Deaf signers of the highly conventionalized national sign language (Nepali Sign Language, or “NSL”), and users of “natural sign”, a partially conventionalized communicative repertoire (p. 1). Over the course of 25 months of fieldwork, Green tracks instances of “partial understanding, mis-understanding and non-understanding” (p. 140) and observes that what those instances have in common is the absence, in natural sign, of grammatical resources for expressing mood. The following passage includes two such cases (p. 141):
[M]ultiple times when Shrila told Bhola and me that her daughter-in-law had thrown Shrila’s notebook in the toilet and then showed up the following day with said notebook. I wonder if the daughter-in-law had actually threatened to throw the notebook away (e.g. by holding the notebook near the toilet), or had signed that she would throw it away (e.g. by pointing to the toilet and to the notebook and making a throwing sign), and if in fact Shrila was actually telling us that, and we misinterpreted her. Similarly, Bhola and I once understood a deaf signer (who I’m not naming due to the sensitive nature of the comment) to be claiming that a close relative was pregnant by a man who was not her husband. Later she criticized the woman’s inappropriate behaviors without mentioning the pregnancy. In retrospect, I wonder whether the signer was saying that the relative could get pregnant or that she (the signer) was worried about such an eventuality or that she (the signer) had told the relative that she (the relative) might get pregnant.
Green reports many similar cases where intersubjective alignments falter, and argues that “what unites these examples is the relationship of the action (above, hitting, getting arrested, and below, throwing away a notebook, getting pregnant, accepting food, vomiting) to ‘reality’ in the linguistic sense. Actions,” she writes, “do not merely occur. They may be threatened, portended, possible, impossible, likely, desired, feared, or about to happen” (p. 141). One of several possibilities Green puts forth to explain the observed pattern is the absence of conventional grammatical resources for making distinctions like these via markers of mood or modality (p. 141). Importantly, such failures to achieve mutual understanding lead to asymmetrical and unfavorable outcomes for the person associated with natural sign. This is because other people in the situation do not attribute misunderstandings to the sparseness of natural sign as a system. Instead, they assign negative social attributes directly to the natural signer, rendering them “unintelligible”, “unreliable”, or even casting them as “liars” (Green, 2014b:11).
This analysis highlights precisely what is at stake in having or not having such resources. In Nepal, during the historical moment when Green conducted fieldwork, intersubjective grammar mattered for natural signers because its absence was blamed on them, thereby aligning them with forms of personhood that were devalued, stigmatized and viewed (by some) as not worth trying to communicate with.
* Grammatical mood is not a prototypical example of an engagement system, as defined by Evans et al. (2018). However, along with evidentiality, miratives, and focus, mood and modality are identified as “neighboring” systems, and the boundaries are not always clear-cut. The definition they use for “engagement” is “a grammatical system for encoding the relative accessibility of an entity or state of affairs to the speaker and addressee” (2018:118). In the examples given by Green (2014b), it is a matter of “access”, where the thing being accessed is a relation between (1) a description of an event and (2) the “reality” of the event, or the way in which the speaker experiences it (i.e. as a memory, inference, prediction, or actuality). Because that relation does not correspond for speaker and addressee, conflicting expectations arise regarding the consequences of the reported event. Something similar happens at the “entity” level (as opposed to the “event” level) when a person points in order to depict an act of pointing on the narrative plane, so that the referent must be retrieved from an imagined space for which the space in front of the speaker stands in (a frequent occurrence in ASL discourse). If the addressee mistakenly interprets such a point as an instruction to search for a referent in the immediate environment (something I have observed in ASL discourse), reference is not resolved: the point is taken as a prompt to search the actual environment, while the intended referent lies in the imagined one.*
Many analyses of new, emerging, or partially conventionalized languages take social factors into account. However, those factors are often treated as static qualities attributable to the language community as a whole, or to sub-groups within it. For example, the typological features of the language are correlated with the ratio of hearing to deaf signers, the presence or absence of an institutional context, the size of the community, the presence or absence of intergenerational transmission, and so on (e.g. see Le Guen et al., 2020:9 for a discussion). In contrast, the argument presented here (like Green’s) treats sociality as a process, within which the effectiveness or significance of particular grammatical systems becomes ethnographically graspable as those systems emerge (or not) in historical time. I hope that, in analyzing the effectiveness and significance of a new deictic system in protactile language, I have demonstrated the utility of such an approach for understanding how intersubjective grammar matters, for whom, and in what circumstances.
ACKNOWLEDGEMENTS
Funding for ethnographic aspects of this research was provided by the Wenner-Gren Foundation (Grants #8110 and #9146). Funding for linguistic aspects of this work was provided by the National Science Foundation (BCS-1651100). Support for the writing phase was provided by the Saint Louis University Research Institute and the Andrew W. Mellon Fund. Thank you to Nick Evans, Alan Rumsey, and the participants of the Centre of Excellence for the Dynamics of Language at the Australian National University, who commented on earlier versions of this work; to Jelica B. Nuccio and John Lee Clark for their ongoing and influential engagement; and to the DeafBlind people who participated in this research. Finally, thank you to two anonymous reviewers, who made helpful suggestions for this and future work.
References
Abner, N., Flaherty, M., Stangl, K., Coppola, M., Brentari, D., Goldin-Meadow, S., 2019. The Noun-Verb Distinction in Established and Emergent Sign Systems. Language 95, 230–267.
Barker, M., Nakassis, C.V., 2020. Images: An Introduction. Semiotic Rev. 9.
Battison, R., 1978. Lexical Borrowing in American Sign Language. Linstok Press, Silver Spring, MD.
Brentari, D., Goldin-Meadow, S., 2017. Language Emergence. Annual Rev. Linguist. 3, 363–388.
Bühler, K., 2001. Theory of Language: The Representational Function of Language. John Benjamins, Amsterdam, Philadelphia, PA.
Checchetto, A., Geraci, C., Cecchetto, C., Zucchi, S., 2018. The language instinct in extreme circumstances: The transition to tactile Italian Sign Language (LISt) by Deafblind signers. Glossa: A J. General Linguist. 3, 1–28. https://doi.org/10.5334/gjgl.357.
Clark, J.L., 2017. Distantism. URL: https://johnleeclark.tumblr.com/.
Clark, J.L., 2019. Tactile Art. Poetry Magazine (Online). URL: https://www.poetryfoundation.org/poetrymagazine/articles/150914/tactile-art.
Clark, J.L., Nuccio, J.B., 2020. Protactile Linguistics: Discussing recent research findings. J. Am. Sign Languages Literat. URL: https://journalofasl.com/protactile-linguistics/.
Collins, S., 2004. Adverbial Morphemes in Tactile American Sign Language. Ph.D. thesis. Graduate College of Union Institute and University.
Collins, S., Petronio, K., 1998. What Happens in Tactile ASL? In: Lucas, C. (Ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Gallaudet University Press, Washington, DC, pp. 18–37.
Cooperrider, K., Slotta, J., Núñez, R., 2016. Uphill and Downhill in a Flat World: The Conceptual Topography of the Yupno House. Cognitive Sci. 41, 768–799.
Coppola, M., Senghas, A., 2010. Deixis in an emerging sign Language. In: Brentari, D. (Ed.), Sign Languages: A Cambridge Language Survey. Cambridge University Press, Cambridge, pp. 543–569.
Desclès, J.P., 2009. Prise en charge, engagement et désengagement. Langue française 162, 29–53. https://doi.org/10.3917/lf.162.0029.
de Vos, C., Pfau, R., 2015. Sign Language Typology: The Contribution of Rural Sign Languages. Annual Rev. Linguist. 1, 265–288. https://doi.org/10.1146/annurev-linguist-030514-124958.
Diessel, H., Coventry, K.R., 2020. Demonstratives in Spatial Language and Social Interaction: An Interdisciplinary Review. Front. Psychol. 11, 555265.
DuBois, J., 2007. The Stance Triangle. In: Englebretson, R. (Ed.), Stancetaking in Discourse. Benjamins, Amsterdam, pp. 139–182.
Dudis, P., Hochgesang, J.A., Shaw, E., Villanueva, M., 2020. Introduction to Motivated Look at Indicating Verbs in ASL (MoLo) Project. URL: https://osf.io/h8gk4/.
Edwards, T., 2014. Language Emergence in the Seattle DeafBlind Community. Ph.D. thesis. University of California, Berkeley.
Edwards, T., 2015. Bridging the Gap Between DeafBlind Minds: Interactional and social foundations of intention attribution in the Seattle DeafBlind community. Front. Psychol. (Language Sci. Section) 6.
Edwards, T., 2017. Sign Creation in the Seattle DeafBlind Community: A Triumphant Story about the Regeneration of Obviousness. Gesture 16 (2), 304–327.
Edwards, T., Brentari, D., 2020. Feeling Phonology: The emergence of tactile phonological patterns in protactile communities in the United States. Language 96 (4), 819–840.
Edwards, T., Brentari, D., 2021. The Grammatical Incorporation of Demonstratives in an Emerging Tactile Language. Front. Psychol. 11, 579992.
Enfield, N., Sidnell, J., 2014. Language Presupposes an Enchronic Infrastructure for Social Interaction. In: Dor, D., Knight, C., Lewis, J. (Eds.), Social Origins of Language: Studies in the Evolution of Language. Oxford University Press, Oxford, pp. 92–104.
Evans, N., 2003. Context, Culture, and Structuration in the Languages of Australia. Annual Rev. Anthropol. 32, 13–40. https://doi.org/10.1146/annurev.anthro.32.061002.093137.
Evans, N., Bergqvist, H., Roque, L.S., 2018. The grammar of engagement I: framework and initial exemplification. Language Cognition 10, 110–140.
Fenlon, J., Cooperrider, K., Keane, J., Brentari, D., Goldin-Meadow, S., 2019. Comparing sign language and gesture: insights from pointing. Glossa: A J. General Linguist. 4, 1–26. https://doi.org/10.5334/gjgl.499/.
Forker, D., 2020. Elevation as a grammatical and semantic category of demonstratives. Front. Psychol. https://doi.org/10.3389/fpsyg.2020.01712.
Friedner, M., Helmreich, S., 2012. Sound Studies Meets Deaf Studies. Senses Soc. 7, 72–86.
Gershon, I., 2017. Language and the Newness of Media. Annual Rev. Anthropol. 46, 15–31. https://doi.org/10.1146/annurev-anthro-102116-041300.
Goldin-Meadow, S., Brentari, D., 2017. Gesture, sign and language: The coming of age of sign language and gesture studies. Behav. Brain Sci. 40. https://doi.org/10.1017/S0140525X15001247.
Goodwin, C., 2007. Environmentally Coupled Gestures. In: Duncan, S., Cassell, J., Levy, E. (Eds.), Gesture and the Dynamic Dimensions of Language. John Benjamins, Amsterdam Philadelphia.
Granda, A., Nuccio, J., 2018. Protactile Principles. Tactile Communications. URL: https://www.tactilecommunications.org/ProTactilePrinciples.
Green, E.M., 2014a. Building the Tower of Babel: International Sign, linguistic commensuration, and moral orientation. Language Soc. 43, 445–465.
Green, E.M., 2014b. The Nature of Signs: Nepal’s Deaf Society, Local Sign, and the Production of Communicative Sociality. Ph.D. thesis. University of California, Berkeley.
Hanks, W.F., 1990. Referential Practice: language and lived space among the Maya. University of Chicago Press, Chicago.
Harkness, N., 2014. Songs of Seoul: An Ethnography of Voice and Voicing in Christian South Korea. University of California Press, Berkeley.
Haviland, J.B., 2014. The emerging grammar of nouns in a first generation sign language: specification, iconicity, and syntax. Gesture 13, 309–353.
Heritage, J., 2012. Epistemics in action: action formation and territories of knowledge. Res. Language Social Interact. 45, 1–29.
Hochgesang, J., Crasborn, O., Lillo-Martin, D., 2018. ASL Signbank. URL: https://aslsignbank.haskins.yale.edu/.
Hull, M., 2012. Government of Paper: The Materiality of Bureaucracy in Urban Pakistan. University of California Press, Berkeley.
Hyland, K., 2005. Stance and engagement: a model of interaction in academic discourse. Discourse Stud. 7, 173–192. https://doi.org/10.1177/1461445605050365.
Inoue, M., 2004. What Does Language Remember?: Indexical Inversion and the Naturalized History of Japanese Women. J. Linguistic Anthropol. 14, 39–56.
Iwasaki, S., Bartlett, M., Manns, H., Willoughby, L., 2018. The challenges of multimodality and multisensorality: Methodological issues in analyzing tactile signed interaction. J. Pragmatics 143, 215–227. https://doi.org/10.1016/j.pragma.2018.05.003.
Keating, E., Mirus, G., 2003. American Sign Language in virtual space: interactions between deaf users of computer-mediated video communication and the impact of technology on language practices. Language Soc. 32, 693–714.
Kockelman, P., 2010. Enemies, Parasites, and Noise: How to take up residence in a system without becoming a term in it. J. Linguistic Anthropol. 20, 406–421.
Kusters, A., 2017. Gesture-based customer interactions: deaf and hearing Mumbaikars’ multimodal and metrolingual practices. Int. J. Multilingual. 14, 283–302. https://doi.org/10.1080/14790718.2017.1315811.
Kusters, A., 2020. The tipping point: On the use of signs from American Sign Language in International Sign. Language Commun. 75, 51–68.
Landaburu, J., 2007. La modalisation du savoir en langue andoke (Amazonie colombienne). In: L’énonciation médiatisée II. Le traitement épistémologique de l’information. Peeters, Louvain, pp. 23–47.
Larkin, B., 2013. The Politics and Poetics of Infrastructure. Annual Rev. Anthropol. 42, 327–343.
Le Guen, O., Safar, J., Coppola, M. (Eds.), 2020. Emerging Sign Languages of the Americas. Sign Language Typology Series, vol. 9. De Gruyter Mouton, Boston.
Lemon, A., 2018. Technologies for Intuition. University of California Press, Oakland.
Lutzenberger, H., de Vos, C., Crasborn, O., Fikkert, P., 2021. Formal variation in the Kata Kolok lexicon. Glossa: A J. General Linguist. 6. https://doi.org/10.16995/glossa.5880.
McMillen, S.K., 2015. Is Protactile Habitable at Gallaudet University: What does it take? Ph.D. thesis. Gallaudet University, Washington, DC.
Meir, I., Sandler, W., Padden, C., Aronoff, M., 2010. Emerging sign languages. In: Oxford Handbook of Deaf Studies, Language, and Education, vol. 2, pp. 267–280.
Mesch, J., 2001. Tactile Sign Language: Turn Taking and Questions in Signed Conversations of Deaf-blind People. Signum, Hamburg.
Mesch, J., 2013. Tactile signing with one-handed perception. Sign Language Stud. 13, 238–263. https://doi.org/10.1353/sls.2013.0005.
Mesch, J., Raanes, E., Ferrara, L., 2015. Co-forming real space blends in tactile signed language dialogues. Cognitive Linguist. 26. https://doi.org/10.1515/cog-2014-0066.
Mesh, K., Hou, L., 2020. Negation in San Juan Quiahije Chatino Sign Language. Gesture 17, 330–374. https://doi.org/10.1075/gest.18017.mes.
Murphy, K., 2005. Collaborative imagining: the interactive uses of gestures, talk, and graphic representation in architectural practice. Semiotica 156, 113–145.
Núñez, R., Sweetser, E., 2006. With the future behind them: convergent evidence from language and gesture in the cross-linguistic comparison of spatial construals of time. Cognitive Sci. 30, 401–450.
Petronio, K., Dively, V., 2006. YES, #NO, Visibility, and Variation in ASL and Tactile ASL. Sign Language Stud. 7.
Quinto-Pozos, D., 2002. Deictic Points in the Visual-Gestural and Tactile-Gestural Modalities. In: Meier, R.P., Cormier, K., Quinto-Pozos, D. (Eds.), Modality and Structure in Signed and Spoken Languages. Cambridge University Press, Cambridge, pp. 442–467.
Reed, C.M., Delhorne, L.A., Durlach, N.I., Fischer, S.D., 1995. A study of the tactual reception of Sign Language. J. Speech Hear. Res. 38.
Richie, R., Yang, C., Coppola, M., 2014. Modeling the Emergence of Lexicons in Homesign Systems. Topics Cognitive Sci. 6, 183–195. https://doi.org/10.1111/tops.12076.
Rumsey, A., 2014. Language and Human Sociality. In: Enfield, N., Kockelman, P., Sidnell, J. (Eds.), The Cambridge Handbook of Linguistic Anthropology. Cambridge University Press, Cambridge.
Russell, K., 2020. Facing Another: The Attenuation of Contact as Space in Dhofar, Oman. Signs Soc. 8, 290–318.
Shankar, S., Cavanaugh, J.R., 2012. Language and Materiality in Global Capitalism. Annual Rev. Anthropol. 41, 355–369. https://doi.org/10.1146/annurev-anthro-092611-145811.
Shaw, E., 2019. Gesture in Multiparty Interaction. Gallaudet University Press, Washington, DC.
Sicoli, M., 2016. Repair organization in Chinantec whistled speech. Language 92, 411–432.
Sidnell, J., Enfield, N.J., 2012. Language Diversity and Social Action: A Third Locus of Linguistic Relativity. Current Anthropol. 53, 302–333.
Slobin, D.I., 1996. From ‘Thought and Language’ to ‘Thinking for Speaking’. In: Gumperz, J.J., Levinson, S.C. (Eds.), Rethinking Linguistic Relativity. Cambridge University Press, Cambridge, pp. 70–96.
Streeck, J., 2015. Embodiment in Human Communication. Annual Rev. Anthropol. 44, 419–438. https://doi.org/10.1146/annurev-anthro-102214-014045.
Willoughby, L., Iwasaki, S., Bartlett, M., Manns, H., 2018. Tactile sign languages. In: Östman, J.O., Verschueren, J. (Eds.), Handbook of Pragmatics, vol. 21. Benjamins, pp. 239–258.