Protactile Research Network
Chapter 5: Being for Speaking
In the previous chapter, I described how the protactile movement emerged in the Seattle DeafBlind community and how it exerted pressure on the organization of interaction and language-use. I traced the emergence of new communication conventions in particular social and institutional contexts, and I showed how, through pedagogy and politics, a new and more tactile world began to take shape. Implicit in this story are some important questions about the way a person uses language, on the one hand, and the way the world appears to the speaker of that language, on the other. While there are many different ways of thinking about the complex interaction of language and the world, in this chapter, my point of departure is the idea that our thoughts are prospectively oriented toward acts of speaking—a process known as “thinking for speaking” (Slobin 1996). Scholars who have analyzed thinking for speaking across languages have noted that each language has a grammar and each grammar has requirements for which aspects of experience must be expressed. Therefore, “[w]hatever else language may do in human thought and action, it surely directs us to attend—while speaking—to the dimensions of experience that are enshrined in grammatical categories” (Slobin 1996: 71).
In this chapter, I extend this line of inquiry to argue that pressures exerted during the process of language-use lead protactile DeafBlind people not only to attend to their environment in particular ways, but to be in their environment in particular ways. This is most evident when they are engaged in a special kind of language-use called “deictic reference.” As I explained in Chapter 1, deictic reference is a kind of “pointing,” carried out using terms like I and you, here and there, this and that, which are unusual when compared to other kinds of words, because in order to interpret them, two values must be retrieved—one from the linguistic system and the other from the immediate environment. If someone says, “house,” you have some sense of what they mean, whether or not there is a house before you. In contrast, if someone says “I,” requirements for interpretation are more complex. First, you must know that “I” means “the person currently speaking.” Second, you must be able to locate the person in the speech situation who is speaking at the time the word “I” is uttered. For example, if I were to ask a DeafBlind person, “Should we sit there or there?,” I must (a) know the linguistic form conventionally associated with the concept there as well as any alternate concepts I could have chosen, but didn’t (e.g. here); (b) be able to be here in much the same way you are; and (c) identify a pathway or relation from here to there for us. While something like this is required for anyone speaking deictically (Hanks 2009), the present ethnographic context highlights the fact that we must commit to being one way or another before we can refer to, or talk about, anything at all. In other words, historically and culturally given possibilities for how we can be are prospectively oriented to acts of speaking—hence: being for speaking.
In order to arrive at a more thorough understanding of being for speaking and its consequences, I begin by reviewing some of the ideas central to thinking for speaking (Section 5.1). In Sections 5.2 and 5.3, I review the institutional histories that generated options for how DeafBlind people in Seattle could be before and after the protactile movement. These sections focus on a historical moment when DeafBlind people were faced with a choice between old ways of being DeafBlind and new ways of being DeafBlind. In Section 5.4, I argue that because grammatical structure encodes social choices like this, and because it has a special capacity to repetitively impose those choices on its speakers, language can act as a catalyst, pushing people toward one way of being or another.
5.1 Thinking for Speaking
Does the language you speak influence your perceptions of reality? This is a question that has occupied the minds of cultural anthropologists since the inception of the field. Franz Boas, a foundational thinker in this tradition, encountered this question as he described the difficulty hearing people have in perceiving sounds when they occur in a novel or unfamiliar context. He called this “sound-blindness” and argued that it is a pervasive condition of spoken-language communication (Boas 1889: 47–49). Sound-blindness is the negative product of many years of learning how to produce and hear the sounds in a particular language. When a speaker produces those sounds, the actual positions of the speech organs are not the same each time, due to various idiosyncratic circumstances (a dry mouth, a loud environment, allergies), and this means that individual sounds are not the same across instances of use. How, then, does the hearer recognize sounds produced by different people in different contexts?
Boas says that the hearer can only recognize a sound because she has heard a similar sound before and judges it to be more similar to that sound than to some other sound she has previously heard. In Boas’s words, if a sound is understood as similar to one that has been heard before, “the difference between the two stimuli will be so small that it does not exceed the differential threshold” (Boas 1889: 48–49). And he clarifies further: “It will be understood that I do not mean to say that such sensations are not recognized in their individuality, but they are classified according to their similarity, and the classification is made according to known sensations” (ibid.: 50). For example, in learning a new language it is likely that mistakes will be made due to a misapplication of the categories of the native language. Anyone who has attempted to learn a second language will be familiar with the problem: Your language teacher pronounces a word in the language you are learning and asks you to repeat it. You repeat what you think is the same word, but your teacher shakes his head and asks you to try again. He is hearing something that you are not hearing.
William Stokoe (1960) and several generations of linguists since him have demonstrated that the same is true of visual languages. For example, in ASL, the signer can produce the verb “to see” in a range of locations on the face. However, if the location slips below some threshold, it becomes the verb “to smoke.” Distinguishing between the two is, in part, a matter of determining whether, on a given occasion, the sign in question is closer to what the signer has previously interpreted as “see” or “smoke.” In moments like these, it becomes clear that we do not hear or see the sounds or gestures of language as straightforward, physical stimuli. Instead, we compare them to sounds and gestures we have encountered before, and as Boas argued, we recognize them as falling within a minimum threshold of sameness or difference. Language, then, involves categories and relations that are imposed on the physical phenomena of vocal and manual gesture. If we think of these “raw materials” as being part of a language-external reality, the particular language we speak renders certain dimensions of that reality perceptible and others imperceptible. In other words, the language we speak influences our perception of reality.
Edward Sapir, a student of Boas, extended this idea further. He argued that each form in a language has a corresponding “feeling,” which derives from its relation to other forms in the same language (Sapir 1995 [1934]: 155). For example, in ASL there is a sign that is often glossed “worse.” At first glance, this word has the same meaning as the English word “worse.” However, the following use of the term “worse” is possible in ASL and not in English: “Joe is a pretty good photographer, but Julie is worse good.” This is because the meaning of the ASL word “worse” overlaps with the meaning of the corresponding English term. However, unlike the English term, it derives its value, in part, from its formal similarity to the ASL word “multiply.” In its association with “multiply,” the ASL word “worse” can be positive or negative (it just means more), while the English term is always negative. Sapir would say that the meanings of the ASL and English terms are overlapping, but their “form-feeling” is different. This difference does not derive from states of affairs in the world but from the relation of one sign to another within the same language.
According to Sapir, words are always caught up in relations like this, so even though two words in two different languages might refer to the same object, their form-feeling will always differ. These differences build up, so that users of a language orient to objects in the world through distinct sets of “form-feeling coordinates.” These coordinates, through habitual use, lead to a certain “feeling for relations.” This relational intuition begins in speaking a language, but it extends further, with use, to constrain conceptualization and organize sense-perception. For Sapir, then, the study of language is, to some degree, the study of the way the world appears to the language-user.
Building on the work of Boas, Sapir, and others working in the same tradition, psychologist and linguist Dan Slobin (1996) proposed a shift from the static concepts of “thought” and “language” to the dynamic concepts of “thinking” and “speaking.” Instead of an atemporal snapshot view of the linguistic system and the reality outside of it, Slobin focuses on the moment of speaking, as it unfolds in time. He argues that in formulating our utterances in that moment, we engage in a kind of thought that is prospectively oriented to the grammatical resources available in the specific language we are speaking. Thought can take many forms, but, Slobin writes, “we encounter the contents of the mind in a special way when they are being accessed for use” (76). In other words, the kind of thinking that takes place in the activity of language-use involves selecting aspects of experience that can readily be conceptualized and coded in the language we are speaking.
Scholars of language and gesture have since pointed out, however, that language is not the only resource available for the expression of thought (McNeill and Duncan 2000). They have demonstrated that the gestures hearing people produce while they are speaking, or “co-speech gesture,” are systematically synchronized with speech such that language and gesture must be considered “co-expressive” (p. 2). This tightly integrated pairing of language and gesture enables speakers to conceptualize and formulate their thoughts in terms of both the “categorical” requirements of language, and the “imagistic” possibilities of gesture. For example, in describing an event, one must decide whether the event has been completed or is ongoing if the language being spoken at that moment has a verbal affix for each meaning, since one or the other must be chosen. This kind of choice is characteristic of language as a semiotic system. According to McNeill and Duncan (2000), gesture is different from language in that it does not present the speaker with such choices. Instead, it offers a kind of synthetic glue, which helps unite linguistic elements in a larger semiotic expression, which, as a whole, shares important characteristics with the represented objects (pp. 3–4). In this view, speech and gesture are not redundant nor is one a “translation” of the other. Rather, the minimal processing unit for the expression of thought is a synthesis of the two: it is “imagistic-categorial” in nature (p. 7).
In this chapter, I argue that speaking deictically, more than any other kind of speaking, works to integrate residence and representation because it links language to the world, as it is habitually grasped by its speakers. Consider a deictic expression like “this one” in English. In producing this expression, the speaker’s intentional status is filtered through the expressive possibilities of the deictic system of English. The meaning of the word this derives from the convention that in English, this is not that. Because the contrast between the two is highly schematic, a gesture, such as pointing, would also be required to pick out one among many possible referents. However, producing “this one” (and the gesture that goes with it) in a way that is interpretable requires more than language and gesture, for at least two reasons: First, in order to individuate an object of reference in the immediate environment, it has to be there. Second, we must be here in much the same way to presuppose a pathway or relation from here to there for us.
5.2 Ways of Being DeafBlind
Recall that when I say “way of being,” I am drawing on Paul Kockelman’s theory of the “residential whole.” Building on the philosophy of Martin Heidegger, Kockelman argues that residing in the world involves a chain of activity that starts with interpreting the environment in terms of “affordances.” “Instruments” with particular affordances are wielded to perform actions. If certain actions are performed routinely, a role is taken on, and taking on certain roles habitually leads to a way of being. There are conventional associations involved at each step in that process, so it is not as if you can be whoever you like. The options for how we can be are historically and culturally given, and yet we are also active interpreters of our own being. According to Heidegger, that is what sets us apart as human—that our being is, as he says, an “issue” for us. He doesn’t mean that we have an explicit awareness of our being. He emphasizes that our being is an issue for us, in a vague and everyday way. It is something we acquire unwittingly as part of socialization and it operates at the threshold of conscious awareness, so to us and to others, it feels like: That’s just who I am.
Although it wasn’t Heidegger’s focus, one’s way of being continues to develop through the life course and can do so in rapidly changing conditions, such that a break in transmission or development occurs, and a new way of being is needed and made possible. As I have mentioned in previous chapters, most of the DeafBlind people in this book were born Deaf and slowly became blind over the course of several decades. They were socialized in Deaf communities and learned a visual language, but usually around adolescence they began the slow process of becoming blind. They ended up in adulthood in a world that their socialization had not prepared them for, having to find some way to be.
I am not claiming that situations like this are unusual. Adolescents elsewhere transition to adulthood under conditions of rapid historical change, where their options for how to be are in flux and there is a sense of newness to their trajectory. Sometimes political or economic systems collapse, for example, and with them the structures of authority that ground practice. In that case, people of all ages have to find new ways of being what they used to be, what they planned to be, or else forge some other path by interpreting their environment in new ways—seeing new affordances in the things around them. At that point, possibilities for action shift, a reconfiguration of social roles is triggered, and new ways of being can emerge.
Over the years I have spent conducting research in DeafBlind communities, I have watched many people arrive at a crossroads, where it is clear to them and to those around them that they need to find a new way of being; but from the 1970s to now the options available to them once they have that realization have shifted dramatically. When people in the 1970s were told they would go blind, they couldn’t imagine how life could go on at all. No one explained to them what they could expect or how they might cope. When methods of coping with blindness were recommended, they were often unappealing. For example, two DeafBlind sisters reportedly sought advice from a prominent Deaf teacher in the 1970s, when they were teens and just starting to become blind. He told them that once they were blind, they would have to sign in a smaller and smaller space to accommodate their shrinking tunnel of vision, and at the end they would have to switch to fingerspelling. He said that sign language would no longer be a possibility once they were blind.
Given projections like this, it was difficult to imagine how life would be possible at all.
Growing up as a Deaf child with “vision problems” meant being picked on by other kids, being called clumsy, and being treated as not smart or not capable because of misunderstandings surrounding vision. Blindness was what made you not a good athlete, not a graceful person, not smart, but it was not clear, in a positive sense, what life might be like as a “blind Deaf person.” Against this background, Seattle appeared as a place with hope for a collective future and energy for building it. Blindness was not stigmatized the same way that it was in the broader Deaf community. There were recognizable social roles to be inhabited and people to hang out with. Seattle became a rare and viable alternative to many of the effects of blindness, though not exactly as a place where blindness could be embraced. Counter-intuitively, cultivating a “DeafBlind” identity led not to a shared world suited to tactile experience but rather to services and social roles that would keep impending blindness at bay (Chapter 3).
5.3 New Ways of Being “Tactile”
In the 20 years after the DeafBlind Service Center (DBSC) was established (Chapter 2), a few key events transpired that led to new tactile ways of being. While an in-depth analysis is provided in Chapter 4, key points are as follows: First, the national standards for certifying sign language interpreters changed in 2005. Instead of requiring a two-year associate’s degree, they were now requiring a bachelor’s degree. As a result, the interpreter training program at Seattle Central Community College closed and nothing replaced it. Almost immediately the shortage of interpreters was felt, and the situation worsened quickly. Second, Adrijana was hired as the first ever DeafBlind director of DBSC in that same year, 2005. Recall that at that point in the history of the community, DeafBlind people could be “tactile,” which meant they communicated by touching the hands of the person who was signing, or they could be “tunnel-vision,” which meant they communicated visually, through a restricted channel. Up until this point, it was the tunnel-vision people who were offered the best jobs, were the first to know of any news or gossip, and were invited to all of the best parties and events. Tunnel-vision people were closer to the center of things, and the center of things was sighted. The better one was at approximating sighted norms, the more access one had. However, Adrijana was a tactile person, and she responded to the shortage of interpreters from a tactile perspective.
Third, Adrijana hired more tactile DeafBlind people than any previous director had, so there were groups of tactile people routinely working together. At the time, tactile people communicated with each other through interpreters, so when they needed to have a meeting among themselves, they had two options. They could wait for several weeks for an interpreter to be available (and wait times were always getting longer), or they could have their meetings without interpreters and try to communicate as a group, directly. Adrijana and her staff chose the latter option, which meant they had to find new ways to communicate. They figured things out as they went and they didn’t realize how much their communication practices had changed until people from the Lighthouse visited and the DBSC staff found that “they didn’t know how to communicate.” After a period of confusion, Adrijana and her team concluded that the DBSC staff had “gone tactile” while the Lighthouse workers had not, and this was the root of the problem. Once they identified that difference, they created a politics around it. They called what they were doing “protactile” and what the Lighthouse was doing “not protactile.” This distinction went far beyond communication. They argued, in the broadest terms, that to be protactile is to act on the assumption that hearing and vision are totally unnecessary for life. All human activity can be realized via touch. They made these assertions, but they didn’t actually know how protactile walking, cooking, eating, or communicating would work. So as a kind of experiment Adrijana and DBSC’s education specialist, Lee, started organizing DeafBlind-only events. They argued that DeafBlind people have stronger intuitions about touch than sighted people do, but their intuitions had been buried by sighted socialization. The first step, then, was to get rid of the interpreters and try to do things together.
They organized classes where one DeafBlind person would teach others how to use a saw, or how to make a milkshake, and without interpreters, communication had to be direct. At first, the idea of getting rid of sighted mediators was unpopular, but Adrijana and Lee pushed. When people announced that they were going back to the old system because they had been touched in a way they felt was inappropriate, they were told: Do you think when a sighted person gets a dirty look, they give up on vision altogether? When people said they were overstimulated by all of the touching, they were told that their response was an effect of social isolation and they should fight through it. One by one, they converted the members of their community like that, and then their interpreters, their families, and their friends. They called their effort the “protactile movement”(Chapter 4).
As the protactile movement gained ground, DeafBlind people encountered new choices. Rather than choosing between being tactile and being tunnel-vision, one now chose to be protactile or not-protactile. Status accrued to all things protactile, which meant that those who embraced the protactile way of being had greater access to social networks, information, employment, and other valuable resources. Awareness of this shift spread more quickly than the practices themselves, so people started claiming they were protactile, but this only got you so far. At some point, adopting the label would not be enough. You would have to know how to be protactile. One place where this tension surfaced was in language-use, and in particular moments when referents in the immediate environment were singled out using special linguistic resources tailored to the task.
5.4 Being for Speaking
By the time the protactile movement started to take root in the mid-2000s, I was already away at graduate school. I returned during summer and winter breaks. On one of those visits, I saw something unusual. An interpreter, walking with a DeafBlind person, was describing something, and as part of her description she was pointing. The DeafBlind person interpreting the description cut her off mid-sentence and told her that the way she was pointing was incorrect. She then modeled a new kind of pointing (the “correct” kind) which involved incorporating the other person’s body into the expression (as I describe below). Several things about this encounter were unusual. First, the force and confidence with which the DeafBlind person intervened and the decisiveness with which they evaluated one practice over another as correct; second, the way the interpreter accepted the intervention without question; and not least of all this new way of pointing, which was unlike anything I had ever seen. In retrospect, I recognize this as an early sign that DeafBlind people were taking up residence in the world in new and more tactile ways, and new affordances in their environment were being discovered. From there, they started replacing sighted people as the experts on tactile communication, and as a result communication started to make a lot more sense.
Prior to the protactile movement, sighted people were the experts. It was common for DeafBlind people to pretend that they understood sighted people’s descriptions—maybe to avoid derailing the interaction, or maybe to avoid becoming a “difficult DeafBlind person” whom interpreters didn’t want to work with. As DeafBlind leaders started training members of their community, they emphasized the importance of DeafBlind people being the ones to decide what was and wasn’t clear. To do that, they often turned to activities involving pointing, such as direction-giving, for which comprehension could easily be verified (either you understood my directions to the door and could locate it or you didn’t). The strategies that DeafBlind people had relied on for keeping up appearances were thereby challenged and an alternative (one that was actually effective) was proposed.
Before the protactile movement, pointing involved extending a finger toward the referent, along a visual pathway, just as one would expect in ASL. In protactile workshops, this type of pointing was proven ineffective and deemed inappropriate by the instructors, Adrijana and Lee. “Protactile philosophy” became a way of legitimizing new practices as they were emerging. For example, in the following exchange Adrijana demonstrates to her student that he can’t resolve reference using ASL pointing signs and she explains that this failure is predictable from the perspective of protactile philosophy:
ADRIJANA: I’m going to explain PT philosophy to you. I’m not going to preach. It’s going to be a discussion between the two of us. So let’s say that I come up to you, and I start explaining: “There’s a table over there, and there’s a door further over there.” Do you understand me?
DB PARTICIPANT: Yes.
ADRIJANA: No you don’t.
DB PARTICIPANT: You said that there is a wall over there [points] and a door over there [points] right?
ADRIJANA: No, the door is over there [points].
DB PARTICIPANT: Well, whatever.
ADRIJANA: Yeah, but that’s exactly it. It’s important. When people point like that to direct you, and you’re standing in the middle of the room, you’re totally lost. Right? [DB participant nods]. You’re sitting here, and it might seem clear for a minute, but when you stand up and try to find the things I just located for you, the directions won’t seem to match the environment and you’ll be confused. Deaf [sighted] people do that—they point to places, but that’s not clear.
DB PARTICIPANT: Well, yeah. That’s visual information.
ADRIJANA: Right. But it has to be adapted to be protactile. So instead of pointing, we have to teach them to do this. . . .
To direct her DeafBlind interlocutor to the door, Adrijana produced an expression foreign to ASL. Instead of extending a finger out into space along a visual trajectory, Adrijana took the DB participant’s hand and turned it over so the palm was facing up. She held it in place with her left hand from underneath. Then, with her right hand, she located herself and her interlocutor by pressing a finger into the upturned palm to mean “here.” Then, she touched her finger first to her interlocutor’s chest (meaning “you”) and touched her own chest to mean, “me.” This sequence can be glossed, “here, you, me,” and the translation would be, “You and I are here.”
This is a representation of the ground against which something in the environment is singled out. Once Adrijana established this as “our” location (i.e. her and her student), by pressing on her student’s palm, she could then locate the door relative to that location. First, she presses the thumb of her left hand into the location she has associated with “here,” and keeps it pressed down. Then, she traces a path from “here” to the door. Finally, she presses once in the location associated with the door, to mean “the door is here in relation to us.” As the deictic system of protactile language emerged, a systematic contrast developed across speakers between “press” and “trace,” where the former represents a discrete location in space, and the latter represents a path (Edwards 2015). This is a “locative” set, or a set of terms that provides different ways of describing locations. In this two-term set, contrast is based on the presence or absence of a path. This is a highly salient dimension of proprioceptive experience across contexts, since one knows, without any visual or sonic input, whether one is experiencing movement along some path or not. For that reason, this contrast is a good candidate for incorporation into the deictic system of the language.
This schematic set of linguistic meanings, and others like it, are made specific when a term in the set is instantiated, or used, just as “I” (the person speaking) is made specific when it is associated with the person speaking in the speech situation. The facts of the situation, as they appear in the interaction, accrue to schematic distinctions and are anticipated by language, but they are not in language. In the example given above, the linguistic system provides a simple and relatively abstract contrast: ± path movement, where + path movement = “trace” and − path movement = “press.” This contrast is then applied in a specific interaction between two protactile people. The specific path that is traced is not supplied by the language.
For a visual person standing in the middle of a room, the walls, the floor, and the ceiling have affordances for navigation. We can see where the door is relative to us because there is a floor, a ceiling, and walls—all of which give us a sense of orientation. From here to the door is a straight line that follows our sightlines. For a DeafBlind person who has gone tactile, the first move would be to find some orienting, tactile structure instead. If there is only one texture, such as carpet, on the floor, the floor itself is not helpful. It would therefore be necessary to seek out a place where two textures or structures come together, such as the wall and the floor. The place where the wall meets the floor constitutes an orienting line, sometimes called a “shoreline.” Protactile deictics anticipate these patterns in how affordances are interpreted for purposes of navigation and project tactile motor lines, not sightlines.
This system emerged when protactile people started communicating directly with each other and interpreters were not around to sort out misunderstandings. In the following exchange, for example, Adrijana is with another student. They are on a break from the workshop, and the student mentions that one of the sighted videographers, Victor, is nearby. Using an ASL pointing sign, he points in the direction where he thinks Victor is. Adrijana responds by saying, “You see Victor? I don’t see anything.” Then the student tries, unsuccessfully, to clarify. Adrijana appears irritated. She puts down the bottle of water she has been drinking and prepares to intervene in a way that became familiar to me as I analyzed videos from the protactile workshops. I began to think of these moments as more than mere corrections. They seemed to be treated as a test to determine if the person would choose an old way of being DeafBlind or if they would be open instead to a new, protactile way of being. In this case, Adrijana explains that he needs to locate Victor in the protactile way and she demonstrates. She takes his dominant hand and turns it palm-up. Then she presses her finger into her student’s chest and then her own chest to mean “you and me.” The student can feel her pointing to her own chest because he has a “listening hand” attached to her articulating hand. Then she presses on the palm of his other hand to mean “here.” Finally she says “Victor” and presses on several places on the palm, followed by a question marker to mean: “Which of these locations is it? Where is Victor in relation to us?”
In order to interpret deictic expressions like these and respond in a way the teacher will accept, the student has to be protactile. He has to inhabit his environment in a more tactile way, and that shift is thrust upon him. It isn’t a matter of personal preference or a step in his personal process of becoming blind. The language requires him to be protactile in that moment. When people are becoming blind slowly, they adapt slowly, bit by bit. However, every time a referential situation like this unfolds, a kind of pressure is exerted that takes a slow gradual process and turns it into a switch. You are either prompted to cash in on the visual affordances in your environment or you are prompted to cash in on tactile affordances, and each of those choices comes with a cascade of consequences for who you are and who you are taken to be. At some level, people choose. But if someone gives you directions to the door in protactile language, you have to commit, in that moment, to being protactile just to interpret the instructions. This is what I am calling “being for speaking,” where one’s way of being in the world is structured by categories and relations encoded in the language being spoken.
I am not claiming that new ways of being DeafBlind emerge moment to moment in the unfolding of specific interactions. As I have argued, the options DeafBlind people have at their disposal, in any one speech situation, are socio-historical products. In the 1980s, prior to the protactile movement, the options were very different than they are now, and those differences have been shaped in part by the convergence of institutional histories at the Seattle Lighthouse for the Blind and Seattle Central Community College. These institutions together generated conditions that made it possible for DeafBlind people to avoid contact with each other. Without that, it would have been difficult for visual ways of being to persist as long as they did. New ways of being emerged when there were no longer enough interpreters to maintain that system, and, crucially, when actors in key positions of authority were tactile DeafBlind people. In situations where DeafBlind leaders want to convert people to new ways of being DeafBlind, deictic reference plays a key role.
While acts of deictic reference make these requirements impossible to ignore, Adrijana and Lee drew their students’ attention to this consistently across all sorts of contexts, even those where language-use was not the primary activity. For example, in a lull between activities in the protactile workshops, Lee was trying to teach a small group of students, but she kept getting interrupted by people asking her questions. After the third interruption, there was some confusion among her students. They couldn’t understand what was going on. One person in the group responded to the confusion by reverting to vision. He leaned back and started looking around with his eyes, while the rest of his body was still (and therefore, from a tactile perspective, disengaged). Lee tapped him to get his attention and said:
"Don’t just stand there, passively, and look around. You have to be actively seeking information through touch. If someone isn’t interpreting information, and you’re just standing there, knowing you are missing out on information, you have to do something about it. Does PT mean that everyone has to wear a blindfold and never use their eyes for anything? Not at all! The point is to always communicate through touch. If you’re just standing back getting information visually, it means you don’t respect PT."
For Lee, “respecting PT” meant getting information in ways that generate information for others. This yields, in Goffman’s terms, a “situation,” which he defined as “a space of mutual monitoring possibilities” (1964: 135). To “be here” means, minimally, monitoring the situation in ways that could be monitored by others, and more specifically, in ways that presuppose tactile modes of access. If a person is just standing there, perfectly still (apart from an undetectable back and forth of the eyes), they are essentially exiting the situation. The point, Lee explains, is not to deny one’s biological capacity to see (if one has remaining sight). It is to actively enter into a space of mutual monitoring possibilities by choosing channels that can be presupposed across the group. A few moments after this interaction, perhaps feeling bad for the pointed correction, Lee shifts to a more understanding tone and says, “I know, I completely get it. We are telling you a lot about what protactile is, and you are getting it, intellectually, but it takes a while for it to go from something you are thinking about to something you are.” This intervention suggests that once the channels that can be presupposed across the group are internalized, reaching one’s hands out to gather tactile information in a moment of interactional confusion will become as natural as glancing across the room in response to a visual disturbance.
The level at which Lee and Adrijana intervened left space for their students to uncover new affordances in the environment that had previously gone undiscovered. They were insisting that everyone “be here,” not that they adopt this or that specific practice or “technique.” Almost immediately, though, Lee and Adrijana’s interventions were mistaken for more rigid attachments. For example, at one point, Lee was teaching two new students that they should give the speaker tactile feedback (such as tapping on or squeezing their thigh). In one-on-one conversations, the students were picking it up, but when there were three people, they faltered. In one such case, Lee reminded her students to give tactile feedback. They asked her exactly what they were supposed to do with their hands. They asked, “What is the ‘right way’ to do it?” Lee responded, “It doesn’t matter. I’m not trying to give you specific rules to follow. It’s just the principle—it’s important for the person talking to feel the feedback. Exactly how you do that is up to you.” The emphasis is not on doing things in a particular way because you are in a protactile environment, i.e. being appropriate to context, but on creating a context that would make all kinds of actions legible and effective. At the most basic level, this involved simply being there.
One of the most fundamental structures that effectively yielded a sense of being there was a particular configuration for two-person interactions. Prior to the protactile movement, utterances were conveyed from the hands of the speaker to the hands of the addressee, and these were usually the only parts of their bodies in contact. Just weeks into the workshops, a new contact surface became conventional, which greatly expanded the number and types of available channels. Instead of the hands being the only point of contact, speaker and addressee sat with their faces just a few inches from one another, legs touching on one side, on the outer thighs. This increase in proximity and surface area meant that behaviors could be observed, recognized, and typified via thermal, motoric, olfactory, proprioceptive, and touch-based channels. It turns out that some people heat up when they are exerting effort or are experiencing emotional strain, while others do not. You can tell what kind of soap they use to wash their clothes, whether they have a dog, and what kind of foods they cook at home. If one were to exit the situation (to silently move the eyes around in their sockets, for example), those channels, and all of the information they carry, would retract or grow thin, and the existence of the other would be attenuated.
Recall that the word “I” cannot be interpreted until the person speaking has been located in the immediate environment. This raises an important question: Is there a minimal threshold of existence for being a speaker? Can the speaking “I” be merely a set of disembodied hands, floating around in air space? Even if your answer to this question is yes, consider the fact that a speaker is only a speaker in relation to an addressee. If the legs are continuously pressed together from the beginning to the end of the encounter, the thigh of the speaker is readily accessible to the addressee’s hand for sending signals that they are listening and engaged. Without any way to register the fact that you are being addressed, can you really be an addressee? If there is no addressee, how can there be a speaker? Perhaps this explains Adrijana and Lee’s intuition to start with co-presence. Do whatever it takes to be here together. From there, a wealth of affordances will be revealed for actions of all kinds.
Given this approach, new roles quickly became available. There were new ways of participating in conversation, giving or attending lectures, workshops, and dinner parties. There were new ways of playing and watching games, and if one wanted to observe some other activity, protactile people had intuitions about how that might be done. For example, during the workshops, Adrijana wanted to teach participants how to make macramé sleeves for bottles, mugs, and other household objects (instead of “boring” Braille labels). While she manipulated long strands of twine, her students would stand behind her, their arms and hands placed on top of her arms and hands, so they could track every movement, while also feeling the effects on the twine. In order to make that feasible, the students had to press their chests against their teacher’s back, resting their chin on her shoulder. Elsewhere, that kind of contact would only be appropriate in the context of an intimate relationship, but, here, that was the structure that effectively incorporated and contextualized the relevant role-relation, and therefore it was quickly and widely adopted. All of this structure depended on the ability to be a speaker and an addressee.
From within those structures, signs of attention, agreement, boredom, interest, annoyance, and confusion came through loud and clear. This meant that in speaking and being spoken to in the context of a particular activity, one could, for example, be annoying, a good student, a keen observer, or a boring person. The primary roles of speaker and addressee were incorporated into and contextualized by a wide range of participation frameworks (which are discussed below). Within those frameworks, more specific and contingent roles emerged as patterns of behavior were consistently observable and therefore typifiable (Hanks 1990; Irvine 1996). In an interview, Lee explained how this process was set in motion as reliance on sighted people was reduced:
"If an object is in front of a DeafBlind person, an interpreter is very likely going to explain the object to them. [. . .] The more DeafBlind people are in contact with other DeafBlind people, the more tactile things will become. The more tactile things become, the more DeafBlind people will demand that kind of thing from interpreters. For example, the DeafBlind person touches the object and then asks the interpreter a bunch of questions about it. That’s so much better than the other way around. So really there is a reversal of information—where it originates. Sighted people make less decisions about what counts as information, so there is less chance for them to impose their visual perspective."
Again, this goes back to the two basic requirements Lee and Adrijana insisted upon. First, DeafBlind people must know how to be co-present. Second, DeafBlind people decide together what counts as relevant or worthwhile information, which is a process that must be ratified by the members of the group over time. Together, these requirements guarantee that representations of the world will always be grounded in tactile ways of being in the world. Deictic reference is a productive activity for generating, reinforcing, and testing those connections.
5.5 Conclusion
In this chapter, I have argued that the protactile movement introduced new options for how one could be DeafBlind. In the early stages of the movement, when ways of being were in flux, new and emerging linguistic systems in protactile language, and in particular the deictic system, played a crucial role by encoding social choices and then recycling and re-imposing those choices moment to moment, day to day, at the periphery of awareness. There is a subtle relentlessness to this that can push people down a path they might otherwise take later, more slowly, or not at all. Adrijana and Lee developed a reflexive awareness of this and they used it to propagate a social movement. In particular, when they wanted to convert a member of their community to a protactile way of being, they often employed deictic reference. This, more than any other form of language-use, forced their interlocutors to be protactile.
Since then, the deictic system of protactile language has become a repository for regularities in navigation, interaction, and communication, and it demands that values be retrieved from that order. For example, two different combinations of movement and contact, “tap” and “press,” systematically invoke different dimensions of setting (Edwards 2015). “Tap” is a demonstrative. It singles out a referent against a horizon of other, possible referents (like the English word “this”). “Press,” in contrast, is a locative. It identifies a location (like the English word “here”), against a horizon of other, possible locations. There are further contrasts within each category. For locatives, “press” prepares the addressee for a discrete location, while “trace” prepares them for a path. For demonstratives, a trilled “tap” prepares the addressee for a cognitively foregrounded object, while “grip” prepares them for a cognitively backgrounded object. Exposed to this system of contrasts routinely, the addressee becomes sensitive to subtle differences in tactile stimuli in the transmission of linguistic signals in much the same way that in learning a tonal language, one becomes attuned to differences in tone. They also hone sensibilities about how the environment itself is likely to be interpreted. For example, “trace,” on its own, includes only the highly schematic concepts: contact (with a surface) and movement (along a path). Knowing how to apply those relatively abstract meanings in ways that can be operationalized by one’s addressee requires corresponding ways of routinely interacting with environmental structures, such as surfaces and paths that can be used for standing and walking.
In the next chapter, I follow the protactile movement to Gallaudet University in Washington, D.C., where architecture and infrastructure, in the context of urban development, became the focus. At Gallaudet, connections between the structure of the environment and the structure of language were made explicit as an integral part of protactile politics. The challenge was finding a way to be protactile in spaces that were not, and had never been, “for” DeafBlind people. While Seattle was home to DBSC—an organization run by and for DeafBlind people—most places where the protactile movement gained ground, including Gallaudet, had no such institution. At Gallaudet, this problem was addressed by “laminating” protactile environments onto “Deaf Space.”
In order to arrive at a more thorough understanding of being for speaking and its consequences, I begin by reviewing some of the ideas central to thinking for speaking (Section 5.1). In Sections 5.2 and 5.3, I review the institutional histories that generated options for how DeafBlind people in Seattle could be before and after the protactile movement. These sections focus on a historical moment when DeafBlind people were faced with a choice between old ways of being DeafBlind and new ways of being DeafBlind. In Section 5.4, I argue that because grammatical structure encodes social choices like this, and because it has a special capacity to repetitively impose those choices on its speakers, language can act as a catalyst, pushing people toward one way of being or another.
5.1 Thinking for Speaking
Does the language you speak influence your perceptions of reality? This is a question that has occupied the minds of cultural anthropologists since the inception of the field. Franz Boas, a foundational thinker in this tradition, encountered this question as he described the difficulty hearing people have in perceiving sounds when they occur in a novel or unfamiliar context. He called this “sound-blindness” and argued that it is a pervasive condition of spoken-language communication (Boas 1889: 47–49). Sound-blindness is the negative product of many years of learning how to produce and hear the sounds in a particular language. When a speaker produces those sounds, the actual positions of the speech organs are not the same each time, due to various idiosyncratic circumstances (a dry mouth, a loud environment, allergies), and this means that individual sounds are not the same across instances of use. How, then, does the hearer recognize sounds produced by different people in different contexts?
Boas says that the hearer can only recognize a sound because she has heard a similar sound before and judges it to be more similar to that sound than to some other sound she has previously heard. In Boas’s words, if a sound is understood as similar to one that has been heard before, “the difference between the two stimuli will be so small that it does not exceed the differential threshold” (Boas 1889: 48–49). And he clarifies further: “It will be understood that I do not mean to say that such sensations are not recognized in their individuality, but they are classified according to their similarity, and the classification is made according to known sensations” (ibid.: 50). For example, in learning a new language it is likely that mistakes will be made due to a misapplication of the categories of the native language. Anyone who has attempted to learn a second language will be familiar with the problem: Your language teacher pronounces a word in the language you are learning and asks you to repeat it. You repeat what you think is the same word, but your teacher shakes his head and asks you to try again. He is hearing something that you are not hearing.
William Stokoe (1960) and several generations of linguists since him have demonstrated that the same is true of visual languages. For example, in ASL, the signer can produce the verb “to see” in a range of locations on the face. However, if the location slips below some threshold, it becomes the verb “to smoke.” Distinguishing between the two is, in part, a matter of determining whether, on a given occasion, the sign in question is closer to what the signer has previously interpreted as “see” or “smoke.” In moments like these, it becomes clear that we do not hear or see the sounds or gestures of language as straightforward, physical stimuli. Instead, we compare them to sounds and gestures we have encountered before, and as Boas argued, we recognize them as falling within a minimum threshold of sameness or difference. Language, then, involves categories and relations that are imposed on the physical phenomena of vocal and manual gesture. If we think of these “raw materials” as being part of a language-external reality, the particular language we speak renders certain dimensions of that reality perceptible and others imperceptible. In other words, the language we speak influences our perception of reality.
Edward Sapir, a student of Boas, extended this idea further. He argued that each form in a language has a corresponding “feeling,” which derives from its relation to other forms in the same language (Sapir 1995 [1934]: 155). For example, in ASL there is a sign that is often glossed “worse.” At first glance, this word has the same meaning as the English word “worse.” However, the following use of the term “worse” is possible in ASL and not in English: “Joe is a pretty good photographer, but Julie is worse good.” This is because the meaning of the ASL word “worse” overlaps with the meaning of the corresponding English term. However, unlike the English term, it derives its value, in part, from its formal similarity to the ASL word “multiply.” In its association with “multiply,” the ASL word “worse” can be positive or negative (it just means more), while the English term is always negative. Sapir would say that the meaning of the ASL and English terms are overlapping, but their “form-feeling” is different. This difference does not derive from states of affairs in the world but from the relation of one sign to another within the same language.
According to Sapir, words are always caught up in relations like this, so even though two words in two different languages might refer to the same object, their form-feeling will always differ. These differences build up, so that users of a language orient to objects in the world through distinct sets of “form- feeling coordinates.” These coordinates, through habitual use, lead to a certain “feeling for relations.” This relational intuition begins in speaking a language, but it extends further, with use, to constrain conceptualization and organize sense-perception. For Sapir, then, the study of language is, to some degree, the study of the way the world appears to the language-user.
Building on the work of Boas, Sapir, and others working in the same tradition, psychologist and linguist Dan Slobin (1996) proposed a shift from the static concepts of “thought” and “language” to the dynamic concepts of “thinking” and “speaking.” Instead of an atemporal snapshot view of the linguistic system and the reality outside of it, Slobin focuses on the moment of speaking, as it unfolds in time. He argues that in formulating our utterances in that moment, we engage in a kind of thought that is prospectively oriented to the grammatical resources available in the specific language we are speaking. Thought can take many forms, but, Slobin writes, “we encounter the contents of the mind in a special way when they are being accessed for use” (76). In other words, the kind of thinking that takes place in the activity of language-use involves selecting aspects of experience that can readily be conceptualized and coded in the language we are speaking.
Scholars of language and gesture have since pointed out, however, that language is not the only resource available for the expression of thought (McNeill and Duncan 2000). They have demonstrated that the gestures hearing people produce while they are speaking, or “co-speech gesture,” are systematically synchronized with speech such that language and gesture must be considered “co-expressive” (p. 2). This tightly integrated pairing of language and gesture enables speakers to conceptualize and formulate their thoughts in terms of both the “categorical” requirements of language, and the “imagistic” possibilities of gesture. For example, in describing an event, one must decide if the event has been completed or is ongoing if the language being spoken at that moment has a verbal affix for each meaning and one or the other must be chosen. This kind of choice is characteristic of language as a semiotic system. According to McNeill and Duncan (2000), gesture is different from language in that it does not present the speaker with such choices. Instead, it offers a kind of synthetic glue, which helps unite linguistic elements in a larger semiotic expression, which, as a whole, shares important characteristics with the represented objects (pp. 3–4). In this view, speech and gesture are not redundant, nor is one a “translation” of the other. Rather, the minimal processing unit for the expression of thought is a synthesis of the two: it is “imagistic-categorial” in nature (p. 7).
In this chapter, I argue that speaking deictically, more than any other kind of speaking, works to integrate residence and representation because it links language to the world, as it is habitually grasped by its speakers. Consider a deictic expression like “this one” in English. In producing this expression, the speaker’s intentional status is filtered through the expressive possibilities of the deictic system of English. The meaning of the word this derives from the convention that in English, this is not that. Because the contrast between the two is highly schematic, a gesture, such as pointing, would also be required to pick out one among many possible referents. However, producing “this one” (and the gesture that goes with it) in a way that is interpretable requires more than language and gesture, for at least two reasons: First, in order to individuate an object of reference in the immediate environment, it has to be there. Second, we must be here in much the same way to presuppose a pathway or relation from here to there for us.
5.2 Ways of Being DeafBlind
Recall that when I say “way of being,” I am drawing on Paul Kockelman’s theory of the “residential whole.” Building on the philosophy of Martin Heidegger, Kockelman argues that residing in the world involves a chain of activity that starts with interpreting the environment in terms of “affordances.” “Instruments” with particular affordances are wielded to perform actions. If certain actions are performed routinely, a role is taken on, and taking on certain roles habitually leads to a way of being. There are conventional associations involved at each step in that process, so it is not as if you can be whoever you like. The options for how we can be are historically and culturally given, and yet we are also active interpreters of our own being. According to Heidegger, that is what sets us apart as human—that our being is, as he says, an “issue” for us. He doesn’t mean that we have an explicit awareness of our being. He emphasizes that our being is an issue for us, in a vague and everyday way. It is something we acquire unwittingly as part of socialization and it operates at the threshold of conscious awareness, so to us and to others, it feels like: That’s just who I am.
Although it wasn’t Heidegger’s focus, one’s way of being continues to develop through the life course and can do so in rapidly changing conditions, such that a break in transmission or development occurs, and a new way of being is needed and made possible. As I have mentioned in previous chapters, most of the DeafBlind people in this book were born Deaf and slowly became blind over the course of several decades. They were socialized in Deaf communities and learned a visual language, but usually around adolescence they began the slow process of becoming blind. They ended up in adulthood in a world that their socialization had not prepared them for, having to find some way to be.
I am not claiming that situations like this are unusual. Adolescents elsewhere transition to adulthood under conditions of rapid historical change, where their options for how to be are in flux and there is a sense of newness to their trajectory. Sometimes political or economic systems collapse, for example, and with them the structures of authority that ground practice. In that case, people of all ages have to find new ways of being what they used to be, what they planned to be, or else forge some other path by interpreting their environment in new ways—seeing new affordances in the things around them. At that point, possibilities for action shift, a reconfiguration of social roles is triggered, and new ways of being can emerge.
Over the years conducting research in DeafBlind communities, I have watched many people arrive at a crossroads, where it is clear to them and to those around them that they need to find a new way of being; but from the 1970s to now the options available to them once they have that realization have shifted dramatically. When people in the 1970s were told they would go blind, they couldn’t imagine how life could go on at all. No one explained to them what they could expect or how they might cope. When methods of coping with blindness were recommended, they were often unappealing. For example, two DeafBlind sisters reportedly sought advice from a prominent Deaf teacher in the 1970s, when they were teens and just starting to become blind. He told them that once they were blind, they would have to sign in a smaller and smaller space to accommodate their shrinking tunnel of vision, and at the end they would have to switch to fingerspelling. He said that sign language would no longer be a possibility once they were blind.
Given projections like this, it was difficult to imagine how life would be possible at all.
Growing up as a Deaf child with “vision problems” meant being picked on by other kids, being called clumsy, and being treated as not smart or not capable because of misunderstandings surrounding vision. Blindness was what made you not a good athlete, not a graceful person, not smart, but it was not clear, in a positive sense, what life might be like as a “blind Deaf person.” Against this background, Seattle appeared as a place with hope for a collective future and energy for building it. Blindness was not stigmatized the same way that it was in the broader Deaf community. There were recognizable social roles to be inhabited and people to hang out with. Seattle became a rare and viable alternative to many of the effects of blindness, though not exactly as a place where blindness could be embraced. Counter-intuitively, cultivating a “DeafBlind” identity led not to a shared world suited to tactile experience but rather to services and social roles that would keep impending blindness at bay (Chapter 3).
5.3 New Ways of Being “Tactile”
In the 20 years after the DeafBlind Service Center (DBSC) was established (Chapter 2), a few key events transpired that led to new tactile ways of being. While an in-depth analysis is provided in Chapter 4, key points are as follows: First, the national standards for certifying sign language interpreters changed in 2005. Instead of requiring a two-year associate’s degree, they were now requiring a bachelor’s degree. As a result, the interpreter training program at Seattle Central Community College closed and nothing replaced it. Almost immediately the shortage of interpreters was felt, and the situation worsened quickly. Second, Adrijana was hired as the first ever DeafBlind director of DBSC in that same year, 2005. Recall that at that point in the history of the community, DeafBlind people could be “tactile,” which meant they communicated by touching the hands of the person who was signing, or they could be “tunnel-vision,” which meant they communicated visually, through a restricted channel. Up until this point, it was the tunnel-vision people who were offered the best jobs, were the first to know of any news or gossip, and were invited to all of the best parties and events. Tunnel-vision people were closer to the center of things, and the center of things was sighted. The better one was at approximating sighted norms, the more access one had. However, Adrijana was a tactile person, and she responded to the shortage of interpreters from a tactile perspective.
Lastly, Adrijana hired more tactile DeafBlind people than any previous director had, so there were groups of tactile people routinely working together. At the time, tactile people communicated with each other through interpreters, so when they needed to have a meeting among themselves, they had two options. They could wait for several weeks for an interpreter to be available (and wait times were always getting longer), or they could have their meetings without interpreters and try to communicate as a group, directly. Adrijana and her staff chose the latter option, which meant they had to find new ways to communicate. They figured things out as they went and they didn’t realize how much their communication practices had changed until people from the Lighthouse visited and the DBSC staff found that “they didn’t know how to communicate.” After a period of confusion, Adrijana and her team concluded that the DBSC staff had “gone tactile” while the Lighthouse workers had not, and this was the root of the problem. Once they identified that difference, they created a politics around it. They called what they were doing “protactile” and what the Lighthouse was doing “not protactile.” This distinction went far beyond communication. They argued, in the broadest terms, that to be protactile is to act on the assumption that hearing and vision are totally unnecessary for life. All human activity can be realized via touch. They made these assertions, but they didn’t actually know how protactile walking, cooking, eating, or communicating would work. So as a kind of experiment Adrijana and DBSC’s education specialist, Lee, started organizing DeafBlind-only events. They argued that DeafBlind people have stronger intuitions about touch than sighted people do, but their intuitions had been buried by sighted socialization. The first step, then, was to get rid of the interpreters and try to do things together. 
They organized classes where one DeafBlind person would teach others how to use a saw, or how to make a milkshake, and without interpreters, communication had to be direct. At first, the idea of getting rid of sighted mediators was unpopular, but Adrijana and Lee pushed. When people announced that they were going back to the old system because they had been touched in a way they felt was inappropriate, they were told: Do you think when a sighted person gets a dirty look, they give up on vision altogether? When people said they were overstimulated by all of the touching, they were told that their response was an effect of social isolation and they should fight through it. One by one, they converted the members of their community like that, and then their interpreters, their families, and their friends. They called their effort the “protactile movement” (Chapter 4).
As the protactile movement gained ground, DeafBlind people encountered new choices. Rather than choosing between being tactile or tunnel-vision, now one chose to be protactile or not-protactile. Status accrued to all things protactile, which meant that those who embraced the protactile way of being had greater access to social networks, information, employment, and other valuable resources. Awareness of this shift spread more quickly than the practices themselves, so people started claiming they were protactile, but this only got you so far. At some point, adopting the label would not be enough. You would have to know how to be protactile. One place where this tension surfaced was in language-use, and in particular moments when referents in the immediate environment were singled out using special linguistic resources tailored to the task.
5.4 Being for Speaking
By the time the protactile movement started to take root in the mid-2000s, I was already away at graduate school. I returned during summer and winter breaks. On one of those visits, I saw something unusual. An interpreter, walking with a DeafBlind person, was describing something, and as part of her description she was pointing. The DeafBlind person interpreting the description cut her off mid-sentence and told her that the way she was pointing was incorrect. She then modeled a new kind of pointing (the “correct” kind) which involved incorporating the other person’s body into the expression (as I describe below). Several things about this encounter were unusual. First, the force and confidence with which the DeafBlind person intervened and the decisiveness with which they evaluated one practice over another as correct; second, the way the interpreter accepted the intervention without question; and not least of all this new way of pointing, which was unlike anything I had ever seen. In retrospect, I recognize this as an early sign that DeafBlind people were taking up residence in the world in new and more tactile ways, and new affordances in their environment were being discovered. From there, they started replacing sighted people as the experts on tactile communication, and as a result communication started to make a lot more sense.
Prior to the protactile movement, sighted people were the experts. It was common for DeafBlind people to pretend that they understood sighted people’s descriptions—maybe to avoid derailing the interaction, or maybe to avoid becoming a “difficult DeafBlind person” whom interpreters didn’t want to work with. As DeafBlind leaders started training members of their community, they emphasized the importance of DeafBlind people being the ones to decide what was and wasn’t clear. To do that, they often turned to activities involving pointing, such as direction-giving, for which comprehension could easily be verified (either you understood my directions to the door and could locate it or you didn’t). The strategies that DeafBlind people had relied on for keeping up appearances were thereby challenged and an alternative (one that was actually effective) was proposed.
Before the protactile movement, pointing involved extending a finger toward the referent, along a visual pathway, just as one would expect in ASL. In protactile workshops, this type of pointing was proven ineffective and deemed inappropriate by the instructors, Adrijana and Lee. “Protactile philosophy” became a way of legitimizing new practices as they were emerging. For example, in the following exchange Adrijana demonstrates to her student that he can’t resolve reference using ASL pointing signs and she explains that this failure is predictable from the perspective of protactile philosophy:
ADRIJANA: I’m going to explain PT philosophy to you. I’m not going to preach. It’s going to be a discussion between the two of us. So let’s say that I come up to you, and I start explaining: “There’s a table over there, and there’s a door further over there.” Do you understand me?
DB PARTICIPANT: Yes.
ADRIJANA: No you don’t.
DB PARTICIPANT: You said that there is a wall over there [points] and a door over there [points] right?
ADRIJANA: No, the door is over there [points].
DB PARTICIPANT: Well, whatever.
ADRIJANA: Yeah, but that’s exactly it. It’s important. When people point like that to direct you, and you’re standing in the middle of the room, you’re totally lost. Right? [DB participant nods]. You’re sitting here, and it might seem clear for a minute, but when you stand up and try to find the things I just located for you, the directions won’t seem to match the environment and you’ll be confused. Deaf [sighted] people do that—they point to places, but that’s not clear.
DB PARTICIPANT: Well, yeah. That’s visual information.
ADRIJANA: Right. But it has to be adapted to be protactile. So instead of pointing, we have to teach them to do this. . . .
To direct her DeafBlind interlocutor to the door, Adrijana produced an expression foreign to ASL. Instead of extending a finger out into space along a visual trajectory, Adrijana took the DB participant’s hand and turned it over so the palm was facing up. She held it in place with her left hand from underneath. Then, with her right hand, she located herself and her interlocutor by pressing a finger into the upturned palm to mean “here.” Then she touched her finger to her interlocutor’s chest (meaning “you”) and then to her own chest (meaning “me”). This sequence can be glossed, “here, you, me,” and the translation would be, “You and I are here.”
This is a representation of the ground against which something in the environment is singled out. Once Adrijana established this as “our” location (i.e. her and her student), by pressing on her student’s palm, she could then locate the door relative to that location. First, she presses the thumb of her left hand into the location she has associated with “here,” and keeps it pressed down. Then, she traces a path from “here” to the door. Finally, she presses once in the location associated with the door, to mean “the door is here in relation to us.” As the deictic system of protactile language emerged, a systematic contrast emerged across speakers between “press” and “trace,” where the former represents a discrete location in space, and the latter represents a path (Edwards 2015). This is a “locative” set, or a set of terms that provides different ways of describing locations. In this two-term set, contrast is based on the presence or absence of a path. This is a highly salient dimension of proprioceptive experience across contexts, since one knows, without any visual or sonic input, whether one is experiencing movement along some path or not. For that reason, this contrast is a good candidate for incorporation into the deictic system of the language.
This schematic set of linguistic meanings, and others like it, is made specific when a term in the set is instantiated, or used, just as “I” (the person speaking) is made specific when it is associated with the person speaking in the speech situation. The facts of the situation, as they appear in the interaction, accrue to schematic distinctions and are anticipated by language, but they are not in language. In the example given above, the linguistic system provides a simple and relatively abstract contrast: ± path movement. + path movement = “trace”; − path movement = “press.” This contrast is then applied in a specific interaction between two protactile people. The specific path that is traced is not supplied by the language.
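The division of labor between the linguistic system and the situation can be sketched in code. What follows is only an illustrative model of my own, assuming a minimal two-term locative set; none of the names below come from protactile linguistics itself:

```python
from dataclasses import dataclass

# Schematic meanings supplied by the linguistic system: the two-term
# locative set contrasts only on the presence or absence of a path.
LOCATIVES = {
    "press": {"path": False},  # a discrete location ("here")
    "trace": {"path": True},   # movement along a path
}

@dataclass
class Token:
    """A deictic term plus the situational value that instantiates it."""
    term: str      # "press" or "trace"
    value: object  # supplied by the interaction, not the language

def interpret(token):
    """Combine the schematic contrast (from the language) with the
    situational value (from the immediate environment)."""
    kind = "path" if LOCATIVES[token.term]["path"] else "location"
    return (kind, token.value)

# A press on the palm resolves to a discrete location; a trace
# resolves to a path, e.g. from "here" to the door.
press_here = interpret(Token("press", "our-location"))
trace_to_door = interpret(Token("trace", ("here", "door")))
```

The point the model makes is structural: `LOCATIVES` is fixed in advance by the language, while `value` is different in every interaction; the specific path traced on the palm never enters the table.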
For a visual person standing in the middle of a room, the walls, the floor, and the ceiling have affordances for navigation. We can see where the door is relative to us because there is a floor, a ceiling, and walls—all of which give us a sense of orientation. From here to the door is a straight line that follows our sightlines. For a DeafBlind person who has gone tactile, the first move would be to find some orienting, tactile structure instead. If there is only one texture, such as carpet, on the floor, the floor itself is not helpful. It would therefore be necessary to seek out a place where two textures or structures come together, such as the wall and the floor. The place where the wall meets the floor constitutes an orienting line, sometimes called a “shoreline.” Protactile deictics anticipate these patterns in how affordances are interpreted for purposes of navigation and project tactile motor lines, not sightlines.
This system emerged when protactile people started communicating directly with each other and interpreters were not around to sort out misunderstandings. In the following exchange, for example, Adrijana is with another student. They are on a break from the workshop, and the student mentions that one of the sighted videographers, Victor, is nearby. He uses an ASL pointing sign to indicate the direction where he thinks Victor is. Adrijana responds by saying, “You see Victor? I don’t see anything.” Then the student tries, unsuccessfully, to clarify. Adrijana appears irritated. She puts down the bottle of water she has been drinking and prepares to intervene in a way that became familiar to me as I analyzed videos from the protactile workshops. I began to think of these moments as more than mere corrections. They seemed to be treated as a test to determine whether the person would choose an old way of being DeafBlind or whether they would be open instead to a new, protactile way of being. In this case, Adrijana explains that he needs to locate Victor in the protactile way and she demonstrates. She takes his dominant hand and turns it palm-up. Then she presses her finger into her student’s chest and then her own chest to mean “you and me.” The student can feel her pointing to her own chest because he has a “listening hand” attached to her articulating hand. Then she presses on the palm of his other hand to mean “here.” Finally she says “Victor” and presses on several places on the palm, followed by a question marker to mean: “Which of these locations is it? Where is Victor in relation to us?”
In order to interpret deictic expressions like these and respond in a way the teacher will accept, the student has to be protactile. He has to inhabit his environment in a more tactile way, and that shift is thrust upon him. It isn’t a matter of personal preference or a step in his personal process of becoming blind. The language requires him to be protactile in that moment. When people are becoming blind slowly, they adapt slowly, bit by bit. However, every time a referential situation like this unfolds, a kind of pressure is exerted that takes a slow, gradual process and turns it into a switch. You are either prompted to cash in on the visual affordances in your environment or you are prompted to cash in on tactile affordances, and each of those choices comes with a cascade of consequences for who you are and who you are taken to be. At some level, people choose. But if someone gives you directions to the door in protactile language, you have to commit, in that moment, to being protactile just to interpret the instructions. This is what I am calling “being for speaking,” where one’s way of being in the world is structured by categories and relations encoded in the language being spoken.
I am not claiming that new ways of being DeafBlind emerge moment to moment in the unfolding of specific interactions. As I have argued, the options DeafBlind people have at their disposal, in any one speech situation, are socio-historical products. In the 1980s, prior to the protactile movement, the options were very different than they are now, and those differences have been shaped in part by the convergence of institutional histories at the Seattle Lighthouse for the Blind and Seattle Central Community College. These institutions together generated conditions that made it possible for DeafBlind people to avoid contact with each other. Without that, it would have been difficult for visual ways of being to persist as long as they did. New ways of being emerged when there were no longer enough interpreters to maintain that system, and, crucially, when actors in key positions of authority were tactile DeafBlind people. In situations where DeafBlind leaders want to convert people to new ways of being DeafBlind, deictic reference plays a key role.
While acts of deictic reference make these requirements impossible to ignore, Adrijana and Lee drew their students’ attention to this consistently across all sorts of contexts, even those where language-use was not the primary activity. For example, in a lull between activities in the protactile workshops, Lee was trying to teach a small group of students, but she kept getting interrupted by people asking her questions. After the third interruption, there was some confusion among her students. They couldn’t understand what was going on. One person in the group responded to the confusion by reverting to vision. He leaned back and started looking around with his eyes, while the rest of his body was still (and therefore, from a tactile perspective, disengaged). Lee tapped him to get his attention and said:
"Don’t just stand there, passively, and look around. You have to be actively seeking information through touch. If someone isn’t interpreting information, and you’re just standing there, knowing you are missing out on information, you have to do something about it. Does PT mean that everyone has to wear a blindfold and never use their eyes for anything? Not at all! The point is to always communicate through touch. If you’re just standing back getting information visually, it means you don’t respect PT."
For Lee, “respecting PT” meant getting information in ways that generate information for others. This yields, in Goffman’s terms, a “situation,” which he defined as “a space of mutual monitoring possibilities” (1964: 135). To “be here” means, minimally, monitoring the situation in ways that could be monitored by others, and more specifically, in ways that presuppose tactile modes of access. If a person is just standing there, perfectly still (apart from an undetectable back and forth of the eyes), they are essentially exiting the situation. The point, Lee explains, is not to deny one’s biological capacity to see (if one has remaining sight). It is to actively enter into a space of mutual monitoring possibilities by choosing channels that can be presupposed across the group. A few moments after this interaction, perhaps feeling bad for the pointed correction, Lee shifts to a more understanding tone and says, “I know, I completely get it. We are telling you a lot about what protactile is, and you are getting it, intellectually, but it takes a while for it to go from something you are thinking about to something you are.” This intervention suggests that once the channels that can be presupposed across the group are internalized, reaching one’s hands out to gather tactile information in a moment of interactional confusion will become as natural as glancing across the room in response to a visual disturbance.
The level at which Lee and Adrijana intervened left space for their students to uncover new affordances in the environment that had previously gone undiscovered. They were insisting that everyone “be here,” not that they adopt this or that specific practice or “technique.” Almost immediately, though, Lee and Adrijana’s interventions were mistaken for more rigid attachments. For example, at one point, Lee was teaching two new students that they should give the speaker tactile feedback (such as tapping on or squeezing their thigh). In one-on-one conversations, the students were picking it up, but when there were three people, they faltered. In one such case, Lee reminded her students to give tactile feedback. They asked her exactly what they were supposed to do with their hands. They asked, “What is the ‘right way’ to do it?” Lee responded, “It doesn’t matter. I’m not trying to give you specific rules to follow. It’s just the principle—it’s important for the person talking to feel the feedback. Exactly how you do that is up to you.” The emphasis is not on doing things in a particular way because you are in a protactile environment, i.e. being appropriate to context, but on creating a context that would make all kinds of actions legible and effective. At the most basic level, this involved simply being there.
One of the most fundamental structures that effectively yielded a sense of being there was a particular configuration for two-person interactions. Prior to the protactile movement, utterances were conveyed from the hands of the speaker to the hands of the addressee, and these were usually the only parts of their bodies in contact. Just weeks into the workshops, a new contact surface became conventional, which greatly expanded the number and types of available channels. Instead of the hands being the only point of contact, speaker and addressee sat with their faces just a few inches from one another, legs touching on one side, on the outer thighs. This increase in proximity and surface area meant that behaviors could be observed, recognized, and typified via thermal, motoric, olfactory, proprioceptive, and touch-based channels. It turns out that some people heat up when they are exerting effort or are experiencing emotional strain, while others do not. You can tell what kind of soap they use to wash their clothes, whether they have a dog, and what kind of foods they cook at home. If one were to exit the situation (to silently move the eyes around in their sockets, for example), those channels, and all of the information they carry, would retract or grow thin, and the existence of the other would be attenuated.
Recall that the word “I” cannot be interpreted until the person speaking has been located in the immediate environment. This raises an important question: Is there a minimal threshold of existence for being a speaker? Can the speaking “I” be merely a set of disembodied hands, floating around in air space? Even if your answer to this question is yes, consider the fact that a speaker is only a speaker in relation to an addressee. If the legs are continuously pressed together from the beginning to the end of the encounter, the thigh of the speaker is readily accessible to the addressee’s hand for sending signals that they are listening and engaged. Without any way to register the fact that you are being addressed, can you really be an addressee? If there is no addressee, how can there be a speaker? Perhaps this explains Adrijana and Lee’s intuition to start with co-presence. Do whatever it takes to be here together. From there, a wealth of affordances will be revealed for actions of all kinds.
Given this approach, new roles quickly became available. There were new ways of participating in conversation, giving or attending lectures, workshops, and dinner parties. There were new ways of playing and watching games, and if one wanted to observe some other activity, protactile people had intuitions about how that might be done. For example, during the workshops, Adrijana wanted to teach participants how to make macramé sleeves for bottles, mugs, and other household objects (instead of “boring” Braille labels). While she manipulated long strands of twine, her students would stand behind her, their arms and hands placed on top of her arms and hands, so they could track every movement, while also feeling the effects on the twine. In order to make that feasible, the students had to press their chests against their teacher’s back, resting their chin on her shoulder. Elsewhere, that kind of contact would only be appropriate in the context of an intimate relationship, but, here, that was the structure that effectively incorporated and contextualized the relevant role-relation, and therefore it was quickly and widely adopted. All of this structure depended on the ability to be a speaker and an addressee.
From within those structures, signs of attention, agreement, boredom, interest, annoyance, and confusion came through loud and clear. This meant that in speaking and being spoken to in the context of a particular activity, one could, for example, be annoying, a good student, a keen observer, or a boring person. The primary roles of speaker and addressee were incorporated into and contextualized by a wide range of participation frameworks (which are discussed below). Within those frameworks, more specific and contingent roles emerged as patterns of behavior were consistently observable and therefore typifiable (Hanks 1990; Irvine 1996). In an interview, Lee explained how this process was set in motion as reliance on sighted people was reduced:
"If an object is in front of a DeafBlind person, an interpreter is very likely going to explain the object to them. [. . .] The more DeafBlind people are in contact with other DeafBlind people, the more tactile things will become. The more tactile things become, the more DeafBlind people will demand that kind of thing from interpreters. For example, the DeafBlind person touches the object and then asks the interpreter a bunch of questions about it. That’s so much better than the other way around. So really there is a reversal of information—where it originates. Sighted people make fewer decisions about what counts as information, so there is less chance for them to impose their visual perspective."
Again, this goes back to the two basic requirements Lee and Adrijana insisted upon. First, DeafBlind people must know how to be co-present. Second, DeafBlind people decide together what counts as relevant or worthwhile information, which is a process that must be ratified by the members of the group over time. Together, these requirements guarantee that representations of the world will always be grounded in tactile ways of being in the world. Deictic reference is a productive activity for generating, reinforcing, and testing those connections.
5.5 Conclusion
In this chapter, I have argued that the protactile movement introduced new options for how one could be DeafBlind. In the early stages of the movement, when ways of being were in flux, new and emerging linguistic systems in protactile language, and in particular the deictic system, played a crucial role by encoding social choices and then recycling and re-imposing those choices moment to moment, day to day, at the periphery of awareness. There is a subtle relentlessness to this that can push people down a path they might otherwise take later, more slowly, or not at all. Adrijana and Lee developed a reflexive awareness of this and they used it to propagate a social movement. In particular, when they wanted to convert a member of their community to a protactile way of being, they often employed deictic reference. This, more than any other form of language-use, forced their interlocutors to be protactile.
Since then, the deictic system of protactile language has become a repository for regularities in navigation, interaction, and communication, and it demands that values be retrieved from that order. For example, two different combinations of movement and contact, “tap” and “press,” systematically invoke different dimensions of setting (Edwards 2015). “Tap” is a demonstrative. It singles out a referent against a horizon of other, possible referents (like the English word “this”). “Press,” in contrast, is a locative. It identifies a location (like the English word “here”), against a horizon of other, possible locations. There are further contrasts within each category. For locatives, “press” prepares the addressee for a discrete location, while “trace” prepares them for a path. For demonstratives, a trilled “tap” prepares the addressee for a cognitively foregrounded object, while “grip” prepares them for a cognitively backgrounded object. Exposed to this system of contrasts routinely, the addressee becomes sensitive to subtle differences in tactile stimuli in the transmission of linguistic signals, in much the same way that, in learning a tonal language, one becomes attuned to differences in tone. They also hone sensibilities about how the environment itself is likely to be interpreted. For example, “trace,” on its own, includes only the highly schematic concepts: contact (with a surface) and movement (along a path). Knowing how to apply those relatively abstract meanings in ways that can be operationalized by one’s addressee requires corresponding ways of routinely interacting with environmental structures, such as surfaces and paths that can be used for standing and walking.
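The two-by-two system of contrasts described above can be summarized in a small feature table. This is a hypothetical sketch of my own; the feature labels are my shorthand for the distinctions reported in Edwards (2015), not terms from the source:

```python
# Four deictic signs, organized by category (demonstrative vs. locative)
# and by the contrast internal to each category.
DEICTICS = {
    "tap":   {"category": "demonstrative", "prepares_for": "foregrounded referent"},
    "grip":  {"category": "demonstrative", "prepares_for": "backgrounded referent"},
    "press": {"category": "locative",      "prepares_for": "discrete location"},
    "trace": {"category": "locative",      "prepares_for": "path"},
}

def contrast_set(category):
    """Return the terms that contrast with one another within a category."""
    return sorted(t for t, v in DEICTICS.items() if v["category"] == category)
```

Each call to `contrast_set` returns a horizon of alternatives: interpreting “trace” means registering not only its own schematic meaning but also the fact that “press” was not chosen.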
In the next chapter, I follow the protactile movement to Gallaudet University in Washington, D.C., where architecture and infrastructure, in the context of urban development, became the focus. At Gallaudet, connections between the structure of the environment and the structure of language were made explicit as an integral part of protactile politics. The challenge was finding a way to be protactile in spaces that were not, and had never been, “for” DeafBlind people. While Seattle was home to DBSC—an organization run by and for DeafBlind people—most places where the protactile movement gained ground, including Gallaudet, had no such institution. At Gallaudet, this problem was addressed by “laminating” protactile environments onto “Deaf Space.”