Protactile Research Network
Feeling Phonology:
The conventionalization of phonology in protactile communities in the United States
Terra Edwards, Saint Louis University, Department of Sociology and Anthropology
Diane Brentari, University of Chicago, Department of Linguistics
Language, Volume 96(4)
Key Words: tactile phonology, language emergence, signed languages, protactile, DeafBlind, conventionalization
____________
[1] The name of the language we analyze is not yet conventionalized in protactile language. We use the English term “protactile language” or Protactile as a placeholder.
1. Introduction
In this paper we argue that new phonological patterns are emerging within a sub-group of DeafBlind signers in the United States who communicate via reciprocal, tactile channels, a practice they call “Protactile” (granda and Nuccio 2018, Clark 2017). Recent research on emergent visual signed languages has demonstrated that a number of principles are at work within a phonological system before the most obvious criteria of phonological rules and minimal pairs are observable (Brentari, 2019; Brentari et al., 2012, 2013, 2015, 2017; Coppola and Brentari, 2014). For example, linguistic structures and contrasts in sign languages are expressed in terms of five phonological components (handshape, movement, location, orientation, and non-manual behaviors). In Nicaraguan Sign Language, patterns in how iconic handshapes are used to represent either an object’s size-and-shape (object handshapes), or how an object is manipulated (handling handshapes), gradually differentiate themselves into categories phonologically and morphologically. In these works, it has become clear that conventionalization is not a single monolithic process, but rather a complex of principles involving patterns of distribution—discreteness, stability, and productivity of form—as form becomes linked with meaning in increasingly stable ways.
In this article, we examine emerging patterns in protactile language, essentially addressing what the units of the new language are and how they can be determined. These patterns are most apparent in what we are calling “proprioceptive constructions” (PCs). PCs are comparable to “classifier constructions” in visual signed languages; however, PCs are produced by the hands and arms of both the signer and the receiver, whereas classifier constructions in visual signed languages are produced by the hands and arms of the signer alone. We hypothesize, therefore, that one of the earliest stages in the conventionalization of protactile phonology will necessarily involve coordination of the four articulators, and that, as part of this process, each articulator will be assigned its own linguistic tasks. To test this hypothesis, several steps are required.
First, because we are focusing on aspects of the articulatory system, a set of criteria must be created for identifying articulatory units in terms of their phonological structure (Section 2.4). Second, the functional units, which constitute “linguistic tasks” for the articulators producing PCs must be identified and described (Section 3.1). Third, the correspondence of these units with particular articulators must be tested. In other words, we must find out if particular linguistic functions are being consistently performed by particular articulators (Section 3.2). Fourth, if we find that this is the case, we must determine whether or not these patterns are beginning to affect protactile forms beyond PCs (Section 3.3). In performing these interrelated analyses, our aim is to show how a new phonological system can be conventionalized in the tactile modality.
1.1 Background: Language use in DeafBlind communities
There are people all over the world who are DeafBlind, some of whom live as minorities within larger Deaf, sighted communities, while others are active members of a signing or non-signing DeafBlind community. Language and communication vary widely from community to community and across individuals in the same community. The dominant language in some DeafBlind communities in the United States is English, perceived via adaptive technologies such as amplification systems. In others, the dominant language is American Sign Language (ASL). In order to perceive ASL through touch, the receiver places their hand(s) on top of the hand(s) of the signer to track the production of signs. Just as spoken languages require adaptive measures to be perceived by DeafBlind signers, adaptations and innovations are necessary for the perception of visual languages by DeafBlind signers as well. However, those adaptations may not enable full access to the message; Reed et al. (1995:15) found that DeafBlind signers received ASL—a visual signed language—with only 60-85% accuracy, and that the largest source of errors was inaccuracies in the reception of the phonological parameters of ASL.
Recent research has examined the ways that language use differs between DeafBlind signers and Deaf sighted signers (Checchetto et al. 2018; Collins and Petronio 1998; Collins 2004; Iwasaki et al. 2018; Mesch 2001, 2013; Mesch et al. 2015; Petronio and Dively 2006; Quinto-Pozos 2002; Reed et al. 1995; and see Willoughby et al. 2018 for an overview). For example, Mesch et al. (2015) report on tactile Swedish Sign Language, where DeafBlind signing dyads adopt different positions for monologues vs. dialogues, and on “co-constructed” forms whereby clausal structures utilize articulators of both the “speaker” and the “listener” in “real space blends.” For tactile ASL, Quinto-Pozos (2002) reports avoidance of pointing signs and a restricted range of functions for them. Iwasaki et al. (2018) describe how DeafBlind signers of Auslan manage turns at talk without access to non-manual features, such as eye gaze, eyebrow movements, and facial expressions, that sighted Auslan signers depend on to perform the same communicative functions. Petronio and Dively (2006) report an increase in the frequency of the manual signs yes and no, which they attribute to a lack of access to corresponding non-manual expressions (e.g. head and brow movements). Checchetto et al. (2018) analyzed productions by the Italian DeafBlind community using tactile Italian Sign Language (LISt). They propose several principles that will guide ongoing changes in LISt, with which our data largely agree. First, they propose that LISt will tend toward sequentialization of form, relative to the simultaneity of visual signed languages. Second, they propose that the functions performed by non-manual markers (i.e. the face) will be performed manually. Third, they propose that innovation in the distribution of lexical and grammatical forms will occur.
1.2 Contribution of this Research
Over the past 60 years, there has been growing acceptance of the idea that the vocal/auditory channel is not the only set of sensorimotor peripheral systems that can sustain a phonological system. Since Stokoe (1960), and with decades of subsequent work on sign language phonology from the perspectives of distribution and constituent structure (Stokoe 1960; Liddell 1984; Liddell and Johnson 1989; Sandler 1989; Brentari 1998; van der Kooij and van der Hulst 2005; Sandler and Lillo-Martin 2006), acquisition (Boyes Braem 1981; Marentette and Mayberry 2000; Meier et al. 2008), and processing (Corina and Emmorey 1993; Petitto et al. 2000; Corina and Hildebrandt 2002; Thompson et al. 2005; Baus et al. 2008; Carreiras et al. 2008; Gutiérrez et al. 2012; Caselli and Cohen-Goldberg 2014), there has been a slow, steady paradigm shift toward understanding phonology as the abstract component of a grammar that organizes meaningless elements, without specific reference to its communication modality.
This article contributes to that shift, calling into question the very definition of phonology. We ask: Can the tactile modality sustain phonological structure? The results of this study suggest that it can, and that the changes that have taken place as American Sign Language has been adapted to protactile environments have altered the very primitives used to create new signs.
This article contributes to the linguistics of tactile signed languages as well. Here, our contribution concerns the principles proposed by Checchetto et al. (2018). Our data support the prediction that the tactile modality will favor the sequentialization of form, relative to the simultaneity of visual signed languages. As will be demonstrated in what follows, PCs—the constructions that constitute the main focus of this article—exhibit a relative sequentialization of form at the phonological level, when compared with classifier constructions in ASL. Also, while not the focus of this article, our data also reflect a general avoidance of the face as part of the articulatory apparatus, and therefore are consistent with Checchetto et al. (2018)’s second and third principles.
In addition, we propose a principle concerning the use of space; namely: air space is dead space. What we mean by this is that for DeafBlind signers, contact with the body of the listener has affordances that the space on and around the body of the signer does not. The former is what granda and Nuccio (2018) call “contact space” and the latter is what they call “air space.” In air space, locations are perceived relative to each other against a visual backdrop that is inaccessible for DeafBlind signers (e.g., “to the right of the mouth” vs. “to the right of the eye”). In contrast, locations on the body of the listener can be clearly perceived against the backdrop of the listener’s own body.
1.3 Relationship of protactile language to visual sign languages
A cascade of consequences follows from the switch to contact space, and we have observed that these changes are triggering the emergence of a new phonological system in protactile language. At this point, as innovations in protactile language occur, protactile signers are not asking: “How do we adjust signs to make them more perceptible to us, since we can’t see?” Instead, they seem to be demoting ASL, treating it like an archival lexicon. When protactile signers retrieve signs from the ASL lexicon, they are concerned with whether or not the sign can articulate to contact space in ways that do not break with emerging protactile conventions. For example, there are two classifier handshapes for representing a “person” in ASL: the “1” handshape and an upside-down “V” handshape. The “1” handshape does not articulate to contact space easily, because the bottom of the wrist is difficult to position and move on the body of the addressee in a precise and perceptible way. It follows that it is difficult to modulate its path or speed to express manner, which is an important function of classifier constructions and PCs alike. In contrast, in the “V”* classifier, the two extended fingers represent the legs, and the tips of the fingers make contact with the body of the addressee. This handshape is perceptible and much easier to modify for manner of movement, and so is preferred in the protactile system.
*In order to indicate walking the “V” handshape is turned upside down so that the fingers represent the legs.
The principles outlined by Checchetto et al. (2018) seem to follow a principle of “change by necessity”; namely, when a visual sign language structure is no longer workable, it will be modified, a new one will be innovated, or a non-linguistic strategy will be employed. The innovations we report here differ in an important way from Checchetto et al. (2018) and other analyses of signed languages perceived tactually. Protactile signers in Seattle have found that air space is ineffective, and they are making a sharp turn toward contact space. In doing so, they are maximizing the proprioceptive sense in ways that are, to our knowledge, unattested in both visual and tactile signed languages. The changes are, in part, a response to the imperceptibility of ASL structures, but they also reflect a conscious rupture with the practices of ASL. As reported by protactile signers themselves, they are less interested in retaining as much of ASL as possible, and more interested in embracing the potential of the proprioceptive/tactile modality for precise and efficient communication and for creating strong iconic and indexical ties to the world as they experience it. granda and Nuccio (2018:13) explain: “As Deaf children, we were drawn to visual imagery in ASL stories—transported into the vivid details of the worlds created for us. As DeafBlind adults, we still carry those values within us, but ASL doesn’t evoke those same feelings for us anymore. When you are perceiving a visual language through touch, the precision, beauty, and emotion are stripped away; the imagery is lost. […] If you try to access an ASL story through an interpreter […], you just feel a hand moving around in air space […]. In air space we are told what is happening for other people, but nothing happens for us.”
This orientation suggests that protactile signers are prioritizing intuitive and effective communication over and against the preservation of ASL structures. In what follows, we argue that innovations emerging under these pressures are organized by new well-formedness principles. This is in no way a prediction about direction in which all DeafBlind communities will change; it is one way in which language emerges, given a particular “communicative ecology” (Horton 2018).
This raises the question for us as linguists about how much we should refer to visual sign language structures in describing this new language. According to John Lee Clark (2020), a skilled protactile signer and national leader of the protactile movement, the situation is changing rapidly: ASL speakers have always had a huge advantage, a head start, in learning Protactile. For them, a great part of it is about “converting” ASL knowledge into Protactile. But in the near future, this advantage will diminish. At some point ASL speakers and non-ASL speakers will need to take the same classes! Right now, ASL speakers can skip to “Protactile II,” while non-ASL speakers start with Protactile.* But soon, that won’t be the case.
*We are not suggesting that simultaneity is absent in protactile language. We note that complex layering of meaningful elements within a PC is not only possible, but common. For example, a protactile signer can express path and manner simultaneously with a PC representing a person walking in contact space, i.e. on the body of the addressee. This is done by pressing the index finger and middle finger, alternately down, in a particular way, while moving forward in some direction.
With this in mind we do our best in this article to define protactile units on their own terms, not in terms of their relation to visual sign languages. In describing these mechanisms, we are careful not to import categories from other linguistic modalities. Indeed, attempts to find one-to-one correspondences between units across modalities can be a hindrance. Handshape, location, and movement, for example, are not parameters that can be taken for granted in protactile languages, just as there is no reason to expect visual signed languages, or spoken languages, to have a conventionalized way of coordinating the articulators of two people. Therefore, we follow in the spirit of Stokoe (1960), who established unique labels, like “tabula,” “designator,” and “signation” to prevent equivocations across modalities. The labels we have created describe the functions of each part of the PC. For example, we label the four hands more neutrally as “Articulators,” and the category we label “Initiate” is the unit that is used to initiate, or start, the PC.
1.4 Background on the targeted linguistic structures
In this paper we focus on structures that describe motion and location events, a specific type of structural innovation in protactile language that is the correlate of classifier constructions in visual sign languages. We call these structures “proprioceptive constructions” (PCs). We focus on these structures for two reasons. First, four-handed forms are most prevalent in structures that describe motion and location events, and the aim of this study is to show how principles of conventionalization are being applied by protactile signers to sequence the four articulators.
The second reason we focus on descriptions of motion and location events is that the constructions which carry out parallel communicative functions in ASL—classifier constructions—should, in theory, require more radical restructuring to be expressed in the tactile modality than other types of signs. The ASL lexicon has been divided into three parts (Brentari and Padden 2001), using a framework developed by Itô and Mester for the phonology of Japanese (Itô and Mester 1995a, 1995b). This lexical architecture, as we use it here, is simply a way to conceptualize the different types of vocabulary items that any language is likely to have, rather than a language-specific proposal about ASL or Japanese. The “core” lexicon comprises forms whose parameters are meaningless sub-lexical units with a highly conventionalized form-meaning association. These are the signs you would expect to see listed in a dictionary. Modifications in the core component have been described within many DeafBlind communities, such as those mentioned above by Checchetto et al. (2018); for example, the displacement of non-manuals to manual forms, the sequentialization of form, or the use of dynamic points as opposed to static directional points. The second, “foreign” component comprises forms derived from the manual alphabet, consisting of fingerspelled sequences. The third, “spatial” component is composed of polycomponential structures, most commonly known as “classifier constructions” (Supalla 1982; Zwitserlood 2012), and other spatial signs.
Figure 1: The three components of the ASL lexicon (Brentari and Padden 2001)
The meanings of classifier constructions in the spatial component of visual sign languages are componential—all three of the primary manual components (handshape, movement, and location) retain independent, autonomous meaning, so there is little redundancy in the information conveyed by the parameters, and they cannot be understood unless there is full access to the form. In contrast, in the other parts of the lexicon, partial information is sufficient for understanding because the redundant information is predictable. For example, in fingerspelled sequences, which belong to the foreign lexicon, the manual alphabet is, except for the letters -J- and -Z-, composed exclusively of handshapes in a predictable location, with movements that are largely predictable transitions between handshapes (Battison 1978; Wilcox 1992; Keane and Brentari 2016). Location and movement are therefore somewhat predictable, or redundant, and the forms can be understood even if only partial information is conveyed. Given the lack of redundancy in classifier constructions in visual sign languages, it is perhaps not surprising that classifier constructions are being restructured by protactile signers.
The primary aim of this study is therefore to analyze proprioceptive constructions, or “PCs” in order to determine the internal units that comprise them, and to begin to understand how the patterns of conventionalization of PCs extend to protactile phonology, more generally.
2. Study design and procedures
In this study, we analyze data generated in a description task. Dyads of protactile participants were asked to describe a series of tactile stimuli. We videorecorded, analyzed, and transcribed their productions. In the following sections, detailed information is provided about participants, procedures for collecting data, the stimuli that were used, and transcription methods.
Three fundamental observations have led to our hypotheses and drive the analysis of PCs we present here. The first observation concerns the use of four hands in PCs instead of two. PCs are not produced exclusively by the hands and arms of Signer 1 (the “speaker”). They also incorporate the body of Signer 2 (the “listener”). In order to coordinate the articulators of the two signers, Signer 1 needs a conventional way of signaling how and when they want Signer 2 to contribute to the co-articulation of protactile signs. We hypothesize that the conventionalization of such mechanisms involves assigning specific linguistic tasks to four articulators, in much the same way that the two hands in visual signed languages are assigned consistent and distinct tasks (Battison 1978). Since this coordination of articulators must be sorted out early on in the process of conventionalization for efficient and effective communication, the findings of this study can shed light on an early stage in the conventionalization of protactile phonology.
The second observation relates to the functional units that comprise the PC. In order to address PC co-creation, we must identify and describe the functional units used to accomplish particular “linguistic tasks” and the way in which they contribute to the PC structure as a whole.
2.1 Participants.
The seven participants in this study (four males and three females, ages 32-47) were all DeafBlind individuals who had participated in a protactile network for at least one year. Six were exposed to ASL by the age of seven via visual perception (they became blind in adulthood), and one (who was born blind) was exposed to ASL via tactile perception from birth. In adulthood, they moved to Seattle for employment, for a large, socially and politically active DeafBlind community, for educational opportunities, and/or for communication-related resources. At the time these data were collected, five of the seven participants were working in environments that required them to interact with other protactile signers daily, for many hours during the work-week, and variably at night and on the weekends, when they attended formal and informal protactile events or interacted with their protactile roommates. The other two participants interacted with protactile signers often, according to their own reports, but with less frequency and consistency than the others, as they did not work in environments where protactile language was widespread. All of the participants in this study reported that they were right-handed.
2.2 Procedures
Recruitment took place in two stages. First, an email was circulated to relevant community leaders explaining the project and requesting participation. That email was shared by them, more broadly, within the community. A local DeafBlind educator selected a subset of those who responded, based on her evaluation of high protactile proficiency. During data collection events, prior to filming, we gave consent forms to participants in their preferred format (e.g. Braille or large print). We also offered to interpret the consent forms into protactile language. One of the co-authors who is fluent in protactile language then discussed the consent forms with each of the participants, answering questions and clarifying as requested. The consent forms included questions requesting permission to include images of these communication events in published research and other research and education contexts, such as conferences and classrooms. Once consent had been obtained, we commenced with data collection.
Data collection took place at a dining room table in a privately-owned home. Dyads of protactile signers were seated at the corner of the table. The interactions were always between two protactile signers, both of whom were participants in the study. They changed roles after a given object (item) was completed, and they discussed and gave feedback to one another about the clarity of a description as it unfolded. We placed a cloth napkin with thick edges on the tabletop to provide a tactile boundary within which the stimuli would be placed. The stimuli were placed on the napkin in pseudo-random order, and Signer 1 was instructed to “describe what they feel.” Signer 2 was told that Signer 1 would be describing something they felt. After the description, Signer 2, who had not been exposed to the stimulus beforehand, picked up the object and explored it tactually. The co-authors were present throughout the task to operate the video camera, but were only in tactile contact with the participants when placing stimuli. The camera was on a tripod on the table, positioned above the participants and pointing down, in order to capture contact and motion between them.
In all cases, the dyads discussed aspects of the object and adjusted their descriptions—sometimes at great length. In addition, the stimuli had many different pieces and parts, each of which was described by the participants. Therefore, we collected a large number of tokens in response to a limited number of stimuli.
2.3 Stimuli
Proprioceptive constructions that involved whole objects or their size and shape were elicited by presenting three tactile stimuli to the participants: a lollipop, a jack (the kind children use to play the game “jacks”), and a complex wooden toy involving movable arms, magnets, and magnetized pieces. The first two stimuli were presented both in a singular context (one object) and in a plural context (several of the same object in a row). These objects were chosen because they provide opportunities to convey information about motion and location events in protactile form, and they can be presented using real objects on a bounded flat surface placed next to the two participants.
2.4 Transcription
The descriptions of the stimuli were videotaped, labeled, and annotated using ELAN (Crasborn and Sloetjes 2008). Annotating one tier at a time, we identified the tasks being performed by each of the articulators in general terms, in order to determine whether there was a clear division of labor among the articulators. The labeled tiers in our transcription system are provided in Table 1.
Table 1. Lexical and articulatory categories in transcription system
Lexical components of PT signs
  1. Core: structures expressing non-spatial events
  2. Spatial: structures expressing spatial events (proprioceptive constructions)
Articulatory components of proprioceptive constructions
  1. Articulator 1 (A1): dominant hand, Signer 1
  2. Articulator 2 (A2): dominant hand, Signer 2
  3. Articulator 3 (A3): non-dominant hand, Signer 1
  4. Articulator 4 (A4): non-dominant hand, Signer 2
  5. Contact space (-c): locations on or near Signer 2’s body; the “signing space” for protactile language
  6. Air space (-a): the space on and around the body of Signer 1; the “signing space” for visual signed languages
In order to identify units of analysis, we assigned Signer 1 and Signer 2 independent tiers. Signer 1 is the principal conveyor of information. Signer 2 contributes to the articulation of the message but, in terms of information, is the principal receiver. A form could be produced by one or both signers. We also established one tier for each of the four hands/arms of the two signers, assigning placeholders for the dominant hands of Signer 1 (A1) and Signer 2 (A2) and the non-dominant hands of Signer 1 (A3) and Signer 2 (A4). In visual signed languages, the dominant hand (H1) and the non-dominant hand (H2) are assigned complementary roles; H1 is more active than H2 (Battison 1978). In protactile language, four anatomical structures are available for producing each sign, which we ultimately assign to roles based on the degree to which they are active (Figure 2). A1 is the most active role and is assigned to the dominant hand of Signer 1, who is the principal conveyor of information. A2 is the next most active role and is assigned to the dominant hand of Signer 2, who is the principal receiver of information. A3 is assigned to the non-dominant hand of Signer 1. A4 has the least active role and is assigned to the non-dominant hand of Signer 2, being called on sporadically to produce certain components of signs, and otherwise being available for producing tactile backchanneling and for tracking the movements of Signer 1’s dominant hand (A1).
Figure 2: Sequence of forms used to describe the cylindrical portion of the lollipop stimulus
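The division of labor among the four articulators can be summarized in a small sketch. The A1-A4 labels and activity ranking follow the transcription system described above; the data structure and helper function are our own illustrative assumptions, not part of the annotation scheme itself.

```python
# Illustrative sketch of the four articulator roles (Section 2.4).
# A1-A4 labels follow the paper's transcription system; the dictionary
# layout and the helper function are hypothetical, for exposition only.

ARTICULATORS = {
    "A1": {"signer": 1, "hand": "dominant",     "activity_rank": 1},
    "A2": {"signer": 2, "hand": "dominant",     "activity_rank": 2},
    "A3": {"signer": 1, "hand": "non-dominant", "activity_rank": 3},
    "A4": {"signer": 2, "hand": "non-dominant", "activity_rank": 4},
}

def most_active_first():
    """Return articulator labels ordered from most to least active."""
    return sorted(ARTICULATORS, key=lambda a: ARTICULATORS[a]["activity_rank"])
```

For example, `most_active_first()` returns `["A1", "A2", "A3", "A4"]`, mirroring the activity ranking described above.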
This article focuses on PCs, which are part of the spatial lexicon. However, as protactile phonology emerges, it should affect all areas of the lexicon. Therefore, we also track the spread of devices found in PCs to the core lexicon on each of the four articulators. In order to distinguish between core and spatial forms and whether they are produced in “contact space” or “air space” (granda and Nuccio 2018), we create four categories: core-(a)ir space, core-(c)ontact space, spatial-(a)ir space, and spatial-(c)ontact space. Contact space is defined as the space on the body of Signer 2, while air space is the space in front of, around, and on Signer 1’s body. Core-a are forms that use air space to represent non-spatial concepts, while core-c refers to core forms that are produced in contact space. Spatial-a refers to spatial forms produced in airspace, while spatial-c refers to spatial forms produced in contact space. This part of the transcription process allows us to keep track of the extent and nature of changes in the core lexicon, as compared with the spatial lexicon, which is our primary focus in this article.
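The four-way tagging scheme just described, crossing lexical component with signing space, can be sketched as a small helper. The category labels core-a, core-c, spatial-a, and spatial-c come from the transcription system; the function itself is a hypothetical illustration.

```python
# Illustrative sketch of the four transcription categories (Section 2.4):
# core-a, core-c, spatial-a, spatial-c. The category names follow the
# paper; this helper function is hypothetical, for exposition only.

def tag_form(lexical_component, space):
    """Combine a lexical component ('core' or 'spatial') with the space
    in which the form is produced ('air' or 'contact')."""
    if lexical_component not in ("core", "spatial"):
        raise ValueError(f"unknown lexical component: {lexical_component!r}")
    if space not in ("air", "contact"):
        raise ValueError(f"unknown space: {space!r}")
    # The tag abbreviates the space to its first letter, e.g. 'core-a'.
    return f"{lexical_component}-{space[0]}"
```

For example, a core form produced in contact space would be tagged `tag_form("core", "contact")`, yielding "core-c".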
3. Analyses
We performed four types of analyses in order to address which functions and structures are used in PCs. First, each articulator was assigned a number from “1” (most active) to “4” (least active), which captures its level of linguistic activity by signer and by articulator (Section 2.4). Second, we describe the functional units involved in producing PCs (Section 3.1). The third step is to analyze the assignment of functions to each of the four hands (Section 3.2), and the last, qualitative analysis describes how each of the linguistic structures involved in producing a PC has been observed to generalize to the core lexicon independently of the others (Section 3.3).
3.1 Functional Units of Analysis
In this section we define the different types of communicative functions that occur within the larger PC unit produced by the four articulators. Each has a label that describes what it contributes to the PC. In a PC they appear in the following temporal order: initiate (I), proprioceptive object (PO), prompt to continue (PTC), and movement-contact type (MC). These units form a unified whole with rapid interchange between Signer 1 and Signer 2. We will refer to these units in the following sections. For reference, we provide definitions of the functional units that will be used throughout the rest of this paper in Table 2.
Table 2. Definitions of functional units in a proprioceptive construction
Functional units of proprioceptive constructions
  1. Initiate (I): a request for active involvement of S2 in co-producing a PC.
    1a. Initiate-touch: instruction by S1 to S2 to foreground a new contact space by touching it.
    1b. Initiate-grasp: instruction by S1 to S2, given by grasping the relevant body part.
    1c. Initiate-prompt: two-part sequence by S1 to S2 to foreground a particular body part as a PO.
      1c-i. prompt-tap: instruction by S1 to S2 to activate A2 for purposes of articulation, and/or signaling that a prompt-PO is coming next.
      1c-ii. prompt-PO: instruction by S1 to S2 to produce a particular handshape on A2.
  2. Proprioceptive Object (PO): active articulatory space-type, selected in response to the type of Initiate produced.
  3. Prompt to Continue (PTC): keeps the selected articulatory space active for further information to be added.
  4. Movement Contact Type (MC): tactile and proprioceptive cues that contain information about the size, shape, location, or movement of an entity.
3.1.1 Proprioceptive objects (POs)
First, we describe proprioceptive objects (POs), which we observe to be the anchor of the PC (see Figure 2b; in this case, Signer 2's dominant fist and arm, labeled A2, placed vertically). The PO is produced by A2 (i.e., Signer 2's dominant hand). The effective introduction and use of POs is one of the main innovations of the PC structure. A PO has two functions: first, it conveys information about size, shape, and position; second, in conveying that information, it delimits and activates a space on the body of Signer 2, on which Signer 1 can produce signs. POs are spatial forms, meaning that they use space to represent spatial relationships. While ASL also has spatial forms, POs are always produced in contact space, not air space. We therefore label POs "spatial-c" forms, where "spatial" refers to the area of the lexicon to which they belong, and "-c" refers to "contact space." POs are a set of substantive elements; in the data analyzed here, attested categories include: plane, incline, sphere, cylinder, individuated objects, and penetrable surface. In producing a PO, Signer 2 produces what might appear, at first glance, to be "handshapes" (in visual sign language terms). However, handshape inventories in visual signed languages are organized around contrasts that are often not perceptually salient via the tactile sense. Instead of feeling the external surface of handshapes, Signer 2 perceives shapes and their positioning via proprioception. The term PO captures the dual role of this unit, which both defines an articulatory space and assumes an articulatory shape.
3.1.2 Initiate
There are several ways of signaling which PO Signer 1 wants Signer 2 to select. Signed languages (visual and tactile) that employ handshapes rather than POs do not need conventional signs to request the active participation of Signer 2 in articulatory tasks; therefore, a new term is needed for this category of sign: a conventional signal produced by Signer 1 to elicit a PO from Signer 2. Since these forms initiate the entire PC, we call them initiate. initiate does not refer outside of the system; it establishes relations within it. It has a strictly language-internal function; therefore, we consider it a core-c form of the grammatical/functional variety.
We found three sub-categories of initiate, each one represented by a distinct form: initiate-touch, initiate-grasp, and initiate-prompt. In other words, there are three ways to initiate a proprioceptive construction and prompt Signer 2 to provide a PO: (1) by touching a surface on the body of Signer 2, thereby incorporating that surface into the active signing space, or activating it as an articulator (initiate-touch); (2) by grasping Signer 2’s hand or arm, thereby activating it as an articulator (initiate-grasp); or (3) by prompting Signer 2 to produce a form (initiate-prompt).
Initiate-touch activates some portion of Signer 2’s body when Signer 1 makes contact. That portion of Signer 2’s body is then activated in the production of a sign. In the 4-handed proprioceptive constructions we analyze here, the activated area functions both as a space for articulation and as a backgrounded, meaningful element. That backgrounded element is represented by a PO. Initiate-touch can only occur, then, when a PO has already been selected via initiate-grasp or initiate-prompt. In sum, initiate-touch works to foreground a new contact space against a previously established background.
For example, in Figure 2a-2b Signer 1 initiates the basic PO, which includes Signer 2’s fist and arm, placed vertically. You can see that this PO has been selected in Figure 2c. Next, Signer 1 traces Signer 2’s arm to represent the stick of the lollipop in Figure 2e. In the second case, this activates a smaller PO (only the arm of A2) within the previously established PO (the arm and fist of A2). When a smaller portion of a previously established PO is activated, we label that initiate-touch.
As stated above, Signer 1 has two additional options for initiating the PC. They can use initiate-grasp, which involves grasping some portion of Signer 2's body and selecting a PO by moving Signer 2's hand or arm into that shape. Signer 1 can also use initiate-prompt. Attested categories of initiate-prompt include i-prompt-tap and i-prompt-po. These two forms can work in tandem. For example, it is common for Signer 1 to tap Signer 2's non-dominant hand (A4) twice before producing a shape. This shape is not the PO, but a request for Signer 2 to copy the shape, thereby producing the PO. We therefore label this "prompt-po." The prompt-tap that sometimes precedes it is an instruction to Signer 2 to be prepared for a prompt-po. prompt-po can occur alone, while prompt-tap cannot.
3.1.3 Movement/Contact types (MCs)
POs are indeterminate until Signer 1 adds more information by tracing, gripping, and producing other forms of movement and contact on the PO. We therefore identified those conventional signals as Movement-Contact types (MCs). MCs act on the pre-determined PO or activate an additional PO, thereby backgrounding the previous PO. For example, A2 was used to represent the lollipop in Figure 2. At first, the fist of A2 was foregrounded by an MC to represent the candy portion of the lollipop. The arm was available, but backgrounded at that point. Next, an MC is used to foreground the arm (as a cylinder) as a PO, to represent the stick of the lollipop. In Figure 3 the same arm (this time as a plane) is used as a PO to represent a horizontal surface, where several lollipops are located (white circles). The locations themselves are represented by MCs.
MCs are substantive, spatial-c forms, which use contact space to represent spatial concepts. In Figure 3, Signer 1 produces what might be seen as handshapes as he makes contact with Signer 2, but the handshapes matter much less than the way the fingers or hand contact the PO. Attested MCs include: trace, grip, grip-twist, grip-wiggle, slide, penetration, tap, slap, press, scratch, and move. The part of the form with contact is always counted toward the duration of the MC; i.e., when A1 and A2 are touching, as shown with the white circles. If a "listening hand" (e.g., A4 in Figure 2a) is following the movements of A1 in a PC, then the movement from one MC to a subsequent MC is also included.
Figure 3: Signer 1 (right) produces multiple MCs on previously established PO
3.1.4 Prompt-to-Continue (PTC)
Finally, we observed that once a PO is established, Signer 1 can hold the PO in place during the subsequent MCs, until the final MC has been produced. Across many instantiations, this form seems to serve the function of maintaining the active, contact signing space generated by the PO (see A3 in Figure 2d). It tells Signer 2, "Leave this hand here. There is more to come." We therefore call this category of forms prompt-to-continue (PTC). PTC maintains the active status of the PO by maintaining contact with it until the string of movement-contact types is completed. The end of this unit often co-occurs, or is closely linked, with the production of the final MC in the proprioceptive construction. Like initiate, prompt-to-continue has a strictly language-internal function; therefore, we consider it a core-c form of the grammatical/functional variety. Attested categories include hold and press.
3.2 Correspondence between Units and Articulators
In this section we describe the systematic links in quantitative terms between the functional units and articulatory units of a PC, illustrated in Figure 2a-2d, which we argue have been assigned to specific articulators among protactile signers. As stated above, the order of elements is consistent. When a new initiate occurs, its articulation will always begin before the PO. When a new PTC occurs, its articulation will always begin after the PO has been established, and finally, the articulation of the MC always begins after all of the other components of the PC have been established, i.e. MC is last in the sequence.
In Figure 2a-2b, Signer 1 requests Signer 2’s active participation by “grasping” her dominant hand (A1), and moving it toward a vertical position; this is initiate-grasp. In Figure 2c, Signer 2 is responsive to this request and repositions her arm, which is her dominant articulator (A2). In Figure 2d, Signer 1 holds Signer 2's arm in place; this is prompt-to-continue (PTC), and finally, in Figure 2e, Signer 1 traces Signer 2's arm, to highlight its cylindrical shape. This sequence together comprises the second PC with the meaning “cylinder.” Together, “cylinder” + “sphere” (not pictured here) describe the entire lollipop stimulus. All protactile signers produced a construction like this to represent the lollipop. Each description had two parts: a cylinder (to represent the stick) and a sphere (to represent the candy).
After determining the order and roles for each sub-unit of the PC, we analyzed the consistency with which specific functions were assigned to specific articulators. The frequency with which each individual mapped articulators to functional roles is presented in Table 3, along with standard error calculations. Proportions for each articulator are based on the total for that function (I, PO, PTC, MC); the "total" proportion of each function is based on the grand total of productions for each participant. Results from Mann-Whitney U comparisons of rankings for each functional unit show that: for Initiate, A1 values are significantly higher than A3 values (U=6, z-score 2.29, p<.05); for PO, A2 values are significantly higher than A4 values (U=6, z-score 2.29, p<.05); for PTC, A3 values are significantly higher than A1 values (U=0, z-score 3.07, p<.01); and for MC, A1 values are significantly higher than A3 values (U=0, z-score 3.07, p<.01).
Table 3: Proportion of articulatory-functional alignment by individual.*
                          Initiate**           Proprioceptive Object   Prompt-to-Continue      Movement-Contact Type
                          A1    A3    I-Total  A2    A4    PO-Total    A1    A3    PTC-Total   A1    A3    MC-Total
Participant 1             0.48  0.52  0.34     0.74  0.26  0.18        0.25  0.75  0.10        0.68  0.32  0.38
Participant 2             0.73  0.27  0.27     0.86  0.14  0.20        0.32  0.68  0.13        0.78  0.22  0.40
Participant 3             0.48  0.52  0.29     1.00  0.00  0.25        0.00  1.00  0.08        0.63  0.37  0.38
Participant 4             0.66  0.34  0.21     0.95  0.05  0.17        0.36  0.64  0.19        0.86  0.14  0.43
Participant 5             0.52  0.48  0.20     0.98  0.02  0.14        0.26  0.74  0.16        0.83  0.18  0.49
Participant 6             0.81  0.19  0.29     0.00  0.00  0.00        0.13  0.87  0.05        0.99  0.01  0.65
Participant 7             0.58  0.42  0.34     0.78  0.22  0.16        0.18  0.82  0.14        0.82  0.18  0.35
Average (all)             0.60  0.40  0.28     0.85  0.15  0.15        0.26  0.74  0.13        0.82  0.18  0.44
Standard error            0.05  0.05  0.02     0.13  0.04  0.00        0.05  0.05  0.02        0.04  0.04  0.04
*One of the (male) participants responded to only one of the three stimuli.
** initiate-prompt-tap is primarily produced with A3. As shown in Table 3, P1, P3, and P5 produce initiate-prompt-tap frequently, and therefore share initiates between A1 and A3. P2, P4, and P6 do not produce many initiate-prompt-taps, which increases their use of A1, where the other types of initiate are produced.
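The Mann-Whitney U values reported above can be recovered directly from the per-participant proportions in Table 3. The sketch below implements the U statistic from first principles (counting pairwise wins across samples, with ties counted as one half); it is an illustration of the calculation, not the authors' original analysis script.

```python
def mann_whitney_u(x, y):
    """Return U = min(U1, U2), where U1 counts pairs (xi, yj) with xi > yj,
    and a tie (xi == yj) contributes 0.5 to the count."""
    u1 = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
             for xi in x for yj in y)
    return min(u1, len(x) * len(y) - u1)

# Per-participant proportions from Table 3 (seven participants).
initiate_a1 = [0.48, 0.73, 0.48, 0.66, 0.52, 0.81, 0.58]
initiate_a3 = [0.52, 0.27, 0.52, 0.34, 0.48, 0.19, 0.42]
po_a2 = [0.74, 0.86, 1.00, 0.95, 0.98, 0.00, 0.78]
po_a4 = [0.26, 0.14, 0.00, 0.05, 0.02, 0.00, 0.22]
ptc_a3 = [0.75, 0.68, 1.00, 0.64, 0.74, 0.87, 0.82]
ptc_a1 = [0.25, 0.32, 0.00, 0.36, 0.26, 0.13, 0.18]
mc_a1 = [0.68, 0.78, 0.63, 0.86, 0.83, 0.99, 0.82]
mc_a3 = [0.32, 0.22, 0.37, 0.14, 0.18, 0.01, 0.18]

print(mann_whitney_u(initiate_a1, initiate_a3))  # 6.0, matching the reported U=6
print(mann_whitney_u(po_a2, po_a4))              # 6.0, matching the reported U=6
print(mann_whitney_u(ptc_a3, ptc_a1))            # 0.0, matching the reported U=0
print(mann_whitney_u(mc_a1, mc_a3))              # 0.0, matching the reported U=0
```

The U=0 cases arise because the two columns do not overlap at all (every PTC-A3 value exceeds every PTC-A1 value, and likewise for MC), the strongest possible separation for samples of this size.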
We calculated the proportion with which each participant assigned each of the PC roles (I, PO, PTC, and MC) to each articulator, and then averaged the individual averages. We found that initiate was produced most often with A1 (60% of 623 tokens), with A3 in the remaining cases (40%). PO was produced most often with A2 (85% of 335 tokens), with A4 in the remaining cases (15%). PTC was produced most often with A3 (74% of 280 tokens), with A1 in the remaining cases (26%). MC was produced most often with A1 (82% of 966 tokens), with A3 in the remaining cases (18%).
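Averaging the individual averages is an unweighted (macro) average: each participant counts equally regardless of how many tokens they produced, so high-volume signers do not dominate the group figure. A minimal sketch of that calculation and of the standard error, using the Initiate-A1 column of Table 3; the small rounding difference from the published 0.60 presumably reflects averaging over unrounded per-participant proportions:

```python
import math

# Proportion of initiates produced with A1, per participant (Table 3).
initiate_a1 = [0.48, 0.73, 0.48, 0.66, 0.52, 0.81, 0.58]

n = len(initiate_a1)
macro_mean = sum(initiate_a1) / n          # ~0.609; reported as 0.60
# Standard error: sample standard deviation divided by sqrt(n).
variance = sum((v - macro_mean) ** 2 for v in initiate_a1) / (n - 1)
std_err = math.sqrt(variance) / math.sqrt(n)   # ~0.049; reported as 0.05
```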
Figure 4: Proportion of articulatory-functional alignment by group.
As you can see in Figure 4, for PO, PTC, and MC, there is a clear division of labor: PO is most often assigned to A2, PTC to A3, and MC to A1. While there is a preference for initiate to be produced with A1, the pattern is not strong relative to the other categories. The use of the articulators for each of the functional units is consistent across participants for PO, PTC, and MC. We see some variation, however, in the proportion of use for A1 and A3 for initiate. One possible reason for this is that the different types of initiates are assigned to different articulators.
To investigate this possibility, we analyzed the sub-types of initiate (I). Again, in order to correct for the differences in token count among participants, we calculated the proportion with which each participant assigned each of the PC categories to each articulator, and then averaged the individual averages to reach the analysis presented in Figure 5 below.
Figure 5: Percentages of sub-Initiate forms produced by A1 and A3
i-touch was produced most often by A1 (87% of 181 tokens). However, i-grasp was not clearly assigned to one articulator: A3 produced 54% of 372 tokens, and A1 produced the remainder. Similarly, i-prompt was more often produced by A1 (59% of 61 tokens), but A3 was not far behind (41%). While i-grasp may be equally distributed across articulators, we suspected that i-prompt should be analyzed further into its two sub-types, i-prompt-tap and i-prompt-po, as shown in Figure 6.
Figure 6: Percentages of sub-sub-Initiate forms produced by A1 and A3
i-prompt-po was most often produced by A1 (89% of 28 tokens), while the remaining tokens were produced by A3. i-prompt-tap, in contrast, was most often produced by A3 (76% of 33 tokens), while the remaining tokens were produced by A1. This is a small number of tokens; therefore, we take these calculations to be provisional. Nevertheless, a strong pattern presents itself here. Apart from i-grasp, which appears to be distributed almost equally across A1 and A3, each linguistic task has been assigned to a specific articulator.
In sum, analysis of these data suggests that among protactile signers, specific linguistic functions are assigned to specific articulators and distributed over the dyad in PCs. These relations are becoming conventionalized, allowing two signers to coordinate four articulators quickly and efficiently. The use of Signer 2’s hands and arms as part of the active articulatory apparatus differs from both visual and tactile signed languages, which use two articulators. This study, therefore, provides new insights into how emergent phonological systems can become conventionalized, and broadens our understanding of the flexibility and potential of phonology as it is manifested in different communication modalities.
3.3 Generalizing PC devices
We performed one additional analysis in order to determine if the innovations found in PCs are used elsewhere in the lexicon. We hypothesized that these patterns are affecting spatial forms at a faster rate than core forms. In order to test this secondary hypothesis, we assigned each annotation to one of four categories: spatial-a, spatial-c, core-a, and core-c. (Recall that “-c” indicates that the form was produced in contact space and “–a” indicates that the form was produced in air space.) In these data, a total of 1,450 spatial forms were produced. 96% of those forms were produced in contact space. A total of 1,419 core forms were produced, and of those, 62% were produced in contact space (Figure 7).
Figure 7: Percentages of spatial and core forms produced in contact vs. air space
As stated in the Introduction, the foreign component of the lexicon has not yet shown much modification in PCs, perhaps because both the movement and the location are predictable (redundant) in fingerspelling. We therefore looked to the core lexicon for examples of the use of I, PO, PTC, and MC. At this stage of the work, we make no claims about the direction of the generalization: it could be that PCs occur first and become productive in the core lexicon later, or vice versa.
There are two routes into the core protactile lexicon. First, core forms in ASL, which are conventionally produced in air space, can be borrowed into protactile language by simply changing their place of articulation to contact space, a device that is obligatory in PCs via proprioceptive objects. In the core, contact space is not on a proprioceptive object but somewhere on Signer 2’s body that provides tactile grounding, even if the place of contact has no particular meaning as it does in a PO. In these data, there are several ways this is accomplished (described below). We think that as protactile language develops, at least some of these patterns will become more widely conventionalized across groups of protactile signers. The second route for core protactile lexical items is through the spatial lexicon itself. As stated above, the spatial lexicon—in both visual and tactile signed languages—contains constructions where all parameters of the sign are (or can be) meaningful. Spatial constructions can enter the core lexicon by abstracting away from the details of the description. We expect both of these processes, given the right communicative ecology, to play out in protactile language. The analysis presented here, then, offers some insight into some possible trajectories for language emergence.
In this study, the most common pattern in transferring ASL core lexical items to contact space involves ASL handshapes that are articulated by making contact with the dominant hand of Signer 2 in contact space, instead of with the non-dominant hand of Signer 1 in air space. For example, once the lollipop has been described, the signer establishes locations in contact space (on the palm of Signer 2) to represent the relative locations of multiple lollipops placed on the table. While the PO structure (the palm) was still active, Signer 1 produced an ASL "Y" handshape, as in the ASL sign "same," shown on the left side of Figure 8 (ASL Signbank 2020); however, here it is produced by making contact with the PO, as shown on the right side of Figure 8. Therefore, while "same" is a core lexical form in ASL, here the ASL handshape has been transferred to contact space using conventional PC devices, in this case a PO plus MC combination. Where ASL handshapes are transferred to contact space in this manner, PC devices are operating beyond the spatial lexicon.
Figure 8: Handshape transferred to contact space via PC devices and conventions
Spatial constructions also enter the core lexicon by abstracting away from the details of the description. For example, one protactile signer described 5 jacks, which were spread out on the table, by producing mc-press two times on po-plane and then adding the number "5." We have observed that mc-press is often used to describe the location of a referent in relation to another referent (e.g., "One jack is here [mc-press] and another is here [mc-press]"). In this case, however, both the location of the referents and the number of referents are abstracted away from the details of the description to mean something like: The jacks are distributed in space, not: this jack is here and this jack is here, and so on, until the locations of each of the 5 jacks have been described. The main cues that distinguish these two meanings seem to be: (1) the speed of production; (2) the presence vs. absence of pauses between instances of mc-press; and (3) the presence vs. absence of a lengthened mc-press, co-articulated with all other instances, to mark the origo, or position from which reference is calculated. This suggests that core protactile lexical items, in addition to entering via the transfer of ASL core forms into contact space, are also entering via the protactile spatial lexicon. In this case, the handshape used in the ASL demonstrative "this," shown on the left side of Figure 9 (ASL Signbank 2020), is transferred into contact space by making contact with the PO (Figure 9, right).
Figure 9: Handshape transferred to contact space using PC devices and conventions
4. Discussion
In recent work on the phonology of emerging signed languages, Brentari has argued that minimal pairs and phonological rules are insufficient criteria for deeming a phenomenon to be phonological (Brentari et al. 2012, Coppola and Brentari 2014, Brentari et al. 2017). Rather, phonological patterns in emergent languages can be grasped by way of more basic principles, which organize the system slowly in historical time during conventionalization.
One way to think about innovations in Protactile is from the perspectives of two general pressures on a phonological system (Brentari, 2019). The first is the pressure of efficiency, common to both signed and spoken languages, which includes how the units of the system are organized to maximize the information conveyed, as well as ease of production, ease of perception, and the way that the strengths of the particular communication modalities affect it (auditory-aural; visual-gestural; tactile-proprioceptive). Efficiency includes principles of redundancy and well-formedness, which we see in the PC forms we have analyzed.
The internal structure of the protactile elements described here utilizes redundancy, since the space introduced in the PO must be the same one elaborated on in the MC unit, and the two must occur in that order. The signers know what is coming next in a PC because the order is fixed. It is clear that principles of well-formedness are at work because protactile signers correct learners of the system when they produce incorrect forms. The inventory of values for each form has definable boundaries that allow it to be interpreted as well-formed or not.
The second pressure is to maximize the affordances of iconicity, which all languages exploit, but which sign languages exploit to a greater extent. Since relations of resemblance will vary as modes of perception vary, we would expect a language used by protactile perceivers to exhibit a kind of iconicity grounded in non-visual modes of experience. Given that protactile signers have experience with signed languages, one might also expect that they would have a high “iconicity threshold” for protactile language; that is, they want their language to be as iconic as possible, because that is what ASL offers in the visual modality. The way that types of iconicity affect the form–meaning correspondences of units in protactile language is an area that can contribute to our understanding of language more generally. As we see in the forms we have discussed in this paper, tactile and proprioceptive iconicity has started to replace visual iconicity in protactile language.
To efficiency and iconicity, we add a third pressure: the necessity of establishing and maintaining deictic relations (Bühler 2001 [1934], Hanks 1990). Describing and discussing shared objects of attention in protactile language requires deictic reference, and deictic reference requires the ability to inhabit a shared and reciprocal zero-point, or "origo," from which reference can be computed. As stated by Hanks, "The question for deixis is not 'Where is the referent?' but 'How do we identify the referent in relation to us?'" (Hanks 2009:12). Protactile signers answer that question in ways that non-protactile signers would not think to (Edwards 2017). The ways that different deictic relations, grounded in different forms of spatial cognition, affect form-meaning correspondences of units in signed languages is, like iconicity, an area that can contribute to our understanding of language more generally. Iconicity and indexicality are sign-object relations (Peirce 1955/1940 [1893-1910]), which interact with, and exert pressure on, the internal organization of grammatical systems in signed and spoken languages (signed languages: Brentari 2019, Dudis 2004, Horton 2018, Shaw and Delaporte 2015, Hwang et al. 2017, Padden et al. 2013; spoken languages: Hanks 1990, Inoue 2004, Kockelman 2003, Sicoli 2014, Silverstein 1976).
The consistent assignment of a particular linguistic function to a particular articulator, as well as the constraints on how information can be packaged and in what order, suggest that strictly linguistic principles are being applied as well, generating patterns of distribution, discreteness, and productivity of form, which are becoming conventionalized across a group of protactile signers. This complex of processes works together to link form with meaning in increasingly stable ways.
Coppola and Brentari (2014), building on recent theories of language emergence, have proposed three stages in the emergence of phonology:
Stage 1: Increase Contrasts: Recognize particular features as a form that can be manipulated to create different meanings or used for grammatical purposes.
Stage 2: Create the Opposition: Distinguish the distribution of two features or feature values in one’s system, associating one feature with one meaning and the other to another meaning. This association does not have to be complete or absolute.
Stage 3: Apply the Opposition Productively: Apply the feature or class of features productively to new situations where the same opposition is needed.
Using contact space for meaning satisfies Stage 1. Creating opposition among the four articulators satisfies Stage 2. And the generalization of I, PO, PTC, and MC to the core satisfies Stage 3. As discussed above, the third stage is not yet in full swing. Observing growth in the productive application of the oppositions described here will offer unique opportunities to test the model put forth by Coppola and Brentari (2014) in the tactile modality.
5. Conclusions
In this paper we have shown that an important step in the conventionalization of a new phonological system is underway. This provisionally suggests that the tactile/proprioceptive modality can sustain language. The case we report is similar to, and different from, cases of emerging sign languages in Nicaragua (Kegl and Iwata, 1989; Senghas and Coppola, 2001) and Israel (Sandler et al. 2005). Participants in the present study acquired ASL as children. As they became blind and ASL became difficult to use, individuals compensated in idiosyncratic ways (Edwards 2014). This led to a splintering of ASL into simplified, idiosyncratic systems, similar to homesign systems, in that they were developed by individuals who routinely communicated in non-reciprocal contexts, where their systems were not used by those communicating with them (Goldin-Meadow and Feldman 1977). When these idiosyncratic systems came together in reciprocal communication contexts (i.e. protactile contexts), the linguistic patterns we describe began to cohere. Similarly, when homesign systems come together in reciprocal visual communication contexts, languages emerge (Goldin-Meadow and Brentari 2017:29).
One significant difference is that the innovations described in this paper were initiated after participants acquired a first language. For reasons discussed elsewhere (Clark 2017, granda and Nuccio 2018), protactile signers are aiming for the maximization of affordances in the tactile/proprioceptive modality over and against the preservation of ASL grammar. Whatever is left of ASL is being sidelined, functioning mostly as an archival lexicon. Signs are retrievable from the ASL lexicon insofar as they can be transferred to contact space without violating emerging protactile conventions.
The conventions we have described in this paper align in several ways with recent findings in a growing body of research on DeafBlind language use and tactile sign languages (Willoughby et al. 2018). First, information that visual sign languages package simultaneously in classifier predicates is more sequentialized in protactile language at the phonological level. In other words, the components of the PC unfold, as a rule, in sequence. This finding supports Checchetto et al.'s (2018) prediction that LISt will tend toward sequentialization relative to the simultaneity of visual signed languages. Like Checchetto et al. (2018), we also note a general avoidance of the face in the production of protactile signs.
This research contributes new findings as well. In particular, protactile signers have a clear preference for contact space over air space, as demonstrated in Section 3.3. The shift to contact space is triggering radical changes in the phonological organization of protactile language. In this paper, we have argued that an early stage in that process is the consistent assignment of specific linguistic tasks to the four articulators available for producing PCs.
In line with studies of language emergence, the results of this research support the idea that the human drive to create language is resilient, supported by whatever modality can sustain it. Our findings also point to the fact that iconic and indexical pressures can exert palpable effects on the emergent structure of specific languages. Where the drive to create language and the drive to use language align, grammar emerges.
Note: This research was supported by an NSF research grant (BCS-1651100) awarded to Edwards and Brentari. We wish to thank Jelica Nuccio, aj granda, Vince Nuccio, and the many members of the Seattle DeafBlind community who contributed to, and participated in, this research; John Lee Clark and Susan Goldin-Meadow for comments on the manuscript; our research staff: Halene Anderson, Joanna Ball Smith, Oscar Chacon, Abby Clements, Eddie Martinez, Lilia McGee-Harris, Jelica Nuccio, and Yashaira Romilus for their analyses and insights; and Paul Dudis, the Department of Linguistics at Gallaudet University, Diane Lillo-Martin, and the anonymous reviewers for invaluable feedback.
References:
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring, MD: Linstock Press.
Baus, C., E. Gutiérrez-Sigut, J. Quer, and M. Carreiras. 2008. Lexical access in Catalan Signed Language production. Cognition, 108(3), 856–865.
Boyes Braem, P. 1981. Features of the handshape in American Sign Language. Berkeley: University of California dissertation.
Brentari, D. 1998. A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane. 2019. Sign Language Phonology. Cambridge, UK: Cambridge University Press.
Brentari, Diane and Carol A. Padden. 2001. Native and foreign vocabulary in American Sign Language: A lexicon with multiple origins. Foreign vocabulary in sign languages: A Cross-linguistic investigation of word formation, ed. by D. Brentari. Mahwah, NJ: Lawrence Erlbaum.
Brentari, Diane, Marie Coppola, Laura Mazzoni, and Susan Goldin-Meadow. 2012. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory 30.1-31.
Brentari, Diane and Marie Coppola. 2013. What sign language creation teaches us about language. WIREs Cognitive Science 4.201-11.
Brentari, Diane, A. Di Renzo, J. Keane, and V. Volterra. 2015. Cognitive, cultural, and linguistic sources of a handshape distinction expressing agentivity. Topics in Cognitive Science 7.95-123.
Brentari, Diane, Marie Coppola, Pyeong Whan Cho, and Ann Senghas. 2017. Handshape complexity as a precursor to phonology: Variation, emergence, and acquisition. Language Acquisition 24. 283-306.
Bühler, Karl. 2001 [1934]. Theory of language: The representational function of language. Amsterdam; Philadelphia, PA: John Benjamins.
Carreiras, M., E. Gutiérrez-Sigut, S. Baquero, and D. Corina. 2008. Lexical processing in Spanish Signed Language (LSE). Journal of Memory and Language, 58(1), 100–122.
Caselli, N. K., and A.M. Cohen-Goldberg. 2014. Lexical access in sign language: a computational model. Frontiers in Psychology 5, 428
Checchetto, Alessandra, Carlo Geraci, Carlo Cecchetto, and Sandro Zucchi. 2018. The language instinct in extreme circumstances: The transition to tactile Italian Sign Language (LISt) by Deafblind signers. Glossa 3.1-28.
Clark, John Lee. 2017. Distantism. https://johnleeclark.tumblr.com/.
Clark, John Lee. 2020. The raft. Manuscript in preparation.
Collins, Steven. 2004. Adverbial morphemes in Tactile American Sign Language. Doctoral dissertation, Graduate College of Union Institute and University.
Collins, Steven and Karen Petronio. 1998. What happens in Tactile ASL? Pinky extension and eye gaze: Language use in Deaf communities, ed. by C. Lucas, 18-37. Washington DC: Gallaudet University Press.
Coppola, Marie and Diane Brentari. 2014. From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner. Frontiers in Psychology 5. doi:10.3389/fpsyg.2014.00830
Corina, D. P., and K. Emmorey. 1993. Lexical priming in American sign language. Poster presented at the 34th annual meeting of the Psychonomics Society,Washington, DC.
Corina, D. P., and U. Hildebrandt. 2002. Psycholinguistic investigations of phonological structure in ASL. In R. P. Meier, K. Cormier, and D. Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 88–111. Cambridge, United Kingdom: Cambridge University Press.
Crasborn, Onno and Han Sloetjes. 2008. Enhanced ELAN functionality for sign language corpora. Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages (at LREC 2008), 39–43. Online: http://www.lrec-conf.org/proceedings/lrec2008/.
Dudis, Paul G. 2004. Body partitioning and real-space blends. Cognitive Linguistics 15.223-38.
Edwards, Terra. 2014. Language emergence in the Seattle DeafBlind Community. Doctoral dissertation, The University of California, Berkeley.
Edwards, Terra. 2017. Sign creation in the Seattle DeafBlind community: A Triumphant story about the regeneration of obviousness. Gesture 16.304-27.
Goldin-Meadow, Susan and Diane Brentari. 2017. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 40. doi:10.1017/S0140525X15001247.
Goldin-Meadow, Susan and Heidi Feldman. 1977. The Development of Language-Like Communication Without a Language Model. Science 197.22-24.
granda, aj and Jelica Nuccio. 2018. Protactile Principles. Tactile Communications. https://DeafBlind.tactilecommunications.org/ProTactilePrinciples.
Gutiérrez, E., O. Müller, C. Baus, and M. Carreiras. 2012. Electrophysiological evidence of phonological priming in Spanish Sign Language lexical access. Neuropsychologia 50.1335-46.
Hanks, William F. 1990. Referential practice: Language and lived space among the Maya. Chicago: University of Chicago Press.
Hanks, William F. 2009. Fieldwork on deixis. Journal of Pragmatics 41.10-24.
Horton, Laura. 2018. Conventionalization of shared homesign systems in Guatemala: Social, lexical, and morphophonological dimensions. Doctoral dissertation. University of Chicago.
Hochgesang, Julie A., Onno Crasborn, and Diane Lillo-Martin. 2020. ASL Signbank. New Haven, CT: Haskins Lab, Yale University. https://aslsignbank.haskins.yale.edu/
Hwang, So-One, Nozomi Tomita, Hope Morgan, Rabia Ergin, Deniz Ilkbasaran, Sharon Seegers, Ryan Lepic and Carol Padden. 2017. Of the body and the hands: patterned iconicity for semantic categories. Language and Cognition 9.573-602.
Inoue, Miyako. 2004. What does language remember?: Indexical inversion and the naturalized history of Japanese women. Journal of Linguistic Anthropology 14.39-56.
Itô, Junko, and Armin Mester. 1995a. Japanese phonology. In J. Goldsmith (ed.), Handbook of phonological theory (pp. 817–838). Oxford/New York: Blackwell.
Itô, Junko, and Armin Mester. 1995b. The core-periphery structure of the lexicon and constraints on reranking. In J. Beckman, L. Walsh Dickey and S. Urbanczyk (eds.), University of Massachusetts occasional papers 18: Papers in Optimality Theory (pp. 181–209). Amherst, MA: GLSA (Graduate Linguistic Students Association), University of Massachusetts.
Iwasaki, Shimako, Meredith Bartlett, Howard Manns, and Louisa Willoughby. 2018. The challenges of multimodality and multisensorality: Methodological issues in analyzing tactile signed interaction. Journal of Pragmatics 143.215-27.
Keane, Jon and Diane Brentari. 2016. Fingerspelling: Beyond Handshape Sequences. In M. Marschark and P. Siple, eds., The Oxford Handbook of Deaf Studies in Language: Research, Policy, and Practice, 146-160. NY/Oxford: Oxford University Press.
Kegl, Judy and Gayla Iwata. 1989. Lenguaje de Signos Nicaragüense: A pidgin sheds light on the “creole”? ASL. In M. Carlson , S. DeLancey, S. Gildea, D. Payne, A. Saxena, eds., Proceedings of the Fourth Meetings of the Pacific Linguistics Conference, 266-294. Eugene, Oregon: Department of Linguistics, University of Oregon.
Kockelman, Paul. 2003. The Meanings of interjections in Q’eqchi’ Maya. Current Anthropology 44.467-90.
Kooij, E. van der, and H. van der Hulst. 2005. On the internal and external organization of sign segments: Some modality-specific properties of sign segments in NGT. In M. van Oostendorp and J. van de Weijer (eds.), The internal organization of phonological segments. Studies in Generative Grammar 77, 153-180. Berlin/New York: Mouton de Gruyter.
Liddell, S. 1984. THINK and BELIEVE: Sequentiality in American Sign Language. Language 60,372–392.
Liddell, S. and R. E. Johnson. 1989. American Sign Language: The phonological base. Sign Language Studies 64. 197–277.
Marentette, P., and R. Mayberry. 2000. Principles for an emerging phonological system: A case study of early American Sign Language acquisition. In C. Chamberlain, J. Morford and R. Mayberry, eds., Language Acquisition by Eye, 71-90. Mahwah, NJ: Lawrence Erlbaum Associates.
Meier, R., C. Mauk, A. Cheek, and C. Moreland. 2008. The Form of Children's Early Signs: Iconic or Motoric Determinants. Language Learning and Development 4(1). 63–98
Mesch, Johanna. 2001. Tactile sign language: Turn taking and questions in signed conversations of Deaf-blind people. Hamburg: Signum.
Mesch, Johanna. 2013. Tactile signing with one-handed perception. Sign Language Studies 13.238-63.
Mesch, Johanna, Eli Raanes and Lindsay Ferrara. 2015. Co-forming real space blends in tactile signed language dialogues. Cognitive Linguistics 26.261-287.
Padden, Carol, Irit Meir, So-One Hwang, Ryan Lepic, Sharon Seegers and Tory Sampson. 2013. Patterned iconicity in sign language lexicons. Gesture 13.287-308.
Peirce, Charles Sanders. 1955/1940 [1893-1910]. Logic as semiotic: The theory of signs. Philosophical Writings of Peirce, ed. by J. Buchler. New York: Dover.
Petitto, L.A., R. Zatorre, K. Gauna, E.J. Nikelski, D. Dostie, and A. Evans. 2000. Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language. Proceedings of the National Academy of Sciences 97.13961-66.
Petronio, Karen and Valerie Dively. 2006. YES, #NO, Visibility, and variation in ASL and Tactile ASL. Sign Language Studies 7.57-98.
Quinto-Pozos, David. 2002. Deictic points in the visual-gestural and tactile-gestural modalities. Modality and Structure in Signed and Spoken Languages, ed. by R.P. Meier, K. Cormier and D. Quinto-Pozos. Cambridge: Cambridge University Press.
Reed, Charlotte M., Lorraine A. Delhorne, Nathaniel I. Durlach and Susan D. Fischer. 1995. A study of the tactual reception of Sign Language. Journal of Speech and Hearing Research 38.
Sandler, W 1989. Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy, Irit Meir, Carol Padden, and Mark Aronoff. 2005. The emergence of grammar: Systematic structure in a new language. Proceedings of the National Academy of Sciences of the United States of America 102.2661-65.
Sandler, W., and D. Lillo-Martin. 2006. Sign Language and Linguistic Universals. Cambridge/New York: Cambridge University Press.
Senghas, Ann and Marie Coppola. 2001. Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science 12.323-328.
Shaw, Emily and Yves Delaporte. 2015. A historical and etymological dictionary of American Sign Language. Washington, DC: Gallaudet University Press.
Sicoli, Mark. 2014. Ideophones, rhemes, interpretants. Pragmatics and Society 5.445-54.
Silverstein, Michael. 1976. Shifters, linguistic categories, and cultural description. Meaning in anthropology, ed. by K. Basso and D.B.A. Selby, 11-55. Albuquerque, NM: University of New Mexico Press.
Stokoe, William. 1960. Sign language structure: An outline of the visual communication systems of the American Deaf. Buffalo, NY: University of Buffalo (Occasional Papers 8).
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American Sign Language. Doctoral dissertation, University of California, San Diego.
Thompson, R., K. Emmorey, and T.H. Gollan. 2005. “Tip of the fingers” experiences by deaf signers: Insights into the organization of a sign-based lexicon. Psychological Science 16(11), 856–860.
Wilcox, Sherman. 1992. The phonetics of fingerspelling. Philadelphia, PA: John Benjamins.
Willoughby, Louisa, Shimako Iwasaki, Meredith Bartlett, and Howard Manns. 2018. Tactile sign languages. Handbook of pragmatics, ed. by J.-O. Östman and J. Verschueren, 239-58. Amsterdam: John Benjamins.
Zwitserlood, Inge. 2012. Classifiers. Handbook of Sign Language Linguistics, ed. by R. Pfau, M. Steinbach, and B. Woll, 158-185. Berlin: Mouton de Gruyter.
In this article, we examine emerging patterns in protactile language, essentially addressing what the units of the new language are, and how they can be determined. These patterns are most apparent in what we are calling “proprioceptive constructions” (PCs). PCs are comparable to “classifier constructions” in visual signed languages; however, PCs are produced by the hands and arms of both the signer and the receiver, unlike classifier constructions in visual signed languages, which are produced by the hands and arms of the signer alone. We hypothesize, therefore, that one of the earliest stages in the conventionalization of protactile phonology will necessarily involve coordination of the four articulators, and as part of this, each articulator will be assigned its own linguistic tasks. To test this hypothesis, several steps are required.
First, because we are focusing on aspects of the articulatory system, a set of criteria must be created for identifying articulatory units in terms of their phonological structure (Section 2.4). Second, the functional units, which constitute “linguistic tasks” for the articulators producing PCs must be identified and described (Section 3.1). Third, the correspondence of these units with particular articulators must be tested. In other words, we must find out if particular linguistic functions are being consistently performed by particular articulators (Section 3.2). Fourth, if we find that this is the case, we must determine whether or not these patterns are beginning to affect protactile forms beyond PCs (Section 3.3). In performing these interrelated analyses, our aim is to show how a new phonological system can be conventionalized in the tactile modality.
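The third step, testing whether particular linguistic functions are consistently performed by particular articulators, can be framed as a simple contingency analysis over annotated tokens. The sketch below is illustrative only: the function and articulator labels are hypothetical placeholders, not the coding scheme used in the study.

```python
from collections import Counter

# Hypothetical annotated tokens: (linguistic_function, articulator).
# Labels are invented for illustration.
tokens = [
    ("Initiate", "S1-dominant"), ("Initiate", "S1-dominant"),
    ("Initiate", "S1-dominant"), ("Initiate", "S2-dominant"),
    ("Ground",   "S2-dominant"), ("Ground",   "S2-dominant"),
    ("Ground",   "S2-dominant"), ("Ground",   "S1-nondominant"),
]

def consistency(tokens):
    """For each function, the share of tokens produced by its
    most frequent (modal) articulator; 1.0 = perfectly consistent."""
    by_function = {}
    for function, articulator in tokens:
        by_function.setdefault(function, Counter())[articulator] += 1
    return {f: max(c.values()) / sum(c.values())
            for f, c in by_function.items()}

print(consistency(tokens))  # {'Initiate': 0.75, 'Ground': 0.75}
```

A score near 1.0 for every function would support the hypothesis that each articulator has been assigned its own linguistic tasks; scores near chance would not.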
1.1 Background: Language use in DeafBlind communities
There are people all over the world who are DeafBlind, some of whom live as minorities within larger Deaf, sighted communities, while others are active members of a signing or non-signing DeafBlind community. Language and communication vary widely from community to community and across individuals in the same community. The dominant language in some DeafBlind communities in the United States is English, perceived via adaptive technologies such as amplification systems. In others, the dominant language is American Sign Language (ASL). In order to perceive ASL through touch, the receiver places their hand(s) on top of the hand(s) of the signer to track the production of signs. Just as spoken languages require adaptive measures to be perceived by DeafBlind signers, adaptations and innovations are necessary for the perception of visual languages by DeafBlind signers as well. However, those adaptations may not enable full access to the message; Reed et al. (1995:15) found that DeafBlind signers received ASL—a visual signed language—with only 60-85% accuracy, and that the largest source of errors was inaccuracies in the reception of the phonological parameters of ASL.
Recent research has examined the ways that language use is different among DeafBlind signers when compared to Deaf sighted signers (Checchetto et al. 2018; Collins and Petronio 1998; Collins 2004; Iwasaki et al. 2018; Mesch 2001, 2013; Mesch et al. 2015; Petronio and Dively 2006; Quinto-Pozos 2002; Reed et al. 1995; and see Willoughby et al. 2018 for an overview). For example, Mesch et al. (2015) report on tactile Swedish Sign Language, where DeafBlind signing dyads exhibit different positions for monologues vs. dialogues, and on “co-constructed” forms whereby clausal structures utilize articulators of both the “speaker” and the “listener” in “real space blends.” In ASL, Quinto-Pozos (2002) reports an avoidance of, and a restricted range of functions for, pointing signs. Iwasaki et al. (2018) describe how DeafBlind signers of Auslan manage turns at talk without access to non-manual features, such as eye gaze, eyebrow movements, and facial expressions, that sighted Auslan signers depend on in performing the same communicative functions. Petronio and Dively (2006) report an increase in frequency of the manual signs yes and no, which they attribute to a lack of access to corresponding non-manual expressions (e.g. head and brow movements). Checchetto et al. (2018) analyzed productions by the Italian Deafblind community using tactile Italian Sign Language (LISt). They have proposed several principles that will guide ongoing changes in LISt, with which our data largely agree. First, they propose that LISt will tend toward sequentialization of form, relative to the simultaneity of visual signed languages. Second, they propose that the functions performed by non-manual markers (i.e. the face) will be performed manually. Third, they propose that innovation in the distribution of lexical and grammatical form will occur.
1.2 Contribution of this Research
Over the past 60 years, there has been growing acceptance of the idea that the vocal/auditory channel is not the only set of sensorimotor peripheral systems that can sustain a phonological system. Since Stokoe (1960), and with decades of subsequent work on sign language phonology from the perspectives of distribution and constituent structure (Liddell 1984; Liddell and Johnson 1989; Sandler 1989; Brentari 1998; van der Kooij and van der Hulst 2005; Sandler and Lillo-Martin 2006), acquisition (Boyes Braem 1981; Marentette and Mayberry 2000; Meier et al. 2008), and processing (Corina and Emmorey 1993; Petitto et al. 2000; Corina and Hildebrandt 2002; Thompson et al. 2005; Baus et al. 2008; Carreiras et al. 2008; Gutiérrez et al. 2012; Caselli and Cohen-Goldberg 2014), there has been a slow, steady paradigm shift toward understanding phonology as the abstract component of a grammar that organizes meaningless elements, without specific reference to its communication modality.
This article contributes to that shift, calling into question the very definition of phonology. We ask: Can the tactile modality sustain phonological structure? The results of this study suggest that it can, and that the changes that have taken place as American Sign Language has been adapted to protactile environments have altered the very primitives used to create new signs.
This article contributes to the linguistics of tactile signed languages as well. Here, our contribution concerns the principles proposed by Checchetto et al. (2018). Our data support the prediction that the tactile modality will favor the sequentialization of form, relative to the simultaneity of visual signed languages. As will be demonstrated in what follows, PCs—the constructions that constitute the main focus of this article—exhibit a relative sequentialization of form at the phonological level, when compared with classifier constructions in ASL. Also, while not the focus of this article, our data also reflect a general avoidance of the face as part of the articulatory apparatus, and therefore are consistent with Checchetto et al.'s (2018) second and third principles.
In addition, we propose a principle concerning the use of space; namely: air space is dead space. What we mean by this is that for DeafBlind signers, contact with the body of the listener has affordances that the space on and around the body of the signer does not. The former is what granda and Nuccio (2018) call “contact space” and the latter is what they call “air space.” In air space, locations are perceived relative to each other against a visual backdrop that is inaccessible for DeafBlind signers (e.g., “to the right of the mouth” vs. “to the right of the eye”). In contrast, locations on the body of the listener can be clearly perceived against the backdrop of the listener’s own body.
1.3 Relationship of protactile language to visual sign languages
A cascade of consequences follows from the switch to contact space, and we have observed that these changes are triggering the emergence of a new phonological system in protactile language. At this point, as innovations in protactile language occur, protactile signers are not asking: “How do we adjust signs to make them more perceptible to us since we can’t see?” Instead, they seem to be demoting ASL, treating it like an archival lexicon. When protactile signers retrieve signs from the ASL lexicon, they are concerned with whether or not the sign can articulate to contact space in ways that do not break with emerging protactile conventions. For example, there are two classifier handshapes for representing a “person” in ASL: the “1” handshape and an upside-down “V” handshape. The “1” handshape does not articulate to contact space easily because the bottom of the wrist is difficult to position and move on the body of the addressee in a precise and perceptible way. It follows that it is difficult to modulate its path or speed to express manner, which is an important function of classifier constructions and PCs alike. In contrast, in the “V”* classifier, the two extended fingers representing the legs make contact with the body at the fingertips. This handshape is perceptible and much easier to modify for manner of movement, and so is preferred in the protactile system.
*In order to indicate walking the “V” handshape is turned upside down so that the fingers represent the legs.
The principles outlined by Checchetto et al. (2018) seem to follow the principle of “change by necessity”; namely, when a visual sign language structure is no longer workable, it will be modified, a new one will be innovated, or a non-linguistic strategy will be employed. In the work we report here, innovations differ in an important way from those described by Checchetto et al. (2018) and other analyses of signed languages perceived tactually. Protactile signers in Seattle have found that air space is ineffective, and they are making a sharp turn toward contact space. In doing so, they are maximizing the proprioceptive sense in ways that are, to our knowledge, unattested in both visual and tactile signed languages. The changes are, in part, a response to the imperceptibility of ASL structures, but they also reflect a conscious rupture with the practices of ASL. As reported by protactile signers themselves, they are less interested in retaining ASL as much as possible, and more interested in embracing the potential of the proprioceptive/tactile modality for precise and efficient communication and for creating strong iconic and indexical ties to the world as they experience it. granda and Nuccio (2018:13) explain: “As Deaf children, we were drawn to visual imagery in ASL stories—transported into the vivid details of the worlds created for us. As DeafBlind adults, we still carry those values within us, but ASL doesn’t evoke those same feelings for us anymore. When you are perceiving a visual language through touch, the precision, beauty, and emotion are stripped away; the imagery is lost. […] If you try to access an ASL story through an interpreter […], you just feel a hand moving around in air space […]. In air space we are told what is happening for other people, but nothing happens for us.”
This orientation suggests that protactile signers are prioritizing intuitive and effective communication over and against the preservation of ASL structures. In what follows, we argue that innovations emerging under these pressures are organized by new well-formedness principles. This is in no way a prediction about direction in which all DeafBlind communities will change; it is one way in which language emerges, given a particular “communicative ecology” (Horton 2018).
This raises the question for us as linguists of how much we should refer to visual sign language structures in describing this new language. According to John Lee Clark (2020), a skilled protactile signer and national leader of the protactile movement, the situation is changing rapidly: “ASL speakers have always had a huge advantage, a head start, in learning Protactile. For them, a great part of it is about ‘converting’ ASL knowledge into Protactile. But in the near future, this advantage will diminish. At some point ASL speakers and non-ASL speakers will need to take the same classes! Right now, ASL speakers can skip to ‘Protactile II,’ while non-ASL speakers start with Protactile.* But soon, that won’t be the case.”
*We are not suggesting that simultaneity is absent in protactile language. We note that complex layering of meaningful elements within a PC is not only possible, but common. For example, a protactile signer can express path and manner simultaneously with a PC representing a person walking in contact space, i.e. on the body of the addressee. This is done by pressing the index finger and middle finger, alternately down, in a particular way, while moving forward in some direction.
With this in mind we do our best in this article to define protactile units on their own terms, not in terms of their relation to visual sign languages. In describing these mechanisms, we are careful not to import categories from other linguistic modalities. Indeed, attempts to find one-to-one correspondences between units across modalities can be a hindrance. Handshape, location, and movement, for example, are not parameters that can be taken for granted in protactile languages, just as there is no reason to expect visual signed languages, or spoken languages, to have a conventionalized way of coordinating the articulators of two people. Therefore, we follow in the spirit of Stokoe (1960), who established unique labels, like “tabula,” “designator,” and “signation” to prevent equivocations across modalities. The labels we have created describe the functions of each part of the PC. For example, we label the four hands more neutrally as “Articulators,” and the category we label “Initiate” is the unit that is used to initiate, or start, the PC.
1.4 Background on the targeted linguistic structures
In this paper we focus on structures that describe motion and location events, a specific type of structural innovation in protactile language that is the correlate of classifier constructions in visual sign languages. We call these structures “proprioceptive constructions” (PCs). We focus on these structures for two reasons. First, four-handed forms are most prevalent in structures that describe motion and location events, and the aim of this study is to show how principles of conventionalization are being applied by protactile signers to sequence the four articulators.
The second reason we focus on descriptions of motion and location events is that the constructions which carry out parallel communicative functions in ASL describing motion and location events—classifier constructions—should, in theory, require more radical restructuring to be expressed in the tactile modality than other types of signs. The ASL lexicon has been divided into three parts (Brentari and Padden 2001), using a framework developed by Itô and Mester for the phonology of Japanese (Itô and Mester 1995a, 1995b). This lexical architecture, as we are using it here, is simply a way to conceptualize the different types of vocabulary items that any language is likely to have, rather than a language-specific proposal about ASL or Japanese. The “core” lexicon is composed of forms whose parameters are meaningless sub-lexical units with a highly conventionalized form-meaning association. These are the signs you would expect to see listed in a dictionary. Modifications in the core component have been described within many DeafBlind communities, such as those mentioned above by Checchetto et al. (2018), for example, the displacement of non-manuals to more manual forms, the sequentialization of form, or the use of dynamic points as opposed to static directional points. The forms derived from the manual alphabet consisting of fingerspelled sequences comprise a second component, the “foreign” component. The third, “spatial” component is composed of polycomponential structures, most commonly known as “classifier constructions” (Supalla 1982; Zwitserlood 2012), and other spatial signs.
Figure 1: The three components of the ASL lexicon (Brentari and Padden 2001)
The meanings of classifier constructions in the spatial component of visual sign languages are componential—all three of the primary manual components (handshape, movement and location) retain independent autonomous meaning, so there is little redundancy in the information conveyed by the parameters, and they cannot be understood unless there is full access to the form. In contrast, in the other parts of the lexicon partial information is sufficient for understanding because the redundant information is predictable. For example, in fingerspelled sequences, which belong to the foreign lexicon, except for the letters -J- and -Z-, the manual alphabet is composed exclusively of handshapes in a predictable location, and in which the movements are largely predictable transitions between handshapes (Battison, 1978; Wilcox, 1992; Keane and Brentari 2016). Location and movement are therefore somewhat predictable, or redundant, and can be understood even if only partial information is conveyed. Given the lack of redundancy of classifier constructions in visual sign languages, it is perhaps not surprising that classifier constructions are being restructured by protactile signers.
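The redundancy asymmetry described above can be made concrete with a toy information-theoretic measure: if one parameter is predictable from another, its conditional entropy given that other parameter is low. The sketch below uses invented token counts purely for illustration; it is not an analysis of the study's data.

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    """H(second | first) in bits: how unpredictable the second
    parameter is once the first parameter is known."""
    joint = Counter(pairs)
    marginal = Counter(x for x, _ in pairs)
    n = len(pairs)
    # H(Y|X) = sum over (x, y) of p(x, y) * log2(p(x) / p(x, y))
    return sum((c / n) * log2(marginal[x] / c)
               for (x, _), c in joint.items())

# Hypothetical (handshape, movement) tokens, illustrative only.
# Fingerspelling-like: movement is always a predictable transition.
foreign = [("A", "transition"), ("B", "transition"),
           ("A", "transition"), ("B", "transition")]
# Classifier-like: movement varies independently of handshape.
spatial = [("1", "arc"), ("1", "straight"), ("V", "arc"), ("V", "straight")]

print(conditional_entropy(foreign))  # 0.0
print(conditional_entropy(spatial))  # 1.0
```

On this toy measure, the fingerspelling-like tokens carry no independent movement information (the redundancy that lets partial perception suffice), while the classifier-like tokens require full access to both parameters.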
The primary aim of this study is therefore to analyze proprioceptive constructions, or “PCs” in order to determine the internal units that comprise them, and to begin to understand how the patterns of conventionalization of PCs extend to protactile phonology, more generally.
2. Study design and procedures
In this study, we analyze data generated in a description task. Dyads of protactile participants were asked to describe a series of tactile stimuli. We videorecorded, analyzed, and transcribed their productions. In the following sections, detailed information is provided about participants, procedures for collecting data, the stimuli that were used, and transcription methods.
Three fundamental observations have led to our hypotheses and drive the analysis of PCs we present here. The first observation concerns the use of four hands in PCs instead of two. PCs are not produced exclusively by the hands and arms of Signer 1 (the “speaker”). They also incorporate the body of Signer 2 (the “listener”). In order to coordinate the articulators of the two signers, Signer 1 needs a conventional way of signaling how and when they want Signer 2 to contribute to the co-articulation of protactile signs. We hypothesize that the conventionalization of such mechanisms involves assigning specific linguistic tasks to four articulators, in much the same way that the two hands in visual signed languages are assigned consistent and distinct tasks (Battison 1978). Since this coordination of articulators must be sorted out early on in the process of conventionalization for efficient and effective communication, the findings of this study can shed light on an early stage in the conventionalization of protactile phonology.
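One way to picture the hypothesized division of labor is as a mapping from linguistic tasks to the four articulators of the dyad, analogous to the dominant/non-dominant division Battison (1978) describes for the two hands in visual signed languages. This is a speculative sketch: apart from "initiate", which follows the paper's Initiate unit, the task names are invented placeholders.

```python
# The four articulators of a protactile dyad: Signer 1's and
# Signer 2's dominant and non-dominant hands/arms.
ARTICULATORS = {"S1-dom", "S1-nondom", "S2-dom", "S2-nondom"}

def well_distributed(pc):
    """A toy well-formedness check: every task is assigned to a known
    articulator, and no articulator carries two tasks at once."""
    assigned = list(pc.values())
    return (set(assigned) <= ARTICULATORS
            and len(assigned) == len(set(assigned)))

# "initiate" echoes the paper's Initiate unit; "ground" and
# "trace-path" are hypothetical task labels.
pc = {"initiate": "S1-dom", "ground": "S2-nondom", "trace-path": "S1-nondom"}
print(well_distributed(pc))  # True
print(well_distributed({"initiate": "S1-dom", "ground": "S1-dom"}))  # False
```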
The second observation relates to the functional units that comprise the PC. In order to address PC co-creation, we must identify and describe the functional units used to accomplish particular “linguistic tasks” and the way in which they contribute to the PC structure as a whole.
2.1 Participants.
The seven participants in this study (four males and three females, ages 32-47) were all DeafBlind individuals who had participated in a protactile network for at least one year. Six were exposed to ASL by the age of seven via visual perception (those who became blind in adulthood), and one (who was born blind) was exposed to ASL via tactile perception since birth. In adulthood, they moved to Seattle for employment, for access to a large, socially and politically active DeafBlind community, for educational opportunities, and/or for communication-related resources. At the time these data were collected, five of the seven participants were working in environments that required them to interact with other protactile signers daily, for many hours during the work-week, and variably at night and on the weekends when they attended formal and informal protactile events or interacted with their protactile roommates. The other two participants interacted with protactile signers often, according to their own reports, but with less frequency and consistency than the others, as they did not work in environments where protactile language was widespread. All of the participants in this study reported that they were right-handed.
2.2 Procedures
Recruitment took place in two stages. First, an email was circulated to relevant community leaders explaining the project and requesting participation. That email was shared by them, more broadly, within the community. A local DeafBlind educator selected a subset of those who responded, based on her evaluation of high protactile proficiency. During data collection events, prior to filming, we gave consent forms to participants in their preferred format (e.g. Braille or large print). We also offered to interpret the consent forms into protactile language. One of the co-authors who is fluent in protactile language then discussed the consent forms with each of the participants, answering questions and clarifying as requested. The consent forms included questions requesting permission to include images of these communication events in published research and other research and education contexts, such as conferences and classrooms. Once consent had been obtained, we commenced with data collection.
Data collection took place at a dining room table in a privately-owned home. Dyads of protactile signers were seated at the corner of the table. The interactions were always between two protactile signers, both of whom were participants in the study. They changed roles after a given object (item) was completed, and discussed and gave feedback to one another about the clarity of a description, as it unfolded. We placed a cloth napkin with thick edges on the tabletop to provide a tactile boundary within which the stimuli would be placed. The stimuli were placed on the napkin in pseudo-random order and Signer 1 was instructed to “describe what they feel.” Signer 2 was told that Signer 1 would be describing something they felt. After the description, Signer 2, who was not exposed to the stimulus prior, picked up the object and explored it tactually. The co-authors were present throughout the task to operate the video camera, but were only in tactile contact with the participants when placing stimuli. The camera was on a tripod on the table, positioned above the participants pointing down, in order to capture contact and motion between them.
In all cases, the dyads discussed aspects of the object and adjusted their descriptions—sometimes at great length. In addition, the stimuli had many different pieces and parts, each of which was described by the participants. Therefore, we collected a large number of tokens in response to a limited number of stimuli.
2.3 Stimuli
Proprioceptive constructions that involved whole objects or their size and shape were elicited by presenting three objects to the participants as tactile stimuli: a lollipop, a jack (the kind children use to play the game “jacks”), and a complex wooden toy involving movable arms, magnets, and magnetized pieces. The first two stimuli were presented both in a singular context (one object) and in a plural context (several of the same object in a row). These objects were chosen because they provide opportunities to convey information about motion and location events in protactile form, and they can be presented using real objects on a bounded flat surface placed next to the two participants.
2.4 Transcription
The descriptions of the stimuli were videotaped, labeled, and annotated using ELAN (Crasborn and Sloetjes 2008). Annotating one tier at a time, we identified the tasks being performed by each of the articulators in general terms in order to determine whether there was a clear division of labor among the articulators. The labeled tiers in our transcription system are provided in Table 1.
Table 1. Lexical and articulatory categories in transcription system
Lexical components of PT signs
1, Core, Structures expressing non-spatial events
2, Spatial, Structures expressing spatial events (Proprioceptive Constructions)
Articulatory components of Proprioceptive Constructions
1, Articulator 1, Dominant hand – Signer 1
2, Articulator 2, Dominant hand – Signer 2
3, Articulator 3, Non-dominant hand – Signer 1
4, Articulator 4, Non-dominant hand – Signer 2
5, contact space (-c), Locations on or near Signer 2’s body—“signing space” for protactile language.
6, air space (-a), The space on and around the body of Signer 1—“signing space” for visual signed languages.
In order to identify units of analysis, we assigned Signer 1 and Signer 2 independent tiers. Signer 1 is the principal conveyor of information. Signer 2 contributes to the articulation of the message but, in terms of information, is the principal receiver. A form could be produced by one or both signers. We also established one tier each for the four hands/arms of the two signers, assigning placeholders for the dominant hands of Signer 1 (A1) and Signer 2 (A2) and the non-dominant hands of Signer 1 (A3) and Signer 2 (A4). In visual signed languages, the dominant hand (H1) and the non-dominant hand (H2) are assigned complementary roles; H1 is more active than H2 (Battison, 1978). In protactile language, four anatomical structures are available for producing each sign, which we ultimately assign to roles based on the degree to which they are active (Figure 2). A1 is the most active and is assigned to the dominant hand of Signer 1, who is the principal conveyor of information. A2 is the next most active role and is assigned to the dominant hand of Signer 2, who is the principal receiver of information. A3 is assigned to the non-dominant hand of Signer 1. A4 has the least active role and is assigned to the non-dominant hand of Signer 2, being called on sporadically to produce certain components of signs, and otherwise being available for producing tactile backchanneling and tracking the movements of Signer 1's dominant hand (A1).
Figure 2: Sequence of forms used to describe the cylindrical portion of the lollipop stimulus
This article focuses on PCs, which are part of the spatial lexicon. However, as protactile phonology emerges, it should affect all areas of the lexicon. Therefore, we also track the spread of devices found in PCs to the core lexicon on each of the four articulators. In order to distinguish between core and spatial forms and whether they are produced in “contact space” or “air space” (granda and Nuccio 2018), we create four categories: core-(a)ir space, core-(c)ontact space, spatial-(a)ir space, and spatial-(c)ontact space. Contact space is defined as the space on the body of Signer 2, while air space is the space in front of, around, and on Signer 1’s body. Core-a are forms that use air space to represent non-spatial concepts, while core-c refers to core forms that are produced in contact space. Spatial-a refers to spatial forms produced in airspace, while spatial-c refers to spatial forms produced in contact space. This part of the transcription process allows us to keep track of the extent and nature of changes in the core lexicon, as compared with the spatial lexicon, which is our primary focus in this article.
3. Analyses
We performed four types of analyses in order to address which functions and structures are used in PCs. First, each articulator was assigned a number (1-4), which captures its level of linguistic activity by signer and by articulator; “1” is most active and “4” is least active (Section 2.4). Second, we describe the functional units involved in producing PCs (Section 3.1). The third step is to analyze the assignment of functions to each of the four hands (Section 3.2), and the last, qualitative analysis describes how each of the linguistic structures involved in producing a PC has been observed to generalize to the core lexicon independently of the others (Section 3.3).
3.1 Functional Units of Analysis
In this section we define the different types of communicative functions that occur within the larger PC unit produced by the four articulators. Each has a label that describes what it contributes to the PC. In a PC they appear in the following temporal order: initiate (I), proprioceptive object (PO), prompt to continue (PTC), and movement-contact type (MC). These units form a unified whole with rapid interchange between Signer 1 and Signer 2. We will refer to these units in the following sections. For reference, we provide definitions of the functional units that will be used throughout the rest of this paper in Table 2.
Table 2. Definitions of functional units in a proprioceptive construction
Functional Units of Proprioceptive Constructions
1, Initiate (I), A request for active involvement of S2 in co-producing a PC.
1a, --Initiate-touch, Instruction by S1 to S2 to foreground a new contact space by touching it
1b, --Initiate-grasp, Instruction by S1 to S2 by grasping the relevant body part
1c, --Initiate-prompt, Two-part sequence by S1 to S2 to foreground a particular body part as a PO
1ci, ----prompt-tap, Instruction by S1 to S2 to activate A2 for purposes of articulation, and/or that a prompt-PO is coming next
1cii, ----prompt-PO, Instruction by S1 to S2 to produce a particular handshape on A2
2, Proprioceptive Object (PO), Active articulatory space-type, selected in response to the type of Initiate produced.
3, Prompt to Continue (PTC), Keeps selected articulatory space active for further information to be added.
4, Movement Contact Type (MC), Tactile and proprioceptive cues that contain information about size, shape, location, or movement of an entity.
3.1.1 Proprioceptive objects (POs)
First, we describe the proprioceptive objects (PO), which we observe to be the anchor of the PC (see Figure 2b), in this case Signer 2’s dominant fist and arm (labeled A2), placed vertically; the PO is produced by A2 (i.e., Signer 2’s dominant hand). Effective introduction and use of POs is one of the main innovations of the PC structure. A PO has two functions: First, it conveys information about size, shape, and position. Second, in conveying that information, it delimits and activates a space on the body of Signer 2, on which Signer 1 can produce signs. POs are spatial forms, meaning that they use space to represent spatial relationships. While ASL also has spatial forms, POs are always produced in contact space, not air space. We therefore label POs “spatial-c” forms, where “spatial” refers to the area of the lexicon to which they belong, and “-c” refers to “contact space.” POs are a set of substantive elements, and in the data analyzed here, attested categories include: plane, incline, sphere, cylinder, individuated objects, and penetrable surface. In producing a PO, Signer 2 produces what might appear, at first glance, to be “handshapes” (in visual sign language terms). However, handshape inventories in visual signed languages are organized around contrasts that are often not perceptually salient via the tactile sense. Instead of feeling the external surface of handshapes, Signer 2 perceives shapes and their positioning via proprioception. The term PO captures the dual role of this unit, which both defines an articulatory space and assumes an articulatory shape.
3.1.2 Initiate
There are several ways of signaling which PO Signer 1 wants Signer 2 to select. Signed languages (visual and tactile) that employ handshapes, rather than POs, do not need conventional signs to request the active participation of Signer 2 in articulatory tasks; therefore, a new term is needed for this category of sign, which is a conventional signal produced by Signer 1 to elicit a PO from Signer 2. Since these forms initiate the entire PC, we call them initiate. initiate does not refer outside of the system; it establishes relations within it. It has a strictly language-internal function; therefore, we consider it a core-c form of the grammatical/functional variety.
We found three sub-categories of initiate, each one represented by a distinct form: initiate-touch, initiate-grasp, and initiate-prompt. In other words, there are three ways to initiate a proprioceptive construction and prompt Signer 2 to provide a PO: (1) by touching a surface on the body of Signer 2, thereby incorporating that surface into the active signing space, or activating it as an articulator (initiate-touch); (2) by grasping Signer 2’s hand or arm, thereby activating it as an articulator (initiate-grasp); or (3) by prompting Signer 2 to produce a form (initiate-prompt).
Initiate-touch activates some portion of Signer 2’s body when Signer 1 makes contact. That portion of Signer 2’s body is then activated in the production of a sign. In the 4-handed proprioceptive constructions we analyze here, the activated area functions both as a space for articulation and as a backgrounded, meaningful element. That backgrounded element is represented by a PO. Initiate-touch can only occur, then, when a PO has already been selected via initiate-grasp or initiate-prompt. In sum, initiate-touch works to foreground a new contact space against a previously established background.
For example, in Figure 2a-2b Signer 1 initiates the basic PO, which includes Signer 2’s fist and arm, placed vertically. You can see that this PO has been selected in Figure 2c. Next, Signer 1 traces Signer 2’s arm to represent the stick of the lollipop in Figure 2e. In the second case, this activates a smaller PO (only the arm of A2) within the previously established PO (the arm and fist of A2). When a smaller portion of a previously established PO is activated, we label that initiate-touch.
As stated above, Signer 1 has two additional options for initiating the PC. They can use initiate-grasp, which involves grasping some portion of Signer 2’s body and selecting a PO by moving Signer 2’s hand or arm into that shape. Signer 1 can also use initiate-prompt. Attested categories of initiate-prompt include i-prompt-tap and i-prompt-po. These two forms can work in tandem. For example, it is common for Signer 1 to tap Signer 2’s non-dominant hand (A4) twice before producing a shape. This shape is not the PO, but a request for Signer 2 to copy the shape, thereby producing the PO. We therefore label this “prompt-po.” The prompt-tap that sometimes precedes it is an instruction to Signer 2 to be prepared for a prompt-po. prompt-po can occur alone, while prompt-tap cannot.
3.1.3 Movement/Contact types (MCs)
POs are indeterminate until Signer 1 adds more information by tracing, gripping, and producing other forms of movement and contact on the PO. We therefore identified those conventional signals as Movement-Contact types (MCs). MCs act on the pre-determined PO or activate an additional PO, thereby backgrounding the previous PO. For example, A2 was used to represent the lollipop in Figure 2. At first, the fist of A2 was foregrounded by an MC to represent the candy portion of the lollipop. The arm was available, but backgrounded at that point. Next, an MC is used to foreground the arm (as a cylinder) as a PO, to represent the stick of the lollipop. In Figure 3 the same arm (this time as a plane) is used as a PO to represent a horizontal surface, where several lollipops are located (white circles). The locations themselves are represented by MCs.
MCs are substantive, spatial-c forms, which use contact space to represent spatial concepts. In Figure 3, Signer 1 produces what might be seen as handshapes as he makes contact with Signer 2, but the handshapes matter much less than the way the fingers or hand contact the PO. Attested MCs include: trace, grip, grip-twist, grip-wiggle, slide, penetration, tap, slap, press, scratch, and move. The portion of the form during which there is contact is always counted toward the duration of the MC, i.e., when A1 and A2 are touching, as shown with the white circles. If a “listening hand” (e.g. A4 in Figure 2a) is following the movements of A1 in a PC, then the movement from one MC to a subsequent MC is also included.
Figure 3: Signer 1 (right) produces multiple MCs on previously established PO
3.1.4 Prompt-to-Continue (PTC)
Finally, we observed that once a PO is established, Signer 1 can hold the PO in place during the subsequent MCs, until the final MC has been produced. Across many instantiations, this form seems to serve the function of maintaining the active, contact signing space generated by the PO (see A3 in Figure 2d). It tells Signer 2, “Leave this hand here. There is more to come.” Therefore, we call this category of forms prompt-to-continue (PTC). PTC maintains the active status of the PO by maintaining contact with the PO until the string of movement-contact types is completed. The end of this unit often co-occurs, or is closely linked, with the production of the final MC in the proprioceptive construction. Like initiate, prompt-to-continue has a strictly language-internal function; therefore, we consider it a core-c form of the grammatical/functional variety. Attested categories include hold and press.
3.2 Correspondence between Units and Articulators
In this section we describe, in quantitative terms, the systematic links between the functional units and articulatory units of a PC, illustrated in Figure 2a-2d, which we argue have been assigned to specific articulators among protactile signers. As stated above, the order of elements is consistent. When a new initiate occurs, its articulation always begins before the PO. When a new PTC occurs, its articulation always begins after the PO has been established. Finally, the articulation of the MC always begins after all of the other components of the PC have been established, i.e. the MC is last in the sequence.
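The fixed ordering of functional units just described can be expressed as a simple well-formedness check over annotation onsets. The sketch below is our own illustration, not part of the transcription system; the tuple format is a hypothetical simplification of an ELAN export, and treating PTC as optional is an assumption on our part.

```python
# Sketch: check that the onsets of functional units in a PC follow the
# fixed order reported in the text: Initiate begins before the PO, PTC
# begins after the PO is established, and MC begins last. Each annotation
# is a hypothetical (unit_label, onset_ms) pair.

ORDER = ["I", "PO", "PTC", "MC"]

def is_well_ordered(pc_annotations):
    """pc_annotations: list of (unit_label, onset_ms) tuples for one PC."""
    onsets = {label: onset for label, onset in pc_annotations}
    # Keep only the units present (we assume PTC may be absent),
    # then require strictly increasing onsets in the canonical order.
    present = [u for u in ORDER if u in onsets]
    return all(onsets[a] < onsets[b] for a, b in zip(present, present[1:]))

# A sequence like the lollipop description in Figure 2:
assert is_well_ordered([("I", 0), ("PO", 420), ("PTC", 610), ("MC", 900)])
# An ill-formed sequence (MC begins before the PO is established):
assert not is_well_ordered([("I", 0), ("MC", 200), ("PO", 400)])
```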
In Figure 2a-2b, Signer 1 requests Signer 2’s active participation by grasping her dominant hand (A2) with his own dominant hand (A1) and moving it toward a vertical position; this is initiate-grasp. In Figure 2c, Signer 2 is responsive to this request and repositions her arm, which is her dominant articulator (A2). In Figure 2d, Signer 1 holds Signer 2's arm in place; this is prompt-to-continue (PTC). Finally, in Figure 2e, Signer 1 traces Signer 2's arm to highlight its cylindrical shape. This sequence comprises the second PC, with the meaning “cylinder.” Together, “cylinder” + “sphere” (not pictured here) describe the entire lollipop stimulus. All protactile signers produced a construction like this to represent the lollipop. Each description had two parts: a cylinder (to represent the stick) and a sphere (to represent the candy).
After determining the order and roles for each sub-unit of the PC, we analyzed the consistency with which specific functions were ascribed to specific articulators. The frequency with which each individual’s articulators map to functional roles is represented in Table 3, along with standard error calculations. Proportions for each articulator (A1–A4) are based on the total for that function (I, PO, PTC, MC). The “total” proportion of each function is based on the grand total of productions for each participant. Results from Mann-Whitney U comparisons of rankings for each functional unit show that: for Initiate, A1 values are significantly higher than A3 values (U=6; z-score 2.29, p<.05); for PO, A2 values are significantly higher than A4 values (U=6; z-score 2.29, p<.05); for PTC, A3 values are significantly higher than A1 values (U=0; z-score 3.07, p<.01); and for MC, A1 values are significantly higher than A3 values (U=0; z-score 3.07, p<.01).
Table 3: Proportion of articulatory-functional alignment by individual.*
Function: Initiate | Proprioceptive Object | Prompt-to-Continue | Movement-Contact Type
Columns per function: A1, A3, I-Total | A2, A4, PO-Total | A1, A3, PTC-Total | A1, A3, MC-Total
Participant 1, 0.48, 0.52, 0.34, 0.74, 0.26, 0.18, 0.25, 0.75, 0.10, 0.68, 0.32, 0.38
Participant 2, 0.73, 0.27, 0.27, 0.86, 0.14, 0.20, 0.32, 0.68, 0.13, 0.78, 0.22, 0.40
Participant 3, 0.48, 0.52, 0.29, 1.00, 0.00, 0.25, 0.00, 1.00, 0.08, 0.63, 0.37, 0.38
Participant 4, 0.66, 0.34, 0.21, 0.95, 0.05, 0.17, 0.36, 0.64, 0.19, 0.86, 0.14, 0.43
Participant 5, 0.52, 0.48, 0.20, 0.98, 0.02, 0.14, 0.26, 0.74, 0.16, 0.83, 0.18, 0.49
Participant 6, 0.81, 0.19, 0.29, 0.00, 0.00, 0.00, 0.13, 0.87, 0.05, 0.99, 0.01, 0.65
Participant 7, 0.58, 0.42, 0.34, 0.78, 0.22, 0.16, 0.18, 0.82, 0.14, 0.82, 0.18, 0.35
Average: All Participants, 0.60, 0.40, 0.28, 0.85, 0.15, 0.15, 0.26, 0.74, 0.13, 0.82, 0.18, 0.44
Standard error, 0.05, 0.05, 0.02, 0.13, 0.04, 0.00, 0.05, 0.05, 0.02, 0.04, 0.04, 0.04
*One of the (male) participants responded to only one of the three stimuli.
** initiate-prompt-tap is primarily produced with A3. As shown in Table 3, P1, P3, and P5 produce initiate-prompt-tap frequently, and therefore share initiates between A1 and A3. P2, P4, and P6 do not produce many initiate-prompt-taps, which increases their use of A1, where the other types of initiate are produced.
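The Mann-Whitney U comparisons reported above can be approximated with a short script. This is an illustrative sketch, not the authors’ code: it uses the Initiate proportions for A1 and A3 from Table 3 and the normal approximation without tie correction, so the |z| it produces (≈2.36) differs slightly from the published 2.29, which presumably reflects a tie correction.

```python
import math

def mann_whitney_u(x, y):
    """Return (U, z) for two independent samples, using average ranks
    for ties and the normal approximation without tie correction."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over a block of tied values.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[:n1])
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)          # report the smaller U
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, (u - mu) / sigma

# Initiate proportions for A1 vs. A3, by participant (Table 3):
a1 = [0.48, 0.73, 0.48, 0.66, 0.52, 0.81, 0.58]
a3 = [0.52, 0.27, 0.52, 0.34, 0.48, 0.19, 0.42]
u, z = mann_whitney_u(a1, a3)
print(u, round(abs(z), 2))  # U = 6.0, |z| ≈ 2.36
```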
We calculated the proportions with which each participant assigned each of the PC roles (I, PO, PTC, and MC) to each articulator, and then averaged the individual averages. We found that initiate was produced most often with A1 (60% of 623 tokens), with A3 in all other cases (40%). PO was produced most often with A2 (85% of 335 tokens), with A4 in all other cases (15%). PTC was produced most often with A3 (74% of 280 tokens), with A1 in all other cases (26%). MC was produced most often with A1 (82% of 966 tokens), with A3 in all other cases (18%).
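The average-of-averages procedure can be sketched as follows, using the per-participant Initiate proportions for A1 from Table 3. This is our own minimal illustration; the result comes out close to the published group figures (0.60 with a standard error of 0.05), with the small difference attributable to rounding in the table entries.

```python
import math

# Per-participant proportion of Initiate tokens produced with A1 (Table 3).
a1_initiate = [0.48, 0.73, 0.48, 0.66, 0.52, 0.81, 0.58]

n = len(a1_initiate)
mean = sum(a1_initiate) / n            # average of the individual averages
sd = math.sqrt(sum((p - mean) ** 2 for p in a1_initiate) / (n - 1))
se = sd / math.sqrt(n)                 # standard error of the mean

print(round(mean, 2), round(se, 2))    # 0.61 0.05
```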
Figure 4: Proportion of articulatory-functional alignment by group.
As Figure 4 shows, for PO, PTC, and MC, there is a clear division of labor: PO is most often assigned to A2, PTC to A3, and MC to A1. While there is a preference for initiate to be produced with A1, the pattern is not strong relative to the other categories. The use of the articulators for each of the functional units is consistent across participants for PO, PTC, and MC. We see some variation, however, in the proportion of use of A1 and A3 for initiate. One possible reason for this is that the different types of initiates are assigned to different articulators.
To investigate this possibility, we analyzed the sub-types of initiate (I). Again, in order to correct for differences in token count among participants, we calculated the proportions with which each participant assigned each of the sub-types to each articulator, and then averaged the individual averages to reach the analysis presented in Figure 5 below.
Figure 5: Percentages of sub-Initiate forms produced by A1 and A3
i-touch was produced most often by A1 (87% of 181 tokens). However, i-grasp was not clearly assigned to one articulator: A3 produced 54% of 372 tokens, and the remaining tokens were produced by A1. i-prompt was more often produced by A1 (59% of 61 tokens), but A3 was not far behind (41%). While i-grasp may be equally distributed across articulators, we suspected that i-prompt should be analyzed further into its two sub-types: i-prompt-tap and i-prompt-po, as shown in Figure 6.
Figure 6: Percentages of sub-sub-Initiate forms produced by A1 and A3
i-prompt-po was most often produced by A1 (89% of 28 tokens), while the remaining tokens were produced by A3. i-prompt-tap, in contrast, was most often produced by A3 (76% of 33 tokens), while the remaining tokens were produced by A1. This is a small number of tokens; therefore, we take these calculations to be provisional. Nevertheless, a strong pattern presents itself here. Apart from i-grasp, which appears to be distributed almost equally across A1 and A3, each linguistic task has been assigned to a specific articulator.
In sum, analysis of these data suggests that among protactile signers, specific linguistic functions are assigned to specific articulators and distributed over the dyad in PCs. These relations are becoming conventionalized, allowing two signers to coordinate four articulators quickly and efficiently. The use of Signer 2’s hands and arms as part of the active articulatory apparatus differs from both visual and tactile signed languages, which use two articulators. This study, therefore, provides new insights into how emergent phonological systems can become conventionalized, and broadens our understanding of the flexibility and potential of phonology as it is manifested in different communication modalities.
3.3 Generalizing PC devices
We performed one additional analysis in order to determine if the innovations found in PCs are used elsewhere in the lexicon. We hypothesized that these patterns are affecting spatial forms at a faster rate than core forms. In order to test this secondary hypothesis, we assigned each annotation to one of four categories: spatial-a, spatial-c, core-a, and core-c. (Recall that “-c” indicates that the form was produced in contact space and “–a” indicates that the form was produced in air space.) In these data, a total of 1,450 spatial forms were produced. 96% of those forms were produced in contact space. A total of 1,419 core forms were produced, and of those, 62% were produced in contact space (Figure 7).
Figure 7: Percentages of spatial and core forms produced in contact vs. air space
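The four-way tally behind these percentages can be sketched as follows. The annotation records and the helper `pct_contact` are hypothetical stand-ins for the actual ELAN data, shown only to make the classification scheme concrete.

```python
from collections import Counter

# Each annotation is tagged with lexicon ("spatial" or "core") and space
# ("c" for contact space, "a" for air space). These records are invented
# for illustration; the real counts come from the ELAN annotation files.
annotations = [
    ("spatial", "c"), ("spatial", "c"), ("spatial", "a"),
    ("core", "c"), ("core", "a"), ("core", "c"),
]

counts = Counter(annotations)

def pct_contact(lexicon):
    """Percentage of a lexicon's forms produced in contact space."""
    total = sum(n for (lex, _), n in counts.items() if lex == lexicon)
    return 100 * counts[(lexicon, "c")] / total

print(round(pct_contact("spatial"), 1))  # 66.7 for these toy records
# With the full data set, the same tally yields 96% for spatial forms
# (of 1,450 tokens) and 62% for core forms (of 1,419 tokens).
```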
As stated in the Introduction, the foreign component of the lexicon has not yet shown much modification in PCs, perhaps because both the movement and the location are predictable (redundant) in fingerspelling. We therefore looked to the core lexicon for examples of the use of I, PO, PTC, and MC. At this stage of the work we make no claims about the direction of the generalization. It could be that PCs occur first and become productive in the core lexicon later, or vice versa.
There are two routes into the core protactile lexicon. First, core forms in ASL, which are conventionally produced in air space, can be borrowed into protactile language by simply changing their place of articulation to contact space, a device that is obligatory in PCs via proprioceptive objects. In the core, contact space is not on a proprioceptive object but somewhere on Signer 2’s body that provides tactile grounding, even if the place of contact has no particular meaning as it does in a PO. In these data, there are several ways this is accomplished (described below). We think that as protactile language develops, at least some of these patterns will become more widely conventionalized across groups of protactile signers. The second route for core protactile lexical items is through the spatial lexicon itself. As stated above, the spatial lexicon—in both visual and tactile signed languages—contains constructions where all parameters of the sign are (or can be) meaningful. Spatial constructions can enter the core lexicon by abstracting away from the details of the description. We expect both of these processes, given the right communicative ecology, to play out in protactile language. The analysis presented here, then, offers some insight into some possible trajectories for language emergence.
In this study, the most common pattern in transferring ASL core lexical items to contact space involves ASL handshapes that are articulated by making contact with the dominant hand of Signer 2 in contact space, instead of with the non-dominant hand of Signer 1 in air space. For example, once the lollipop has been described, the signer establishes locations in contact space (on the palm of Signer 2) to represent the relative locations of multiple lollipops placed on the table. While the PO structure (the palm) was still active, Signer 1 produced an ASL “Y” handshape, as in the ASL sign “same,” as shown on the left side of Figure 8 (ASL Signbank, 2020); however, it is produced by making contact with the PO, as shown on the right side of Figure 8. Therefore, while same is a core lexical form in ASL, here the ASL handshape has been transferred to contact space using conventional PC devices, in this case a PO plus MC combination. Where ASL handshapes are transferred to contact space in this manner, PC devices are operating beyond the spatial lexicon.
Figure 8: Handshape transferred to contact space via PC devices and conventions
Spatial constructions also enter the core lexicon by abstracting away from the details of the description. For example, one protactile signer described 5 jacks spread out on the table by producing mc-press two times on po-plane and then adding the number “5.” We have observed that mc-press is often used to describe the location of a referent in relation to another referent (e.g. “One jack is here [mc-press] and another is here [mc-press]”). However, in this case, both the location of the referents and the number of referents are abstracted away from the details of the description to mean something like: The jacks are distributed in space, not: this jack is here and this jack is here, and so on, until the locations of each of the 5 jacks have been described. The main cues that distinguish these two meanings seem to be: (1) the speed of production; (2) the presence vs. absence of pauses between instances of mc-press; and (3) the presence vs. absence of a lengthened mc-press, co-articulated with all other instances, to mark the origo, or position from which reference is calculated. This suggests that core protactile lexical items, in addition to entering via the transfer of ASL core forms into contact space, are also entering via the protactile spatial lexicon. In this case, the handshape used in the ASL demonstrative “this,” shown on the left side of Figure 9 (ASL Signbank 2020), is transferred into contact space by making contact with the PO (Figure 9, right).
Figure 9: Handshape transferred to contact space using PC devices and conventions
4. Discussion
In recent work on the phonology of emerging signed languages, Brentari has argued that minimal pairs and phonological rules are insufficient criteria for deeming a phenomenon to be phonological (Brentari et al. 2012, Coppola and Brentari 2014, Brentari et al. 2017). Rather, phonological patterns in emergent languages can be grasped by way of more basic principles, which organize the system slowly in historical time during conventionalization.
One way to think about innovations in Protactile is from the perspectives of two general pressures on a phonological system (Brentari, 2019). The first is the pressure of efficiency, common to both signed and spoken languages, which includes how the units of the system are organized to maximize the information conveyed, as well as ease of production, ease of perception, and the way that the strengths of the particular communication modalities affect it (auditory-aural; visual-gestural; tactile-proprioceptive). Efficiency includes principles of redundancy and well-formedness, which we see in the PC forms we have analyzed.
The internal structure of the protactile elements described here utilizes redundancy, since the space introduced in the PO must be the same one elaborated on in the MC unit, and the two must occur in that order. The signers know what is coming next in a PC because the order is fixed. It is clear that principles of well-formedness are at work because protactile signers correct learners of the system when they produce incorrect forms. The inventory of values for each form has definable boundaries that allow it to be interpreted as well-formed or not.
The second pressure is to maximize the affordances of iconicity, which all languages exploit, but which sign languages exploit to a greater extent. Since relations of resemblance will vary as modes of perception vary, we would expect a language used by protactile perceivers to exhibit a kind of iconicity grounded in non-visual modes of experience. Given that protactile signers have experience with signed languages, one might also expect that they would have a high “iconicity threshold” for protactile language; that is, they want their language to be as iconic as possible, because that is what ASL offers in the visual modality. The way that types of iconicity affect the form–meaning correspondences of units in protactile language is an area that can contribute to our understanding of language more generally. As we see in the forms we have discussed in this paper, tactile and proprioceptive iconicity has started to replace visual iconicity in protactile language.
To efficiency and iconicity, we add a third pressure: the necessity of establishing and maintaining deictic relations (Bühler 2001 [1934], Hanks 1990). Describing and discussing shared objects of attention in protactile language requires deictic reference, and deictic reference requires the ability to inhabit a shared and reciprocal zero-point, or “origo,” from which reference can be computed. As stated by Hanks, “The question for deixis is not ‘Where is the referent?’ but ‘How do we identify the referent in relation to us?’” (Hanks, 2009:12). Protactile signers answer that question in ways that non-protactile signers would not think to (Edwards 2017). The ways that different deictic relations, grounded in different forms of spatial cognition, affect form-meaning correspondences of units in signed languages is, like iconicity, an area that can contribute to our understanding of language more generally. Iconicity and indexicality are sign-object relations (Peirce 1955/1940 [1893-1910]), which interact with, and exert pressure on, the internal organization of grammatical systems in signed and spoken languages (signed languages: Brentari 2019, Dudis 2004, Horton 2018, Shaw and Delaporte 2015, Hwang et al. 2017, Padden et al. 2013; spoken languages: Hanks 1990, Inoue 2004, Kockelman 2003, Sicoli 2014, Silverstein 1976).
The consistent assignation of a particular linguistic function to a particular articulator, as well as the constraints on how information can be packaged and in what order, suggest that strictly linguistic principles are being applied as well, generating patterns of distribution, discreteness, and productivity of form, which are becoming conventionalized across a group of protactile signers. This complex of processes works together to link form with meaning in increasingly stable ways.
Coppola and Brentari (2014), building on recent theories of language emergence, have proposed three stages in the emergence of phonology:
Stage 1: Increase Contrasts: Recognize particular features as a form that can be manipulated to create different meanings or used for grammatical purposes.
Stage 2: Create the Opposition: Distinguish the distribution of two features or feature values in one’s system, associating one feature with one meaning and the other with another meaning. This association does not have to be complete or absolute.
Stage 3: Apply the Opposition Productively: Apply the feature or class of features productively to new situations where the same opposition is needed.
Using contact space for meaning satisfies Stage 1. Creating opposition among the four articulators satisfies Stage 2. And the generalization of I, PO, PTC, and MC to the core satisfies Stage 3. As discussed above, the third stage is not yet in full swing. Observing growth in the productive application of the oppositions described here will offer unique opportunities to test the model put forth by Coppola and Brentari (2014) in the tactile modality.
5. Conclusions
In this paper we have shown that an important step in the conventionalization of a new phonological system is underway. This provisionally suggests that the tactile/proprioceptive modality can sustain language. The case we report is similar to, and different from, cases of emerging sign languages in Nicaragua (Kegl and Iwata, 1989; Senghas and Coppola, 2001) and Israel (Sandler et al. 2005). Participants in the present study acquired ASL as children. As they became blind and ASL became difficult to use, individuals compensated in idiosyncratic ways (Edwards 2014). This led to a splintering of ASL into simplified, idiosyncratic systems, similar to homesign systems, in that they were developed by individuals who routinely communicated in non-reciprocal contexts, where their systems were not used by those communicating with them (Goldin-Meadow and Feldman 1977). When these idiosyncratic systems came together in reciprocal communication contexts (i.e. protactile contexts), the linguistic patterns we describe began to cohere. Similarly, when homesign systems come together in reciprocal visual communication contexts, languages emerge (Goldin-Meadow and Brentari 2017:29).
One significant difference is that the innovations described in this paper were initiated after participants acquired a first language. For reasons discussed elsewhere (Clark 2017, granda and Nuccio 2018), protactile signers are aiming for the maximization of affordances in the tactile/proprioceptive modality over and against the preservation of ASL grammar. Whatever is left of ASL is being sidelined, functioning mostly as an archival lexicon. Signs are retrievable from the ASL lexicon insofar as they can be transferred to contact space without violating emerging protactile conventions.
The conventions we have described in this paper align in several ways with recent findings in a growing body of research on DeafBlind language use and tactile sign languages (Willoughby et al. 2018). First, when compared to visual sign languages, the simultaneous packaging of classifier predication is more sequentialized in protactile language at the phonological level. In other words, the components of the PC unfold—as a rule— in sequence. This finding supports Checchetto et al.’s (2018) prediction that LISt will tend toward sequentialization relative to the simultaneity of visual signed languages. Like Checchetto et al. (2018), we also note a general avoidance of the face in the production of protactile signs.
This research contributes new findings as well. In particular, protactile signers have a clear preference for contact space over air space, as demonstrated in Section 3.3. The shift to contact space is triggering radical changes in the phonological organization of protactile language. In this paper, we have argued that an early stage in that process is the consistent assignation of specific linguistic tasks to the four articulators available for producing PCs.
In line with studies of language emergence, the results of this research support the idea that the human drive to create language is resilient, supported by whatever modality can sustain it. Our findings also point to the fact that iconic and indexical pressures can exert palpable effects on the emergent structure of specific languages. Where the drive to create language and the drive to use language align, grammar emerges.
Note: This research was supported by an NSF research grant (BCS-1651100) awarded to Edwards and Brentari. We wish to thank Jelica Nuccio, aj granda, Vince Nuccio and the many members of the Seattle DeafBlind community who contributed to, and participated in, this research; John Lee Clark and Susan Goldin-Meadow for comments on the manuscript; our research staff: Halene Anderson, Joanna Ball Smith, Oscar Chacon, Abby Clements, Eddie Martinez, Lilia McGee-Harris, Jelica Nuccio, and Yashaira Romilus for their analyses and insights; and Paul Dudis and the Department of Linguistics at Gallaudet University, Diane Lillo-Martin, and the anonymous reviewers for invaluable feedback.
References:
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring, MD: Linstock Press.
Baus, C., E. Gutiérrez-Sigut, J. Quer, and M. Carreiras. 2008. Lexical access in Catalan Signed Language production. Cognition, 108(3), 856–865.
Boyes Braem, P. 1981. Features of the handshape in American Sign Language. Berkeley: University of California dissertation.
Brentari, D. 1998. A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane. 2019. Sign Language Phonology. Cambridge, UK: Cambridge University Press.
Brentari, Diane and Carol A. Padden. 2001. Native and foreign vocabulary in American Sign Language: A lexicon with multiple origins. Foreign vocabulary in sign languages: A Cross-linguistic investigation of word formation, ed. by D. Brentari. Mahwah, NJ: Lawrence Erlbaum.
Brentari, Diane, Marie Coppola, Laura Mazzoni, and Susan Goldin-Meadow. 2012. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory 30.1-31.
Brentari, Diane and Marie Coppola. 2013. What sign language creation teaches us about language. Wiley Interdisciplinary Reviews: WIREs: Cognitive Science 4.201-11.
Brentari, Diane, A. Di Renzo, J. Keane, and V. Volterra. 2015. Cognitive, cultural, and linguistic sources of a handshape distinction expressing agentivity. Topics in Cognitive Science 7.95-123.
Brentari, Diane, Marie Coppola, Pyeong Whan Cho, and Ann Senghas. 2017. Handshape complexity as a precursor to phonology: Variation, emergence, and acquisition. Language Acquisition 24. 283-306.
Bühler, Karl. 2001 [1934]. Theory of language: The representational function of language. Amsterdam; Philadelphia, PA: John Benjamins.
Carreiras, M., E. Gutiérrez-Sigut, S. Baquero, and D. Corina. 2008. Lexical processing in Spanish Signed Language (LSE). Journal of Memory and Language, 58(1), 100–122.
Caselli, N. K., and A.M. Cohen-Goldberg. 2014. Lexical access in sign language: a computational model. Frontiers in Psychology 5.428.
Checchetto, Alessandra, Carlo Geraci, Carlo Cecchetto, and Sandro Zucchi. 2018. The language instinct in extreme circumstances: The transition to tactile Italian Sign Language (LISt) by Deafblind signers. Glossa 3.1-28.
Clark, John Lee. 2017. Distantism. https://johnleeclark.tumblr.com/.
Clark, John Lee. 2020. The raft. Manuscript in preparation.
Collins, Steven. 2004. Adverbial morphemes in Tactile American Sign Language. Doctoral dissertation, Graduate College of Union Institute and University.
Collins, Steven and Karen Petronio. 1998. What happens in Tactile ASL? Pinky extension and eye gaze: Language use in Deaf communities, ed. by C. Lucas, 18-37. Washington, DC: Gallaudet University Press.
Coppola, Marie and Diane Brentari. 2014. From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner. Frontiers in Psychology 5. doi:10.3389/fpsyg.2014.00830
Corina, D. P., and K. Emmorey. 1993. Lexical priming in American sign language. Poster presented at the 34th annual meeting of the Psychonomics Society, Washington, DC.
Corina, D. P., and U. Hildebrandt. 2002. Psycholinguistic investigations of phonological structure in ASL. In R. P. Meier, K. Cormier, and D. Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 88–111. Cambridge, United Kingdom: Cambridge University Press.
Crasborn, Onno and Han Sloetjes. 2008. Enhanced ELAN functionality for sign language corpora. Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages (at LREC 2008), 39–43. Online: http://www.lrec-conf.org/proceedings/lrec2008/.
Dudis, Paul G. 2004. Body partitioning and real-space blends. Cognitive Linguistics 15.223-38.
Edwards, Terra. 2014. Language emergence in the Seattle DeafBlind Community. Doctoral dissertation, The University of California, Berkeley.
Edwards, Terra. 2017. Sign creation in the Seattle DeafBlind community: A Triumphant story about the regeneration of obviousness. Gesture 16.304-27.
Goldin-Meadow, Susan and Diane Brentari. 2017. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 40. doi:10.1017/S0140525X15001247.
Goldin-Meadow, Susan and Heidi Feldman. 1977. The Development of Language-Like Communication Without a Language Model. Science 197.22-24.
granda, aj and Jelica Nuccio. 2018. Protactile Principles. Tactile Communications. https://DeafBlind.tactilecommunications.org/ProTactilePrinciples.
Gutiérrez, E., O. Müller, C. Baus, and M. Carreiras. 2012. Electrophysiological evidence of phonological priming in Spanish Sign Language lexical access. Neuropsychologia 50.1335-46.
Hanks, William F. 1990. Referential practice: Language and lived space among the Maya. Chicago: University of Chicago Press.
Hanks, William F. 2009. Fieldwork on deixis. Journal of Pragmatics 41.10-24.
Horton, Laura. 2018. Conventionalization of shared homesign systems in Guatemala: Social, lexical, and morphophonological dimensions. Doctoral dissertation. University of Chicago.
Hochgesang, Julie A., Onno Crasborn, and Diane Lillo-Martin. 2020. ASL Signbank. New Haven, CT: Haskins Lab, Yale University. https://aslsignbank.haskins.yale.edu/
Hwang, So-One, Nozomi Tomita, Hope Morgan, Rabia Ergin, Deniz Ilkbasaran, Sharon Seegers, Ryan Lepic and Carol Padden. 2017. Of the body and the hands: patterned iconicity for semantic categories. Language and Cognition 9.573-602.
Inoue, Miyako. 2004. What does language remember?: Indexical inversion and the naturalized history of Japanese women. Journal of Linguistic Anthropology 14.39-56.
Itô, Junko, and Armin Mester. 1995a. Japanese phonology. In J. Goldsmith (ed.), Handbook of phonological theory, 817-838. Oxford/New York: Blackwell.
Itô, Junko, and Armin Mester. 1995b. The core-periphery structure of the lexicon and constraints on reranking. In J. Beckman, L. Walsh Dickey and S. Urbanczyk (eds.), University of Massachusetts occasional papers 18: Papers in Optimality Theory, 181-209. Amherst, MA: GLSA (Graduate Linguistic Students Association), University of Massachusetts.
Iwasaki, Shimako, Meredith Bartlett, Howard Manns, and Louisa Willoughby. 2018. The challenges of multimodality and multisensorality: Methodological issues in analyzing tactile signed interaction. Journal of Pragmatics 143.215-27.
Keane, Jon and Diane Brentari. 2016. Fingerspelling: Beyond Handshape Sequences. In M. Marschark and P. Siple, eds., The Oxford Handbook of Deaf Studies in Language: Research, Policy, and Practice, 146-160. NY/Oxford: Oxford University Press.
Kegl, Judy and Gayla Iwata. 1989. Lenguaje de Signos Nicaragüense: A pidgin sheds light on the “creole”? ASL. In M. Carlson , S. DeLancey, S. Gildea, D. Payne, A. Saxena, eds., Proceedings of the Fourth Meetings of the Pacific Linguistics Conference, 266-294. Eugene, Oregon: Department of Linguistics, University of Oregon.
Kockelman, Paul. 2003. The Meanings of interjections in Q’eqchi’ Maya. Current Anthropology 44.467-90.
Kooij, E. van der, and H. van der Hulst. 2005. On the internal and external organization of sign segments: Some modality specific properties of sign segments in NGT. In M. van Oostendorp and J. van de Weijer, eds., The internal organization of phonological segments. Studies in Generative Grammar 77, 153-180. Berlin/New York: Mouton de Gruyter.
Liddell, S. 1984. THINK and BELIEVE: Sequentiality in American Sign Language. Language 60,372–392.
Liddell, S. and R. E. Johnson. 1989. American Sign Language: The phonological base. Sign Language Studies 64. 197–277.
Marentette, P., and R. Mayberry. 2000. Principles for an emerging phonological system: a case study of early American Sign Language acquisition. In C. Chamberlain, J. Morford and R. Mayberry, eds., Language Acquisition by Eye, 71-90. Mahwah, NJ: Lawrence Erlbaum Associates.
Meier, R., C. Mauk, A. Cheek, and C. Moreland. 2008. The Form of Children's Early Signs: Iconic or Motoric Determinants. Language Learning and Development 4(1). 63–98
Mesch, Johanna. 2001. Tactile sign language: Turn taking and questions in signed conversations of Deaf-blind people. Hamburg: Signum.
Mesch, Johanna. 2013. Tactile signing with one-handed perception. Sign Language Studies 13.238-63.
Mesch, Johanna, Eli Raanes and Lindsay Ferrara. 2015. Co-forming real space blends in tactile signed language dialogues. Cognitive Linguistics 26.261-287.
Padden, Carol, Irit Meir, So-One Hwang, Ryan Lepic, Sharon Seegers and Tory Sampson. 2013. Patterned iconicity in sign language lexicons. Gesture 13.287-308.
Peirce, Charles Sanders. 1955/1940 [1893-1910]. Logic as semiotic: The theory of signs. Philosophical Writings of Peirce, ed. by J. Buchler. New York: Dover.
Petitto, L.A., R. Zatorre, K. Gauna, E.J. Nikelski, D. Dostie, and A. Evans. 2000. Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language. Proceedings of the National Academy of Sciences 97(25).13961-66.
Petronio, Karen and Valerie Dively. 2006. YES, #NO, Visibility, and variation in ASL and Tactile ASL. Sign Language Studies 7.57-98.
Quinto-Pozos, David. 2002. Deictic points in the visual-gestural and tactile-gestural modalities. Modality and structure in signed and spoken languages, ed. by R.P. Meier, K. Cormier and D. Quinto-Pozos. Cambridge: Cambridge University Press.
Reed, Charlotte M., Lorraine A. Delhorne, Nathaniel I. Durlach and Susan D. Fischer. 1995. A study of the tactual reception of Sign Language. Journal of Speech and Hearing Research 38.
Sandler, W 1989. Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy, Irit Meir, Carol Padden, and Mark Aronoff. 2005. The emergence of grammar: Systematic structure in a new language. Proceedings of the National Academy of Sciences of the United States of America 102.2661-65.
Sandler, W., and D. Lillo-Martin. 2006. Sign Language and Linguistic Universals. Cambridge/New York: Cambridge University Press.
Senghas, Ann and Marie Coppola. 2001. Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science 12.323-328.
Shaw, Emily and Yves Delaporte. 2015. A historical and etymological dictionary of American Sign Language. Washington, DC: Gallaudet University Press.
Sicoli, Mark. 2014. Ideophones, rhemes, interpretants. Pragmatics and Society 5.445-54.
Silverstein, Michael. 1976. Shifters, linguistic categories, and cultural description. Meaning in anthropology, ed. by K. Basso and D.B.A. Selby, 11-55. Albuquerque, NM: University of New Mexico Press.
Stokoe, William. 1960. Sign language structure: An outline of the visual communication systems of the American Deaf. Buffalo, NY: University of Buffalo (Occasional Papers 8).
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American Sign Language. Doctoral dissertation, University of California, San Diego.
Thompson, R., K. Emmorey, and T.H. Gollan. 2005. “Tip of the fingers” experiences by deaf signers: Insights into the organization of a sign-based lexicon. Psychological Science 16(11), 856–860.
Wilcox, Sherman. 1992. The phonetics of fingerspelling. Philadelphia, PA: John Benjamins.
Willoughby, Louisa, Shimako Iwasaki, Meredith Bartlett, and Howard Manns. 2018. Tactile sign languages. Handbook of pragmatics, ed. by J.-O. Östman and J. Verschueren, 239-58. Amsterdam: John Benjamins.
Zwitserlood, Inge. 2012. Classifiers. Handbook of Sign Language Linguistics, ed. by R. Pfau, M. Steinbach, and B. Woll, 158-185. Berlin: Mouton de Gruyter.