Publications

  • Bosker, H. R. (2016). Our own speech rate influences speech perception. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 227-231).

    Abstract

    During conversation, spoken utterances occur in rich acoustic contexts, including speech produced by our interlocutor(s) and speech we produced ourselves. Prosodic characteristics of the acoustic context have been known to influence speech perception in a contrastive fashion: for instance, a vowel presented in a fast context is perceived to have a longer duration than the same vowel in a slow context. Given the ubiquity of the sound of our own voice, it may be that our own speech rate - a common source of acoustic context - also influences our perception of the speech of others. Two experiments were designed to test this hypothesis. Experiment 1 replicated earlier contextual rate effects by showing that hearing pre-recorded fast or slow context sentences alters the perception of ambiguous Dutch target words. Experiment 2 then extended this finding by showing that talking at a fast or slow rate prior to the presentation of the target words also altered the perception of those words. These results suggest that between-talker variation in speech rate production may induce between-talker variation in speech perception, thus potentially explaining why interlocutors tend to converge on speech rate in dialogue settings.

    Additional information

    pdf via conference website
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). The perception of fluency in native and non-native speech. Language Learning, 64, 579-614. doi:10.1111/lang.12067.

    Abstract

    Whereas native speakers supposedly are fluent by default, non-native speakers often have to strive hard to achieve a native-like fluency level. However, disfluencies (such as pauses, fillers, repairs, etc.) occur in both native and non-native speech and it is as yet unclear how fluency raters weigh the fluency characteristics of native and non-native speech. Two rating experiments compared the way raters assess the fluency of native and non-native speech. The fluency characteristics of native and non-native speech were controlled by using phonetic manipulations in pause (Experiment 1) and speed characteristics (Experiment 2). The results show that the ratings on manipulated native and non-native speech were affected in a similar fashion. This suggests that there is no difference in the way listeners weigh the fluency characteristics of native and non-native speakers.
  • Bosker, H. R. (2014). The processing and evaluation of fluency in native and non-native speech. PhD Thesis, Utrecht University, Utrecht.

    Abstract

    Disfluency is a common characteristic of spontaneously produced speech. Disfluencies (e.g., silent pauses, filled pauses [uh’s and uhm’s], corrections, repetitions, etc.) occur in both native and non-native speech. Claims from the evaluative and cognitive approaches to fluency appear to contradict each other. On the one hand, the evaluative approach shows that non-native disfluencies have a negative effect on listeners’ subjective fluency impressions. On the other hand, the cognitive approach reports beneficial effects of native disfluencies on cognitive processes involved in speech comprehension, such as prediction and attention.

    This dissertation aims to resolve this apparent contradiction by combining the evaluative and cognitive approach. The reported studies target both the evaluation (Chapters 2 and 3) and the processing of fluency (Chapters 4 and 5) in native and non-native speech. Thus, it provides an integrative account of native and non-native fluency perception, informative to both language testing practice and cognitive psycholinguists. The proposed account of fluency perception testifies to the notion that speech performance matters: communication through spoken language does not only depend on what is said, but also on how it is said and by whom.
  • Böttner, M. (1998). A collective extension of relational grammar. Logic Journal of the IGPL, 6(2), 175-193. doi:10.1093/jigpal/6.2.175.

    Abstract

    Relational grammar was proposed in Suppes (1976) as a semantical grammar for natural language. Fragments considered so far are restricted to distributive notions. In this article, relational grammar is extended to collective notions.
  • Bowerman, M. (1985). Beyond communicative adequacy: From piecemeal knowledge to an integrated system in the child's acquisition of language. In K. Nelson (Ed.), Children's language (pp. 369-398). Hillsdale, N.J.: Lawrence Erlbaum.

    Abstract

    (From the chapter) The first section considers very briefly the kinds of processes that can be inferred to underlie errors that do not set in until after a period of correct usage; acquisition often seems to be a more extended process than we have envisioned. The chapter then summarizes a currently influential model of how linguistic forms, meaning, and communication are interrelated in the acquisition of language, points out some challenging problems for this model, and suggests that the notion of "meaning" in language must be reconceptualized before we can hope to solve these problems. Evidence from several types of late errors is marshalled in support of these arguments. (From the preface) The chapter provides many examples of new errors that children introduce at relatively advanced stages of mastery of semantics and syntax. Bowerman views these seemingly backwards steps as indications of definite steps forward by the child toward achieving reflective, flexible and integrated systems of semantics and syntax.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Bowerman, M. (2004). From universal to language-specific in early grammatical development [Reprint]. In K. Trott, S. Dobbinson, & P. Griffiths (Eds.), The child language reader (pp. 131-146). London: Routledge.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with directed motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M., & Meyer, A. (1991). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.12 1991. Nijmegen: MPI for Psycholinguistics.
  • Bowerman, M. (1976). Le relazioni strutturali nel linguaggio infantile: sintattiche o semantiche? [Reprint]. In F. Antinucci, & C. Castelfranchi (Eds.), Psicolinguistica: Percezione, memoria e apprendimento del linguaggio (pp. 303-321). Bologna: Il Mulino.

    Abstract

    Reprinted from Bowerman, M. (1973). Structural relationships in children's utterances: Semantic or syntactic? In T. Moore (Ed.), Cognitive development and the acquisition of language (pp. 197-213). New York: Academic Press.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Li, P., & Bowerman, M. (1998). The acquisition of lexical and grammatical aspect in Chinese. First Language, 18, 311-350. doi:10.1177/014272379801805404.

    Abstract

    This study reports three experiments on how children learning Mandarin Chinese comprehend and use aspect markers. These experiments examine the role of lexical aspect in children's acquisition of grammatical aspect. Results provide converging evidence for children's early sensitivity to (1) the association between atelic verbs and the imperfective aspect markers zai, -zhe, and -ne, and (2) the association between telic verbs and the perfective aspect marker -le. Children did not show a sensitivity in their use or understanding of aspect markers to the difference between stative and activity verbs or between semelfactive and activity verbs. These results are consistent with Slobin's (1985) basic child grammar hypothesis that the contrast between process and result is important in children's early acquisition of temporal morphology. In contrast, they are inconsistent with Bickerton's (1981, 1984) language bioprogram hypothesis that the distinctions between state and process and between punctual and nonpunctual are preprogrammed into language learners. We suggest new ways of looking at the results in the light of recent probabilistic hypotheses that emphasize the role of input, prototypes and connectionist representations.
  • Bowerman, M. (1976). Semantic factors in the acquisition of rules for word use and sentence construction. In D. Morehead, & A. Morehead (Eds.), Directions in normal and deficient language development (pp. 99-179). Baltimore: University Park Press.
  • Bowerman, M. (1985). What shapes children's grammars? In D. Slobin (Ed.), The crosslinguistic study of language acquisition (pp. 1257-1319). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2016). Knowing that strawberries are red and seeing red strawberries: The interaction between surface colour and colour knowledge information. Journal of Cognitive Psychology, 28(6), 641-657. doi:10.1080/20445911.2016.1182171.

    Abstract

    This study investigates the interaction between surface and colour knowledge information during object recognition. In two different experiments, participants were instructed to decide whether two presented stimuli belonged to the same object identity. On the non-matching trials, we manipulated the shape and colour knowledge information activated by the two stimuli by creating four different stimulus pairs: (1) similar in shape and colour (e.g. TOMATO–APPLE); (2) similar in shape and dissimilar in colour (e.g. TOMATO–COCONUT); (3) dissimilar in shape and similar in colour (e.g. TOMATO–CHILI PEPPER) and (4) dissimilar in both shape and colour (e.g. TOMATO–PEANUT). The object pictures were presented in typical and atypical colours and also in black-and-white. The interaction between surface and colour knowledge showed to be contingent upon shape information: while colour knowledge is more important for recognising structurally similar shaped objects, surface colour is more prominent for recognising structurally dissimilar shaped objects.
  • Brehm, L., & Goldrick, M. (2016). Empirical and conceptual challenges for neurocognitive theories of language production. Language, Cognition and Neuroscience, 31(4), 504-507. doi:10.1080/23273798.2015.1110604.
  • Brehm, L. (2014). Speed limits and red flags: Why number agreement accidents happen. PhD Thesis, University of Illinois at Urbana-Champaign, Urbana-Champaign, Il.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resource Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., & Lannom, L. (2014). Data Type Registries: A Research Data Alliance Working Group. D-Lib Magazine, 20, 1. doi:10.1045/january2014-broeder.

    Abstract

    Automated processing of large amounts of scientific data, especially across domains, requires that the data can be selected and parsed without human intervention. Precise characterization of that data, as in typing, is needed once the processing goes beyond the realm of domain specific or local research group assumptions. The Research Data Alliance (RDA) Data Type Registries Working Group (DTR-WG) was assembled to address this issue through the creation of a Data Type Registry methodology, data model, and prototype. The WG was approved by the RDA Council during March of 2013 and will complete its work in mid-2014, in between the third and fourth RDA Plenaries.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broeder, D., & Van Uytvanck, D. (2014). Metadata formats. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 150-165). Oxford: Oxford University Press.
  • Broeder, D., Schuurman, I., & Windhouwer, M. (2014). Experiences with the ISOcat Data Category Registry. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 4565-4568).
  • Broersma, M., Carter, D., & Acheson, D. J. (2016). Cognate costs in bilingual speech production: Evidence from language switching. Frontiers in Psychology, 7: 1461. doi:10.3389/fpsyg.2016.01461.

    Abstract

    This study investigates cross-language lexical competition in the bilingual mental lexicon. It provides evidence for the occurrence of inhibition as well as the commonly reported facilitation during the production of cognates (words with similar phonological form and meaning in two languages) in a mixed picture naming task by highly proficient Welsh-English bilinguals. Previous studies have typically found cognate facilitation. It has previously been proposed (with respect to non-cognates) that cross-language inhibition is limited to low-proficient bilinguals; therefore, we tested highly proficient, early bilinguals. In a mixed naming experiment (i.e., picture naming with language switching), 48 highly proficient, early Welsh-English bilinguals named pictures in Welsh and English, including cognate and non-cognate targets. Participants were English-dominant, Welsh-dominant, or had equal language dominance. The results showed evidence for cognate inhibition in two ways. First, both facilitation and inhibition were found on the cognate trials themselves, compared to non-cognate controls, modulated by the participants' language dominance. The English-dominant group showed cognate inhibition when naming in Welsh (and no difference between cognates and controls when naming in English), and the Welsh-dominant and equal dominance groups generally showed cognate facilitation. Second, cognate inhibition was found as a behavioral adaptation effect, with slower naming for non-cognate filler words in trials after cognates than after non-cognate controls. This effect was consistent across all language dominance groups and both target languages, suggesting that cognate production involved cognitive control even if this was not measurable in the cognate trials themselves. Finally, the results replicated patterns of symmetrical switch costs, as commonly reported for balanced bilinguals. 
We propose that cognate processing might be affected by two different processes, namely competition at the lexical-semantic level and facilitation at the word form level, and that facilitation at the word form level might (sometimes) outweigh any effects of inhibition at the lemma level. In sum, this study provides evidence that cognate naming can cause costs in addition to benefits. The finding of cognate inhibition, particularly for the highly proficient bilinguals tested, provides strong evidence for the occurrence of lexical competition across languages in the bilingual mental lexicon.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjin Printing Co.
  • Brouwer, S., & Bradlow, A. R. (2014). Contextual variability during speech-in-speech recognition. The Journal of the Acoustical Society of America, 136(1), EL26-EL32. doi:10.1121/1.4881322.

    Abstract

    This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task in either “pure” background conditions in which all trials had either English or Dutch background babble or in mixed background conditions in which the background language varied across trials (i.e., a mix of English and Dutch or one of these background languages mixed with quiet trials). This design allowed the authors to compare performance on identical trials across pure and mixed conditions. The data reveal that speech-in-speech recognition is sensitive to contextual variation in terms of the target-background language (mis)match depending on the relative ease/difficulty of the test trials in relation to the surrounding trials.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying face upwards spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4-12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P. (1998). Children's first verbs in Tzeltal: Evidence for an early verb category. Linguistics, 36(4), 713-753.

    Abstract

    A major finding in studies of early vocabulary acquisition has been that children tend to learn a lot of nouns early but make do with relatively few verbs, among which semantically general-purpose verbs like do, make, get, have, give, come, go, and be play a prominent role. The preponderance of nouns is explained in terms of nouns labelling concrete objects being “easier” to learn than verbs, which label relational categories. Nouns label “natural categories” observable in the world, verbs label more linguistically and culturally specific categories of events linking objects belonging to such natural categories (Gentner 1978, 1982; Clark 1993). This view has been challenged recently by data from children learning certain non-Indo-European languages like Korean, where children have an early verb explosion and verbs dominate in early child utterances. Children learning the Mayan language Tzeltal also acquire verbs early, prior to any noun explosion as measured by production. Verb types are roughly equivalent to noun types in children’s beginning production vocabulary and soon outnumber them. At the one-word stage children’s verbs mostly have the form of a root stripped of affixes, correctly segmented despite structural difficulties. Quite early (before the MLU 2.0 point) there is evidence of productivity of some grammatical markers (although they are not always present): the person-marking affixes cross-referencing core arguments, and the completive/incompletive aspectual distinctions. The Tzeltal facts argue against a natural-categories explanation for children’s early vocabulary, in favor of a view emphasizing the early effects of language-specific properties of the input. They suggest that when and how a child acquires a “verb” category is centrally influenced by the structural properties of the input, and that the semantic structure of the language - where the referential load is concentrated - plays a fundamental role in addition to distributional facts.
  • Brown, P. (1998). Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech. Journal of Linguistic Anthropology, 8(2), 197-221. doi:10.1525/jlin.1998.8.2.197.

    Abstract

    When Tzeltal children in the Mayan community of Tenejapa, in southern Mexico, begin speaking, their production vocabulary consists predominantly of verb roots, in contrast to the dominance of nouns in the initial vocabulary of first‐language learners of Indo‐European languages. This article proposes that a particular Tzeltal conversational feature—known in the Mayanist literature as "dialogic repetition"—provides a context that facilitates the early analysis and use of verbs. Although Tzeltal babies are not treated by adults as genuine interlocutors worthy of sustained interaction, dialogic repetition in the speech the children are exposed to may have an important role in revealing to them the structural properties of the language, as well as in socializing the collaborative style of verbal interaction adults favor in this community.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, P. (1998). [Review of the book by A.J. Wootton, Interaction and the development of mind]. Journal of the Royal Anthropological Institute, 4(4), 816-817.
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P. (2014). Gestures in native Mexico and Central America. In C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill, & J. Bressem (Eds.), Body -language – communication: An international handbook on multimodality in human interaction. Volume 2 (pp. 1206-1215). Berlin: Mouton de Gruyter.

    Abstract

    The systematic study of kinesics, gaze, and gestural aspects of communication in Central American cultures is a recent phenomenon, most of it focussing on the Mayan cultures of southern Mexico, Guatemala, and Belize. This article surveys ethnographic observations and research reports on bodily aspects of speaking in three domains: gaze and kinesics in social interaction, indexical pointing in adult and caregiver-child interactions, and co-speech gestures associated with “absolute” (geographically-based) systems of spatial reference. In addition, it reports how the indigenous co-speech gesture repertoire has provided the basis for developing village sign languages in the region. It is argued that studies of the embodied aspects of speech in the Mayan areas of Mexico and Central America have contributed to the typology of gestures and of spatial frames of reference. They have refined our understanding of how spatial frames of reference are invoked, communicated, and switched in conversational interaction and of the importance of co-speech gestures in understanding language use, language acquisition, and the transmission of culture-specific cognitive styles.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (1998). La identificación de las raíces verbales en Tzeltal (Maya): Cómo lo hacen los niños? Función, 17-18, 121-146.

    Abstract

    This is a Spanish translation of Brown 1997.
  • Brown, P., & Gaskins, S. (2014). Language acquisition and language socialization. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), Cambridge handbook of linguistic anthropology (pp. 187-226). Cambridge: Cambridge University Press.
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P. (1991). Sind Frauen höflicher? Befunde aus einer Maya-Gemeinde. In S. Günther, & H. Kotthoff (Eds.), Von fremden Stimmen: Weibliches und männliches Sprechen im Kulturvergleich. Frankfurt am Main: Suhrkamp.

    Abstract

    This is a German translation of Brown 1980, How and why are women more polite: Some evidence from a Mayan community.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P. (2014). The interactional context of language learning in Tzeltal. In I. Arnon, M. Casillas, C. Kurumada, & B. Estigarriba (Eds.), Language in Interaction: Studies in honor of Eve V. Clark (pp. 51-82). Amsterdam: Benjamins.

    Abstract

    This paper addresses the theories of Eve Clark about how children learn word meanings in western middle-class interactional contexts by examining child language data from a Tzeltal Maya society in southern Mexico where interaction patterns are radically different. Through examples of caregiver interactions with children 12-30 months old, I ask what lessons we can learn from how the details of these interactions unfold in this non-child-centered cultural context, and specifically, what aspects of the Tzeltal linguistic and interactional context might help to focus children’s attention on the meanings and the conventional forms of words being used around them.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Brucato, N., DeLisi, L. E., Fisher, S. E., & Francks, C. (2014). Hypomethylation of the paternally inherited LRRTM1 promoter linked to schizophrenia. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 165(7), 555-563. doi:10.1002/ajmg.b.32258.

    Abstract

    Epigenetic effects on psychiatric traits remain relatively under-studied, and it remains unclear what the sizes of individual epigenetic effects may be, or how they vary between different clinical populations. The gene LRRTM1 (chromosome 2p12) has previously been linked and associated with schizophrenia in a parent-of-origin manner in a set of affected siblings (LOD = 4.72), indirectly suggesting a disruption of paternal imprinting at this locus in these families. From the same set of siblings that originally showed strong linkage at this locus, we analyzed 99 individuals using 454-bisulfite sequencing, from whole blood DNA, to measure the level of DNA methylation in the promoter region of LRRTM1. We also assessed seven additional loci that would be informative to compare. Paternal identity-by-descent sharing at LRRTM1, within sibling pairs, was linked to their similarity of methylation at the gene's promoter. Reduced methylation at the promoter showed a significant association with schizophrenia. Sibling pairs concordant for schizophrenia showed more similar methylation levels at the LRRTM1 promoter than diagnostically discordant pairs. The alleles of common SNPs spanning the locus did not explain this epigenetic linkage, which can therefore be considered as largely independent of DNA sequence variation and would not be detected in standard genetic association analysis. Our data suggest that hypomethylation at the LRRTM1 promoter, particularly of the paternally inherited allele, was a risk factor for the development of schizophrenia in this set of siblings affected with familial schizophrenia, and that had previously showed linkage at this locus in an affected-sib-pair context.
  • Bruggeman, L., & Cutler, A. (2016). Lexical manipulation as a discovery tool for psycholinguistic research. In C. Carignan, & M. D. Tyler (Eds.), Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016) (pp. 313-316).
  • Bruggeman, L. (2016). Nativeness, dominance, and the flexibility of listening to spoken language. PhD Thesis, Western Sydney University, Sydney.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Buckler, H. (2014). The acquisition of morphophonological alternations across languages. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2024). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech. Advance online publication. doi:10.1177/00238309241258162.

    Abstract

    Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
  • Bulut, T., & Hagoort, P. (2024). Contributions of the left and right thalami to language: A meta-analytic approach. Brain Structure & Function. Advance online publication. doi:10.1007/s00429-024-02795-3.

    Abstract

    Background: Despite a pervasive cortico-centric view in cognitive neuroscience, subcortical structures including the thalamus have been shown to be increasingly involved in higher cognitive functions. Previous structural and functional imaging studies demonstrated cortico-thalamo-cortical loops which may support various cognitive functions including language. However, large-scale functional connectivity of the thalamus during language tasks has not been examined before. Methods: The present study employed meta-analytic connectivity modeling to identify language-related coactivation patterns of the left and right thalami. The left and right thalami were used as regions of interest to search the BrainMap functional database for neuroimaging experiments with healthy participants reporting language-related activations in each region of interest. Activation likelihood estimation analyses were then carried out on the foci extracted from the identified studies to estimate functional convergence for each thalamus. A functional decoding analysis based on the same database was conducted to characterize thalamic contributions to different language functions. Results: The results revealed bilateral frontotemporal and bilateral subcortical (basal ganglia) coactivation patterns for both the left and right thalami, and also right cerebellar coactivations for the left thalamus, during language processing. In light of previous empirical studies and theoretical frameworks, the present connectivity and functional decoding findings suggest that cortico-subcortical-cerebellar-cortical loops modulate and fine-tune information transfer within the bilateral frontotemporal cortices during language processing, especially during production and semantic operations, but also other language (e.g., syntax, phonology) and cognitive operations (e.g., attention, cognitive control). Conclusion: The current findings show that the language-relevant network extends beyond the classical left perisylvian cortices and spans bilateral cortical, bilateral subcortical (bilateral thalamus, bilateral basal ganglia) and right cerebellar regions.

    Additional information

    supplementary information
  • Bulut, T., & Temiz, G. (2024). Cortical organization of action and object naming in Turkish: A transcranial magnetic stimulation study. Psikoloji Çalışmaları / Studies in Psychology, 44(2), 235-254. doi:10.26650/SP2023-1279982.

    Abstract

    It is controversial whether the linguistic distinction between nouns and verbs is reflected in the cortical organization of the lexicon. Neuropsychological studies of aphasia and neuroimaging studies have associated the left prefrontal cortex, particularly Broca’s area, with verbs/actions, and the left posterior temporal cortex, particularly Wernicke’s area, with nouns/objects. However, more recent research has revealed that evidence for this distinction is inconsistent. Against this background, the present study employed low-frequency repetitive transcranial magnetic stimulation (rTMS) to investigate the dissociation of action and object naming in Broca’s and Wernicke’s areas in Turkish. Thirty-six healthy adult participants took part in the study. In two experiments, low-frequency (1 Hz) inhibitory rTMS was administered at 100% of motor threshold for 10 minutes to suppress the activity of the left prefrontal cortex spanning Broca’s area or the left posterior temporal cortex spanning Wernicke’s area. A picture naming task involving objects and actions was employed before and after the stimulation sessions to examine any pre- to post-stimulation changes in naming latencies. Linear mixed models that included various psycholinguistic covariates including frequency, visual and conceptual complexity, age of acquisition, name agreement and word length were fitted to the data. The findings showed that conceptual complexity, age of acquisition of the target word and name agreement had a significant effect on naming latencies, which was consistent across both experiments. Critically, the findings significantly associated Broca’s area, but not Wernicke’s area, in the distinction between naming objects and actions. Suppression of Broca’s area led to a significant and robust increase in naming latencies (or slowdown) for objects and a marginally significant, but not robust, reduction in naming latencies (or speedup) for actions. The findings suggest that actions and objects in Turkish can be dissociated in Broca’s area.
  • Burchardt, L., Van de Sande, Y., Kehy, M., Gamba, M., Ravignani, A., & Pouw, W. (2024). A toolkit for the dynamic study of air sacs in siamang and other elastic circular structures. PLOS Computational Biology, 20(6): e1012222. doi:10.1371/journal.pcbi.1012222.

    Abstract

    Biological structures are defined by rigid elements, such as bones, and elastic elements, like muscles and membranes. Computer vision advances have enabled automatic tracking of moving animal skeletal poses. Such developments provide insights into complex time-varying dynamics of biological motion. Conversely, the elastic soft-tissues of organisms, like the nose of elephant seals, or the buccal sac of frogs, are poorly studied and no computer vision methods have been proposed. This leaves major gaps in different areas of biology. In primatology, most critically, the function of air sacs is widely debated; many open questions on the role of air sacs in the evolution of animal communication, including human speech, remain unanswered. To support the dynamic study of soft-tissue structures, we present a toolkit for the automated tracking of semi-circular elastic structures in biological video data. The toolkit contains unsupervised computer vision tools (using Hough transform) and supervised deep learning (by adapting DeepLabCut) methodology to track inflation of laryngeal air sacs or other biological spherical objects (e.g., gular cavities). Confirming the value of elastic kinematic analysis, we show that air sac inflation correlates with acoustic markers that likely inform about body size. Finally, we present a pre-processed audiovisual-kinematic dataset of 7+ hours of closeup audiovisual recordings of siamang (Symphalangus syndactylus) singing. This toolkit (https://github.com/WimPouw/AirSacTracker) aims to revitalize the study of non-skeletal morphological structures across multiple species.
  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., & Kruspe, N. (2016). The language of eating and drinking: A window on Orang Asli meaning-making. In K. Endicott (Ed.), Malaysia’s original people: Past, present and future of the Orang Asli (pp. 175-199). Singapore: National University of Singapore Press.
  • Cai, D., Fonteijn, H. M., Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Hoogman, M., Arias Vásquez, A., Yang, Y., Buitelaar, J., Fernández, G., Brunner, H. G., Van Bokhoven, H., Franke, B., Hegenscheid, K., Homuth, G., Fisher, S. E., Grabe, H. J., Francks, C., & Hagoort, P. (2014). A genome wide search for quantitative trait loci affecting the cortical surface area and thickness of Heschl's gyrus. Genes, Brain and Behavior, 13, 675-685. doi:10.1111/gbb.12157.

    Abstract

    Heschl's gyrus (HG) is a core region of the auditory cortex whose morphology is highly variable across individuals. This variability has been linked to sound perception ability in both speech and music domains. Previous studies show that variations in morphological features of HG, such as cortical surface area and thickness, are heritable. To identify genetic variants that affect HG morphology, we conducted a genome-wide association scan (GWAS) meta-analysis in 3054 healthy individuals using HG surface area and thickness as quantitative traits. None of the single nucleotide polymorphisms (SNPs) showed association P values that would survive correction for multiple testing over the genome. The most significant association was found between right HG area and SNP rs72932726 close to gene DCBLD2 (3q12.1; P=2.77x10(-7)). This SNP was also associated with other regions involved in speech processing. The SNP rs333332 within gene KALRN (3q21.2; P=2.27x10(-6)) and rs143000161 near gene COBLL1 (2q24.3; P=2.40x10(-6)) were associated with the area and thickness of left HG, respectively. Both genes are involved in the development of the nervous system. The SNP rs7062395 close to the X-linked deafness gene POU3F4 was associated with right HG thickness (Xq21.1; P=2.38x10(-6)). This is the first molecular genetic analysis of variability in HG morphology.
  • Capilla, A., Schoffelen, J.-M., Paterson, G., Thut, G., & Gross, J. (2014). Dissociated α-band modulations in the dorsal and ventral visual pathways in visuospatial attention and perception. Cerebral Cortex., 24(2), 550-561. doi:10.1093/cercor/bhs343.

    Abstract

    Modulations of occipito-parietal α-band (8–14 Hz) power that are opposite in direction (α-enhancement vs. α-suppression) and origin of generation (ipsilateral vs. contralateral to the locus of attention) are a robust correlate of anticipatory visuospatial attention. Yet, the neural generators of these α-band modulations, their interdependence across homotopic areas, and their respective contribution to subsequent perception remain unclear. To shed light on these questions, we employed magnetoencephalography, while human volunteers performed a spatially cued detection task. Replicating previous findings, we found α-power enhancement ipsilateral to the attended hemifield and contralateral α-suppression over occipitoparietal sensors. Source localization (beamforming) analysis showed that α-enhancement and suppression were generated in 2 distinct brain regions, located in the dorsal and ventral visual streams, respectively. Moreover, α-enhancement and suppression showed different dynamics and contribution to perception. In contrast to the initial and transient dorsal α-enhancement, α-suppression in ventro-lateral occipital cortex was sustained and influenced subsequent target detection. This anticipatory biasing of ventrolateral extrastriate α-activity probably reflects increased receptivity in the brain region specialized in processing upcoming target features. Our results add to current models on the role of α-oscillations in attention orienting by showing that α-enhancement and suppression can be dissociated in time, space, and perceptual relevance.

    Additional information

    Capilla_Suppl_Data.pdf
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., Bozic, M., & Marslen-Wilson, W. (2016). Decompositional Representation of Morphological Complexity: Multivariate fMRI Evidence from Italian. Journal of Cognitive Neuroscience, 28(12), 1878-1896. doi:10.1162/jocn_a_01009.

    Abstract

    Derivational morphology is a cross-linguistically dominant mechanism for word formation, combining existing words with derivational affixes to create new word forms. However, the neurocognitive mechanisms underlying the representation and processing of such forms remain unclear. Recent cross-linguistic neuroimaging research suggests that derived words are stored and accessed as whole forms, without engaging the left-hemisphere perisylvian network associated with combinatorial processing of syntactically and inflectionally complex forms. Using fMRI with a “simple listening” no-task procedure, we reexamine these suggestions in the context of the root-based combinatorially rich Italian lexicon to clarify the role of semantic transparency (between the derived form and its stem) and affix productivity in determining whether derived forms are decompositionally represented and which neural systems are involved. Combined univariate and multivariate analyses reveal a key role for semantic transparency, modulated by affix productivity. Opaque forms show strong cohort competition effects, especially for words with nonproductive suffixes (ventura, “destiny”). The bilateral frontotemporal activity associated with these effects indicates that opaque derived words are processed as whole forms in the bihemispheric language system. Semantically transparent words with productive affixes (libreria, “bookshop”) showed no effects of lexical competition, suggesting morphologically structured co-representation of these derived forms and their stems, whereas transparent forms with nonproductive affixes (pineta, “pine forest”) show intermediate effects. Further multivariate analyses of the transparent derived forms revealed affix productivity effects selectively involving left inferior frontal regions, suggesting that the combinatorial and decompositional processes triggered by such forms can vary significantly across languages.
  • Carrion Castillo, A. (2016). Deciphering common and rare genetic effects on reading ability. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Carrion Castillo, A., van Bergen, E., Vino, A., van Zuijen, T., de Jong, P. F., Francks, C., & Fisher, S. E. (2016). Evaluation of results from genome-wide studies of language and reading in a novel independent dataset. Genes, Brain and Behavior, 15(6), 531-541. doi:10.1111/gbb.12299.

    Abstract

    Recent genome-wide association scans (GWAS) for reading and language abilities have pinpointed promising new candidate loci. However, the potential contributions of these loci remain to be validated. In the present study, we tested 17 of the most significantly associated single nucleotide polymorphisms (SNPs) from these GWAS studies (p < 10−6 in the original studies) in a new independent population dataset from the Netherlands, known as FIOLA (Familial Influences On Literacy Abilities). This dataset comprised 483 children from 307 nuclear families, plus 505 adults (including parents of participating children), and provided adequate statistical power to detect the effects that were previously reported. The following measures of reading and language performance were collected: word reading fluency, nonword reading fluency, phonological awareness, and rapid automatized naming. Two SNPs (rs12636438, rs7187223) were associated with performance in multivariate and univariate testing, but these did not remain significant after correction for multiple testing. Another SNP (rs482700) was only nominally associated in the multivariate test. For the rest of the SNPs we did not find supportive evidence of association. The findings may reflect differences between our study and the previous investigations in respects such as the language of testing, the exact tests used, and the recruitment criteria. Alternatively, most of the prior reported associations may have been false positives. A larger scale GWAS meta-analysis than those previously performed will likely be required to obtain robust insights into the genomic architecture underlying reading and language.
  • Cartmill, E. A., Roberts, S. G., Lyn, H., & Cornish, H. (Eds.). (2014). The Evolution of Language: Proceedings of the 10th International Conference. Singapore: World Scientific.

    Abstract

    This volume comprises refereed papers and abstracts of the 10th International Conference on the Evolution of Language (EVOLANG X), held in Vienna on 14–17th April 2014. As the leading international conference in the field, the biennial EVOLANG meeting is characterised by an invigorating, multidisciplinary approach to the origins and evolution of human language, and brings together researchers from many subject areas, including anthropology, archaeology, biology, cognitive science, computer science, genetics, linguistics, neuroscience, palaeontology, primatology and psychology. For this 10th conference, the proceedings will include a special perspectives section featuring prominent researchers reflecting on the history of the conference and its impact on the field of language evolution since the inaugural EVOLANG conference in 1996.
  • Casillas, M. (2014). Taking the floor on time: Delay and deferral in children’s turn taking. In I. Arnon, M. Casillas, C. Kurumada, & B. Estigarribia (Eds.), Language in Interaction: Studies in honor of Eve V. Clark (pp. 101-114). Amsterdam: Benjamins.

    Abstract

    A key part of learning to speak with others is figuring out when to start talking and how to hold the floor in conversation. For young children, the challenge of planning a linguistic response can slow down their response latencies, making misunderstanding, repair, and loss of the floor more likely. Like adults, children can mitigate their delays by using fillers (e.g., uh and um) at the start of their turns. In this chapter I analyze the onset and development of fillers in five children’s spontaneous speech from ages 1;6–3;6. My findings suggest that children start using fillers by 2;0, and use them to effectively mitigate delay in making a response.
  • Casillas, M., Bobb, S. C., & Clark, E. V. (2016). Turn taking, timing, and planning in early language acquisition. Journal of Child Language, 43, 1310-1337. doi:10.1017/S0305000915000689.

    Abstract

    Young children answer questions with longer delays than adults do, and they don't reach typical adult response times until several years later. We hypothesized that this prolonged pattern of delay in children's timing results from competing demands: to give an answer, children must understand a question while simultaneously planning and initiating their response. Even as children get older and more efficient in this process, the demands on them increase because their verbal responses become more complex. We analyzed conversational question-answer sequences between caregivers and their children from ages 1;8 to 3;5, finding that children (1) initiate simple answers more quickly than complex ones, (2) initiate simple answers quickly from an early age, and (3) initiate complex answers more quickly as they grow older. Our results suggest that children aim to respond quickly from the start, improving on earlier-acquired answer types while they begin to practice later-acquired, slower ones.

    Additional information

    S0305000915000689sup001.docx
  • Casillas, M. (2014). Turn-taking. In D. Matthews (Ed.), Pragmatic development in first language acquisition (pp. 53-70). Amsterdam: Benjamins.

    Abstract

    Conversation is a structured, joint action for which children need to learn a specialized set of skills and conventions. Because conversation is a primary source of linguistic input, we can better grasp how children become active agents in their own linguistic development by studying their acquisition of conversational skills. In this chapter I review research on children’s turn-taking. This fundamental skill of human interaction allows children to gain feedback, make clarifications, and test hypotheses at every stage of development. I broadly review children’s conversational experiences, the types of turn-based contingency they must acquire, how they ask and answer questions, and when they manage to make timely responses.
  • Casillas, M., Foushee, R., Méndez Girón, J., Polian, G., & Brown, P. (2024). Little evidence for a noun bias in Tseltal spontaneous speech. First Language. Advance online publication. doi:10.1177/01427237231216571.

    Abstract

    This study examines whether children acquiring Tseltal (Mayan) demonstrate a noun bias – an overrepresentation of nouns in their early vocabularies. Nouns, specifically concrete and animate nouns, are argued to universally predominate in children’s early vocabularies because their referents are naturally available as bounded concepts to which linguistic labels can be mapped. This early advantage for noun learning has been documented using multiple methods and across a diverse collection of language populations. However, past evidence bearing on a noun bias in Tseltal learners has been mixed. Tseltal grammatical features and child–caregiver interactional patterns dampen the salience of nouns and heighten the salience of verbs, leading to the prediction of a diminished noun bias and perhaps even an early predominance of verbs. We here analyze the use of noun and verb stems in children’s spontaneous speech from egocentric daylong recordings of 29 Tseltal learners between 0;9 and 4;4. We find weak to no evidence for a noun bias using two separate analytical approaches on the same data; one analysis yields a preliminary suggestion of a flipped outcome (i.e. a verb bias). We discuss the implications of these findings for broader theories of learning bias in early lexical development.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Ceroni, F., Simpson, N. H., Francks, C., Baird, G., Conti-Ramsden, G., Clark, A., Bolton, P. F., Hennessy, E. R., Donnelly, P., Bentley, D. R., Martin, H., IMGSAC, SLI Consortium, WGS500 Consortium, Parr, J., Pagnamenta, A. T., Maestrini, E., Bacchelli, E., Fisher, S. E., & Newbury, D. F. (2014). Homozygous microdeletion of exon 5 in ZNF277 in a girl with specific language impairment. European Journal of Human Genetics, 22, 1165-1171. doi:10.1038/ejhg.2014.4.

    Abstract

    Specific language impairment (SLI), an unexpected failure to develop appropriate language skills despite adequate non-verbal intelligence, is a heterogeneous multifactorial disorder with a complex genetic basis. We identified a homozygous microdeletion of 21,379 bp in the ZNF277 gene (NM_021994.2), encompassing exon 5, in an individual with severe receptive and expressive language impairment. The microdeletion was not found in the proband’s affected sister or her brother who had mild language impairment. However, it was inherited from both parents, each of whom carries a heterozygous microdeletion and has a history of language problems. The microdeletion falls within the AUTS1 locus, a region linked to autistic spectrum disorders (ASDs). Moreover, ZNF277 is adjacent to the DOCK4 and IMMP2L genes, which have been implicated in ASD. We screened for the presence of ZNF277 microdeletions in cohorts of children with SLI or ASD and panels of control subjects. ZNF277 microdeletions were at an increased allelic frequency in SLI probands (1.1%) compared with both ASD family members (0.3%) and independent controls (0.4%). We performed quantitative RT-PCR analyses of the expression of IMMP2L, DOCK4 and ZNF277 in individuals carrying either an IMMP2L_DOCK4 microdeletion or a ZNF277 microdeletion. Although ZNF277 microdeletions reduce the expression of ZNF277, they do not alter the levels of DOCK4 or IMMP2L transcripts. Conversely, IMMP2L_DOCK4 microdeletions do not affect the expression levels of ZNF277. We postulate that ZNF277 microdeletions may contribute to the risk of language impairments in a manner that is independent of the autism risk loci previously described in this region.
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). Does the speaker’s eye gaze facilitate infants’ word segmentation from continuous speech? An ERP study. Developmental Science, 27(2): e13436. doi:10.1111/desc.13436.

    Abstract

    The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants’ word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants’ familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words.
  • Çetinçelik, M., Jordan‐Barros, A., Rowland, C. F., & Snijders, T. M. (2024). The effect of visual speech cues on neural tracking of speech in 10‐month‐old infants. European Journal of Neuroscience. Advance online publication. doi:10.1111/ejn.16492.

    Abstract

    While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1–1.75 and 2.5–3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.

    Additional information

    supplementary materials
  • Çetinçelik, M. (2024). A look into language: The role of visual cues in early language acquisition in the infant brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Chabout, J., Sarkar, A., Patel, S., Radden, T., Dunson, D., Fisher, S. E., & Jarvis, E. (2016). A Foxp2 mutation implicated in human speech deficits alters sequencing of ultrasonic vocalizations in adult male mice. Frontiers in Behavioral Neuroscience, 10: 197. doi:10.3389/fnbeh.2016.00197.

    Abstract

    Development of proficient spoken language skills is disrupted by mutations of the FOXP2 transcription factor. A heterozygous missense mutation in the KE family causes speech apraxia, involving difficulty producing words with complex learned sequences of syllables. Manipulations in songbirds have helped to elucidate the role of this gene in vocal learning, but findings in non-human mammals have been limited or inconclusive. Here we performed a systematic study of ultrasonic vocalizations (USVs) of adult male mice carrying the KE family mutation. Using novel statistical tools, we found that Foxp2 heterozygous mice did not have detectable changes in USV syllable acoustic structure, but produced shorter sequences and did not shift to more complex syntax in social contexts where wildtype animals did. Heterozygous mice also displayed a shift in the position of their rudimentary laryngeal motor cortex layer-5 neurons. Our findings indicate that although mouse USVs are mostly innate, the underlying contributions of FoxP2 to sequencing of vocalizations are conserved with humans.
  • Chalfoun, A., Rossi, G., & Stivers, T. (2024). The magic word? Face-work and the functions of 'please' in everyday requests. Social Psychology Quarterly. doi:10.1177/01902725241245141.

    Abstract

    Expressions of politeness such as 'please' are prominent elements of interactional conduct that are explicitly targeted in early socialization and are subject to cultural expectations around socially desirable behavior. Yet their specific interactional functions remain poorly understood. Using conversation analysis supplemented with systematic coding, this study investigates when and where interactants use 'please' in everyday requests. We find that 'please' is rare, occurring in only 7 percent of request attempts. Interactants use 'please' to manage face-threats when a request is ill fitted to its immediate interactional context. Within this, we identify two environments in which 'please' prototypically occurs. First, 'please' is used when the requestee has demonstrated unwillingness to comply. Second, 'please' is used when the request is intrusive due to its incompatibility with the requestee’s engagement in a competing action trajectory. Our findings advance research on politeness and extend Goffman’s theory of face-work, with particular salience for scholarship on request behavior.
  • Chang, F., & Fitz, H. (2014). Computational models of sentence production: A dual-path approach. In M. Goldrick, & M. Miozzo (Eds.), The Oxford handbook of language production (pp. 70-89). Oxford: Oxford University Press.

    Abstract

    Sentence production is the process we use to create language-specific sentences that convey particular meanings. In production, there are complex interactions between meaning, words, and syntax at different points in sentences. Computational models can make these interactions explicit and connectionist learning algorithms have been useful for building such models. Connectionist models use domain-general mechanisms to learn internal representations and these mechanisms can also explain evidence of long-term syntactic adaptation in adult speakers. This paper will review work showing that these models can generalize words in novel ways and learn typologically-different languages like English and Japanese. It will also present modeling work which shows that connectionist learning algorithms can account for complex sentence production in children and adult production phenomena like structural priming, heavy NP shift, and conceptual/lexical accessibility.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, A. (2014). Production-comprehension (A)Symmetry: Individual differences in the acquisition of prosodic focus-marking. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 423-427).

    Abstract

    Previous work based on different groups of children has shown that four- to five-year-old children are similar to adults in both producing and comprehending the focus-to-accentuation mapping in Dutch, contra the alleged production-precedes-comprehension asymmetry in earlier studies. In the current study, we addressed the question of whether there are individual differences in the production-comprehension (a)symmetricity. To this end, we examined the use of prosody in focus marking in production and the processing of focus-related prosody in online language comprehension in the same group of 4- to 5-year-olds. We have found that the relationship between comprehension and production can be rather diverse at an individual level. This result suggests some degree of independence in learning to use prosody to mark focus in production and learning to process focus-related prosodic information in online language comprehension, and implies influences of other linguistic and non-linguistic factors on the production-comprehension (a)symmetricity.
  • Chen, A., Chen, A., Kager, R., & Wong, P. (2014). Rises and falls in Dutch and Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 83-86).

    Abstract

    Despite the different functions of pitch in tone and non-tone languages, rises and falls are common pitch patterns across different languages. In the current study, we ask what the language-specific phonetic realization of rises and falls is. Chinese and Dutch speakers participated in a production experiment. We used contexts composed for conveying specific communicative purposes to elicit rises and falls. We measured both tonal alignment and tonal scaling for both patterns. For the alignment measurements, we found language-specific patterns for the rises, but not for the falls. For rises, both peak and valley were aligned later among Chinese speakers compared to Dutch speakers. For all the scaling measurements (maximum pitch, minimum pitch, and pitch range), no language-specific patterns were found for either the rises or the falls.
  • Cheung, C.-Y., Kirby, S., & Raviv, L. (2024). The role of gender, social bias and personality traits in shaping linguistic accommodation: An experimental approach. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 80-82). Nijmegen: The Evolution of Language Conferences. doi:10.17617/2.3587960.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V2(C)#C2V2(C)C3V3(C) vs. C1V2(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Cho, S.-J., Brown-Schmidt, S., Clough, S., & Duff, M. C. (2024). Comparing Functional Trend and Learning among Groups in Intensive Binary Longitudinal Eye-Tracking Data using By-Variable Smooth Functions of GAMM. Psychometrika. Advance online publication. doi:10.1007/s11336-024-09986-1.

    Abstract

    This paper presents a model specification for group comparisons regarding a functional trend over time within a trial and learning across a series of trials in intensive binary longitudinal eye-tracking data. The functional trend and learning effects are modeled using by-variable smooth functions. This model specification is formulated as a generalized additive mixed model, which allowed for the use of the freely available mgcv package (Wood in Package ‘mgcv.’ https://cran.r-project.org/web/packages/mgcv/mgcv.pdf, 2023) in R. The model specification was applied to intensive binary longitudinal eye-tracking data, where the questions of interest concern differences between individuals with and without brain injury in their real-time language comprehension and how this affects their learning over time. The results of the simulation study show that the model parameters are recovered well and the by-variable smooth functions are adequately predicted in the same condition as those found in the application.
  • Choi, J. (2014). Rediscovering a forgotten language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advanced planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it could be demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a last study, effects of syllable preparation and syllable frequency were investigated in a combined study to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chu, M., & Kita, S. (2016). Co-thought and Co-speech Gestures Are Generated by the Same Action Generation Process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 257-270. doi:10.1037/xlm0000168.

    Abstract

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).
  • Chu, M., Meyer, A. S., Foulkes, L., & Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: The role of cognitive abilities and empathy. Journal of Experimental Psychology: General, 143, 694-709. doi:10.1037/a0033861.

    Abstract

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.
  • Chu, M., & Hagoort, P. (2014). Synchronization of speech and gesture: Evidence for interaction in action. Journal of Experimental Psychology: General, 143(4), 1726-1741. doi:10.1037/a0036281.

    Abstract

    Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is continuous information exchange between the gesture and speech systems, during both their planning and execution phases. According to the ballistic view, information exchange occurs only during the planning phases of gesture and speech, but the 2 systems become independent once their execution has been initiated. In all experiments, participants were required to point to and/or name a light that had just lit up. Virtual reality and motion tracking technologies were used to disrupt their gesture or speech execution. Participants delayed their speech onset when their gesture was disrupted. They did so even when their gesture was disrupted at its late phase and even when they received only the kinesthetic feedback of their gesture. Also, participants prolonged their gestures when their speech was disrupted. These findings support the interactive view and add new constraints on models of speech and gesture production.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Ciulkinyte, A., Mountford, H. S., Fontanillas, P., 23andMe Research Team, Bates, T. C., Martin, N. G., Fisher, S. E., & Luciano, M. (2024). Genetic neurodevelopmental clustering and dyslexia. Molecular Psychiatry. Advance online publication. doi:10.1038/s41380-024-02649-8.

    Abstract

    Dyslexia is a learning difficulty with neurodevelopmental origins, manifesting as reduced accuracy and speed in reading and spelling. It is substantially heritable and frequently co-occurs with other neurodevelopmental conditions, particularly attention deficit-hyperactivity disorder (ADHD). Here, we investigate the genetic structure underlying dyslexia and a range of psychiatric traits using results from genome-wide association studies of dyslexia, ADHD, autism, anorexia nervosa, anxiety, bipolar disorder, major depressive disorder, obsessive compulsive disorder, schizophrenia, and Tourette syndrome. Genomic Structural Equation Modelling (GenomicSEM) showed heightened support for a model consisting of five correlated latent genomic factors described as: F1) compulsive disorders (including obsessive-compulsive disorder, anorexia nervosa, Tourette syndrome), F2) psychotic disorders (including bipolar disorder, schizophrenia), F3) internalising disorders (including anxiety disorder, major depressive disorder), F4) neurodevelopmental traits (including autism, ADHD), and F5) attention and learning difficulties (including ADHD, dyslexia). ADHD loaded more strongly on the attention and learning difficulties latent factor (F5) than on the neurodevelopmental traits latent factor (F4). The attention and learning difficulties latent factor (F5) was positively correlated with the internalising disorders (.40), neurodevelopmental traits (.25) and psychotic disorders (.17) latent factors, and negatively correlated with the compulsive disorders (–.16) latent factor. These factor correlations are mirrored in genetic correlations observed between the attention and learning difficulties latent factor and other cognitive, psychological and wellbeing traits. We further investigated genetic variants underlying both dyslexia and ADHD, which implicated 49 loci (40 not previously found in GWAS of the individual traits) mapping to 174 genes (121 not found in GWAS of individual traits) as potential pleiotropic variants. Our study confirms the increased genetic relation between dyslexia and ADHD versus other psychiatric traits and uncovers novel pleiotropic variants affecting both traits. In future, analyses including additional co-occurring traits such as dyscalculia and dyspraxia will allow a clearer definition of the attention and learning difficulties latent factor, yielding further insights into factor structure and pleiotropic effects.
  • Clark, N., & Perlman, M. (2014). Breath, vocal, and supralaryngeal flexibility in a human-reared gorilla. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).

    Abstract

    “Gesture-first” theories dismiss ancestral great apes’ vocalization as a substrate for language evolution based on the claim that extant apes exhibit minimal learning and volitional control of vocalization. Contrary to this claim, we present data of novel learned and voluntarily controlled vocal behaviors produced by a human-fostered gorilla (G. gorilla gorilla). These behaviors demonstrate varying degrees of flexibility in the vocal apparatus (including diaphragm, lungs, larynx, and supralaryngeal articulators), and are predominantly performed in coordination with manual behaviors and gestures. Instead of a gesture-first theory, we suggest that these findings support multimodal theories of language evolution in which vocal and gestural forms are coordinated and supplement one another.
  • Clark, E. V., & Casillas, M. (2016). First language acquisition. In K. Allen (Ed.), The Routledge Handbook of Linguistics (pp. 311-328). New York: Routledge.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Collins, J. (2016). The role of language contact in creating correlations between humidity and tone. Journal of Language Evolution, 46-52. doi:10.1093/jole/lzv012.