Publications

  • Bowerman, M. (1983). Hidden meanings: The role of covert conceptual structures in children's development of language. In D. Rogers, & J. A. Sloboda (Eds.), The acquisition of symbolic skills (pp. 445-470). New York: Plenum Press.
  • Bowerman, M. (1983). How do children avoid constructing an overly general grammar in the absence of feedback about what is not a sentence? Papers and Reports on Child Language Development, 22, 23-35.

    Abstract

    The theory that language acquisition is guided and constrained by inborn linguistic knowledge is assessed. Specifically, the "no negative evidence" view, the belief that linguistic theory should be restricted in such a way that the grammars it allows can be learned by children on the basis of positive evidence only, is explored. Child language data are cited in order to investigate influential innatist approaches to language acquisition. Baker's view that children are innately constrained in significant ways with respect to language acquisition is evaluated. Evidence indicates that children persistently make overgeneralizations of the sort that violate the constrained view of language acquisition. Since children eventually do develop correct adult grammar, they must have other mechanisms for cutting back on these overgeneralizations. Thus, any hypothesized constraints cannot be justified on grounds that without them the child would end up with overly general grammar. It is necessary to explicate the mechanisms by which children eliminate their tendency toward overgeneralization.
  • Bowerman, M., Gullberg, M., Majid, A., & Narasimhan, B. (2004). Put project: The cross-linguistic encoding of placement events. In A. Majid (Ed.), Field Manual Volume 9 (pp. 10-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492916.

    Abstract

    How similar are the event concepts encoded by different languages? So far, few event domains have been investigated in any detail. The PUT project extends the systematic cross-linguistic exploration of event categorisation to a new domain, that of placement events (putting things in places and removing them from places). The goal of this task is to explore cross-linguistic universality and variability in the semantic categorisation of placement events (e.g., ‘putting a cup on the table’).

    Additional information

    2004_Put_project_video_stimuli.zip
  • Li, P., & Bowerman, M. (1998). The acquisition of lexical and grammatical aspect in Chinese. First Language, 18, 311-350. doi:10.1177/014272379801805404.

    Abstract

    This study reports three experiments on how children learning Mandarin Chinese comprehend and use aspect markers. These experiments examine the role of lexical aspect in children's acquisition of grammatical aspect. Results provide converging evidence for children's early sensitivity to (1) the association between atelic verbs and the imperfective aspect markers zai, -zhe, and -ne, and (2) the association between telic verbs and the perfective aspect marker -le. Children did not show a sensitivity in their use or understanding of aspect markers to the difference between stative and activity verbs or between semelfactive and activity verbs. These results are consistent with Slobin's (1985) basic child grammar hypothesis that the contrast between process and result is important in children's early acquisition of temporal morphology. In contrast, they are inconsistent with Bickerton's (1981, 1984) language bioprogram hypothesis that the distinctions between state and process and between punctual and nonpunctual are preprogrammed into language learners. We suggest new ways of looking at the results in the light of recent probabilistic hypotheses that emphasize the role of input, prototypes and connectionist representations.
  • Braun, B., & Chen, A. (2008). Now move X into cell Y: intonation of 'now' in on-line reference resolution. In P. Barbosa, S. Madureira, & C. Reis (Eds.), Proceedings of the 4th International Conference on Speech Prosody (pp. 477-480). Campinas: Editora RG/CNPq.

    Abstract

    Prior work has shown that listeners efficiently exploit prosodic information both in the discourse referent and in the preceding modifier to identify the referent. This study investigated whether listeners make use of prosodic information prior to the ENTIRE referential expression, i.e. the intonational realization of the adverb 'now', to identify the upcoming referent. The adverb ‘now’ can be used to draw attention to contrasting information in the sentence (e.g., ‘Put the book on the bookshelf. Now put the pen on the bookshelf.’). It has been shown for Dutch that nu ('now') is realized prosodically differently in different information structural contexts, though certain realizations occur across information structural contexts. In an eye-tracking experiment we tested two hypotheses regarding the role of the intonation of nu in online reference resolution in Dutch: the “irrelevant intonation” hypothesis, whereby listeners make no use of the intonation of nu, vs. the “linguistic intonation” hypothesis, whereby listeners are sensitive to the conditional probabilities between different intonational realizations of nu and the referent. Our findings show that listeners employ the intonation of nu to identify the upcoming referent. They are misled by an accented nu but correctly interpret an unaccented nu as referring to a new, unmentioned entity.
  • Braun, B., Lemhöfer, K., & Cutler, A. (2008). English word stress as produced by English and Dutch speakers: The role of segmental and suprasegmental differences. In Proceedings of Interspeech 2008 (pp. 1953-1953).

    Abstract

    It has been claimed that Dutch listeners use suprasegmental cues (duration, spectral tilt) more than English listeners in distinguishing English word stress. We tested whether this asymmetry also holds in production, comparing the realization of English word stress by native English speakers and Dutch speakers. Results confirmed that English speakers centralize unstressed vowels more, while Dutch speakers of English make more use of suprasegmental differences.
  • Braun, B., Tagliapietra, L., & Cutler, A. (2008). Contrastive utterances make alternatives salient: Cross-modal priming evidence. In Proceedings of Interspeech 2008 (pp. 69-69).

    Abstract

    Sentences with contrastive intonation are assumed to presuppose contextual alternatives to the accented elements. Two cross-modal priming experiments tested in Dutch whether such contextual alternatives are automatically available to listeners. Contrastive associates – but not non-contrastive associates – were facilitated only when primes were produced in sentences with contrastive intonation, indicating that contrastive intonation makes unmentioned contextual alternatives immediately available. Possibly, contrastive contours trigger a “presupposition resolution mechanism” by which these alternatives become salient.
  • De Bree, E., Van Alphen, P. M., Fikkert, P., & Wijnen, F. (2008). Metrical stress in comprehension and production of Dutch children at risk of dyslexia. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings of the 32nd Annual Boston University Conference on Language Development (pp. 60-71). Somerville, Mass: Cascadilla Press.

    Abstract

    The present study compared the role of metrical stress in comprehension and production of three-year-old children with a familial risk of dyslexia with that of normally developing children, to further explore the phonological deficit in dyslexia. A visual fixation task with stress (mis-)matches in bisyllabic words and a non-word repetition task with bisyllabic targets were presented to the control and at-risk children. Results show that the at-risk group was less sensitive to stress mismatches in word recognition than the control group. Correct production of metrical stress patterns did not differ significantly between the groups, but the percentages of phonemes produced correctly were lower for the at-risk than the control group. These findings suggest that processing of metrical stress is not impaired in at-risk children, but that this group cannot exploit metrical stress in word recognition. This study demonstrates the importance of including suprasegmental skills in dyslexia research.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Incremental interpretation in the first and second language. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 109-122). Somerville, MA: Cascadilla Press.
  • Brehm, L., Taschenberger, L., & Meyer, A. S. (2019). Mental representations of partner task cause interference in picture naming. Acta Psychologica, 199: 102888. doi:10.1016/j.actpsy.2019.102888.

    Abstract

    Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Speaker-specific processing of anomalous utterances. Quarterly Journal of Experimental Psychology, 72(4), 764-778. doi:10.1177/1747021818765547.

    Abstract

    Existing work shows that readers often interpret grammatical errors (e.g., The key to the cabinets *were shiny) and sentence-level blends (“without-blend”: Claudia left without her headphones *off) in a non-literal fashion, inferring that a more frequent or more canonical utterance was intended instead. This work examines how interlocutor identity affects the processing and interpretation of anomalous sentences. We presented anomalies in the context of “emails” attributed to various writers in a self-paced reading paradigm and used comprehension questions to probe how sentence interpretation changed based upon properties of the item and properties of the “speaker.” Experiment 1 compared standardised American English speakers to L2 English speakers; Experiment 2 compared the same standardised English speakers to speakers of a non-Standardised American English dialect. Agreement errors and without-blends both led to more non-literal responses than comparable canonical items. For agreement errors, more non-literal interpretations also occurred when sentences were attributed to speakers of Standardised American English than either non-Standardised group. These data suggest that understanding sentences relies on expectations and heuristics about which utterances are likely. These are based upon experience with language, with speaker-specific differences, and upon more general cognitive biases.

    Additional information

    Supplementary material
  • Brennan, J. R., & Martin, A. E. (2019). Phase synchronization varies systematically with linguistic structure composition. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1791): 20190305. doi:10.1098/rstb.2019.0305.

    Abstract

    Computation in neuronal assemblies is putatively reflected in the excitatory and inhibitory cycles of activation distributed throughout the brain. In speech and language processing, coordination of these cycles resulting in phase synchronization has been argued to reflect the integration of information on different timescales (e.g. segmenting acoustic signals into phonemic and syllabic representations; Giraud and Poeppel 2012 Nat. Neurosci. 15, 511 (doi:10.1038/nn.3063)). A natural extension of this claim is that phase synchronization functions similarly to support the inference of more abstract higher-level linguistic structures (Martin 2016 Front. Psychol. 7, 120; Martin and Doumas 2017 PLoS Biol. 15, e2000663 (doi:10.1371/journal.pbio.2000663); Martin and Doumas 2019 Curr. Opin. Behav. Sci. 29, 77–83 (doi:10.1016/j.cobeha.2019.04.008)). Hale et al. (2018, Finding syntax in human encephalography with beam search. arXiv 1806.04127 (http://arxiv.org/abs/1806.04127)) showed that syntactically driven parsing decisions predict electroencephalography (EEG) responses in the time domain; here we ask whether phase synchronization in the form of either inter-trial phase coherence or cross-frequency coupling (CFC) between high-frequency (i.e. gamma) bursts and lower-frequency carrier signals (i.e. delta, theta) changes as the linguistic structures of compositional meaning (viz., bracket completions, as denoted by the onset of words that complete phrases) accrue. We use a naturalistic story-listening EEG dataset from Hale et al. to assess the relationship between linguistic structure and phase alignment. We observe increased phase synchronization as a function of phrase counts in the delta, theta, and gamma bands, especially for function words. A more complex pattern emerged for CFC as phrase count changed, possibly related to the lack of a one-to-one mapping between ‘size’ of linguistic structure and frequency band—an assumption that is tacit in recent frameworks. These results emphasize the important role that phase synchronization, desynchronization, and thus inhibition play in the construction of compositional meaning by distributed neural networks in the brain.
  • Broeder, D., Brugman, H., Oostdijk, N., & Wittenburg, P. (2004). Towards Dynamic Corpora: Workshop on compiling and processing spoken corpora. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 59-62). Paris: European Language Resources Association.
  • Broeder, D., Wittenburg, P., & Crasborn, O. (2004). Using Profiles for IMDI Metadata Creation. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 1317-1320). Paris: European Language Resources Association.
  • Broeder, D., Declerck, T., Romary, L., Uneson, M., Strömqvist, S., & Wittenburg, P. (2004). A large metadata domain of language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., Nathan, D., Strömqvist, S., & Van Veenendaal, R. (2008). Building a federation of Language Resource Repositories: The DAM-LR project and its continuation within CLARIN. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The DAM-LR project aims at virtually integrating various European language resource archives so that users can navigate and operate in a single unified domain of language resources. This type of integration introduces Grid technology to the humanities disciplines and forms a federation of archives. The complete architecture is designed on the basis of a few well-known components. This is considered the basis for building a research infrastructure for Language Resources, as planned within the CLARIN project. The DAM-LR project was purposefully started with only a small number of participants, for flexibility and to avoid complex contract negotiations with respect to legal issues. Now that we have gained insight into the basic technological and organizational issues, it is foreseen that the federation will be expanded considerably within the CLARIN project, which will also address the associated legal issues.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broeder, D., Declerck, T., Hinrichs, E., Piperidis, S., Romary, L., Calzolari, N., & Wittenburg, P. (2008). Foundation of a component-based flexible registry for language resources and technology. In N. Calzolari (Ed.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008) (pp. 1433-1436). European Language Resources Association (ELRA).

    Abstract

    Within the CLARIN e-science infrastructure project it is foreseen to develop a component-based registry for metadata for Language Resources and Language Technology. With this registry we hope to overcome the problems of currently available systems with respect to inflexible fixed schemas, unsuitable terminology and interoperability. The registry will address interoperability needs by referring to a shared vocabulary registered in data category registries as suggested by ISO.
  • Broeder, D., Auer, E., Kemps-Snijders, M., Sloetjes, H., Wittenburg, P., & Zinn, C. (2008). Managing very large multimedia archives and their integration into federations. In P. Manghi, P. Pagano, & P. Zezula (Eds.), First Workshop in Very Large Digital Libraries (VLDL 2008).
  • Broersma, M., & Cutler, A. (2008). Phantom word activation in L2. System, 36(1), 22-34. doi:10.1016/j.system.2007.11.003.

    Abstract

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two studies illustrating this. First, in a lexical decision study, L2 listeners accepted (but L1 listeners did not accept) spoken non-words such as groof or flide as real English words. Second, a priming study demonstrated that the same spoken non-words made recognition of the real words groove, flight easier for L2 (but not L1) listeners, suggesting that, for the L2 listeners only, these real words had been activated by the spoken non-word input. We propose that further understanding of the activation and competition process in L2 lexical processing could lead to new understanding of L2 listening difficulty.
  • Broersma, M. (2008). Flexible cue use in nonnative phonetic categorization (L). Journal of the Acoustical Society of America, 124(2), 712-715. doi:10.1121/1.2940578.

    Abstract

    Native and nonnative listeners categorized final /v/ versus /f/ in English nonwords. Fricatives followed phonetically long vowels (originally /v/-preceding) or short vowels (originally /f/-preceding). Vowel duration was constant for each participant and sometimes mismatched other voicing cues. Previous results showed that English listeners, but not Dutch listeners (whose L1 has no final voicing contrast), nevertheless used the misleading vowel duration for /v/-/f/ categorization. New analyses showed that Dutch listeners did use vowel duration initially, but quickly reduced its use, whereas the English listeners used it consistently throughout the experiment. Thus, nonnative listeners adapted to the stimuli more flexibly than native listeners did.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjijn Printing Co.
  • Brouwer, S., Cornips, L., & Hulk, A. (2008). Misrepresentation of Dutch neuter gender in older bilingual children? In B. Hazdenar, & E. Gavruseva (Eds.), Current trends in child second language acquisition: A generative perspective (pp. 83-96). Amsterdam: Benjamins.
  • Brown, P. (2004). Position and motion in Tzeltal frog stories: The acquisition of narrative style. In S. Strömqvist, & L. Verhoeven (Eds.), Relating events in narrative: Typological and contextual perspectives (pp. 37-57). Mahwah: Erlbaum.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, motion is not generally encoded barebones, but vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in frog narratives from 14 Tzeltal adults and 21 children, looks at their development in the narratives of children between the ages of 4 and 12, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P. (2008). Up, down, and across the land: Landscape terms and place names in Tzeltal. Language Sciences, 30(2/3), 151-181. doi:10.1016/j.langsci.2006.12.003.

    Abstract

    The Tzeltal language is spoken in a mountainous region of southern Mexico by some 280,000 Mayan corn farmers. This paper focuses on landscape and place vocabulary in the Tzeltal municipio of Tenejapa, where speakers use an absolute system of spatial reckoning based on the overall uphill (southward)/downhill (northward) slope of the land. The paper examines the formal and functional properties of the Tenejapa Tzeltal vocabulary labelling features of the local landscape and relates it to spatial vocabulary for describing locative relations, including the uphill/downhill axis for spatial reckoning as well as body part terms for specifying parts of locative grounds. I then examine the local place names, discuss their semantic and morphosyntactic properties, and relate them to the landscape vocabulary, to spatial vocabulary, and also to cultural narratives about events associated with particular places. I conclude with some observations on the determinants of landscape and place terminology in Tzeltal, and what this vocabulary and how it is used reveal about the conceptualization of landscape and places.
  • Brown, P. (2008). Verb specificity and argument realization in Tzeltal child language. In M. Bowerman, & P. Brown (Eds.), Crosslinguistic perspectives on argument structure: Implications for learnability (pp. 167-189). Mahwah, NJ: Erlbaum.

    Abstract

    How do children learn a language whose arguments are freely ellipsed? The Mayan language Tzeltal, spoken in southern Mexico, is such a language. The acquisition pattern for Tzeltal is distinctive, in at least two ways: verbs predominate even in children’s very early production vocabulary, and these verbs are often very specific in meaning. This runs counter to the patterns found in most Indo-European languages, where nouns tend to predominate in early vocabulary and children’s first verbs tend to be ‘light’ or semantically general. Here I explore the idea that noun ellipsis and ‘heavy’ verbs are related: the ‘heavy’ verbs restrict the nominal reference and so allow recovery of the ‘missing’ nouns. Using data drawn from videotaped interaction of four Tzeltal children and their caregivers, I examined transitive clauses in an adult input sample and in child speech, and tested the hypothesis that direct object arguments are less likely to be realized overtly with semantically specific verbs than with general verbs. This hypothesis was confirmed, both for the adult input and for the speech of the children (aged 3;4-3;9). It is therefore possible that argument ellipsis could provide a clue to verb semantics (specific vs. general) for the Tzeltal child.
  • Brown, P. (1998). Children's first verbs in Tzeltal: Evidence for an early verb category. Linguistics, 36(4), 713-753.

    Abstract

    A major finding in studies of early vocabulary acquisition has been that children tend to learn a lot of nouns early but make do with relatively few verbs, among which semantically general-purpose verbs like do, make, get, have, give, come, go, and be play a prominent role. The preponderance of nouns is explained in terms of nouns labelling concrete objects being “easier” to learn than verbs, which label relational categories. Nouns label “natural categories” observable in the world, verbs label more linguistically and culturally specific categories of events linking objects belonging to such natural categories (Gentner 1978, 1982; Clark 1993). This view has been challenged recently by data from children learning certain non-Indo-European languages like Korean, where children have an early verb explosion and verbs dominate in early child utterances. Children learning the Mayan language Tzeltal also acquire verbs early, prior to any noun explosion as measured by production. Verb types are roughly equivalent to noun types in children’s beginning production vocabulary and soon outnumber them. At the one-word stage children’s verbs mostly have the form of a root stripped of affixes, correctly segmented despite structural difficulties. Quite early (before the MLU 2.0 point) there is evidence of productivity of some grammatical markers (although they are not always present): the person-marking affixes cross-referencing core arguments, and the completive/incompletive aspectual distinctions. The Tzeltal facts argue against a natural-categories explanation for children’s early vocabulary, in favor of a view emphasizing the early effects of language-specific properties of the input. They suggest that when and how a child acquires a “verb” category is centrally influenced by the structural properties of the input, and that the semantic structure of the language - where the referential load is concentrated - plays a fundamental role in addition to distributional facts.
  • Brown, P. (1998). Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech. Journal of Linguistic Anthropology, 8(2), 197-221. doi:10.1525/jlin.1998.8.2.197.

    Abstract

    When Tzeltal children in the Mayan community of Tenejapa, in southern Mexico, begin speaking, their production vocabulary consists predominantly of verb roots, in contrast to the dominance of nouns in the initial vocabulary of first‐language learners of Indo‐European languages. This article proposes that a particular Tzeltal conversational feature—known in the Mayanist literature as "dialogic repetition"—provides a context that facilitates the early analysis and use of verbs. Although Tzeltal babies are not treated by adults as genuine interlocutors worthy of sustained interaction, dialogic repetition in the speech the children are exposed to may have an important role in revealing to them the structural properties of the language, as well as in socializing the collaborative style of verbal interaction adults favor in this community.
  • Brown, P. (1998). Early Tzeltal verbs: Argument structure and argument representation. In E. Clark (Ed.), Proceedings of the 29th Annual Stanford Child Language Research Forum (pp. 129-140). Stanford: CSLI Publications.

    Abstract

    The surge of research activity focussing on children's acquisition of verbs (e.g., Tomasello and Merriman 1996) addresses some fundamental questions: Just how variable across languages, and across individual children, is the process of verb learning? How specific are arguments to particular verbs in early child language? How does the grammatical category 'Verb' develop? The position of Universal Grammar, that a verb category is early, contrasts with that of Tomasello (1992), Pine and Lieven and their colleagues (1996, in press), and many others, that children develop a verb category slowly, gradually building up subcategorizations of verbs around pragmatic, syntactic, and semantic properties of the language they are exposed to. On this latter view, one would expect the language which the child is learning, the cultural milieu and the nature of the interactions in which the child is engaged, to influence the process of acquiring verb argument structures. This paper explores these issues by examining the development of argument representation in the Mayan language Tzeltal, in both its lexical and verbal cross-referencing forms, and analyzing the semantic and pragmatic factors influencing the form argument representation takes. Certain facts about Tzeltal (the ergative/ absolutive marking, the semantic specificity of transitive and positional verbs) are proposed to affect the representation of arguments. The first 500 multimorpheme combinations of 3 children (aged between 1;8 and 2;4) are examined. It is argued that there is no evidence of semantically light 'pathbreaking' verbs (Ninio 1996) leading the way into word combinations. There is early productivity of cross-referencing affixes marking A, S, and O arguments (although there are systematic omissions). The paper assesses the respective contributions of three kinds of factors to these results - structural (regular morphology), semantic (verb specificity) and pragmatic (the nature of Tzeltal conversational interaction).
  • Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.

    Abstract

    This paper presents evidence on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing.
  • Brown, P. (1999). Anthropologie cognitive. Anthropologie et Sociétés, 23(3), 91-119.

    Abstract

    In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition.
  • Brown, P. (1983). [Review of the book Conversational routine: Explorations in standardized communication situations and prepatterned speech ed. by Florian Coulmas]. Language, 59, 215-219.
  • Brown, P. (1983). [Review of the books Mayan Texts I, II, and III ed. by Louanna Furbee-Losee]. International Journal of American Linguistics, 49, 337-341.
  • Brown, A., & Gullberg, M. (2008). Bidirectional crosslinguistic influence in L1-L2 encoding of manner in speech and gesture. Studies in Second Language Acquisition, 30(2), 225-251. doi:10.1017/S0272263108080327.

    Abstract

    Whereas most research in SLA assumes the relationship between the first language (L1) and the second language (L2) to be unidirectional, this study investigates the possibility of a bidirectional relationship. We examine the domain of manner of motion, in which monolingual Japanese and English speakers differ both in speech and gesture. Parallel influences of the L1 on the L2 and the L2 on the L1 were found in production from native Japanese speakers with intermediate knowledge of English. These effects, which were strongest in gesture patterns, demonstrate that (a) bidirectional interaction between languages in the multilingual mind can occur even with intermediate proficiency in the L2 and (b) gesture analyses can offer insights on interactions between languages beyond those observed through analyses of speech alone.
  • Brown, P. (1998). [Review of the book by A.J. Wootton, Interaction and the development of mind]. Journal of the Royal Anthropological Institute, 4(4), 816-817.
  • Brown, A. (2008). Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker. Gesture, 8(2), 256-276. doi:10.1075/gest.8.2.08bro.

    Abstract

    Abundant evidence across languages, structures, proficiencies, and modalities shows that properties of first languages influence performance in second languages. This paper presents an alternative perspective on the interaction between established and emerging languages within second language speakers by arguing that an L2 can influence an L1, even at relatively low proficiency levels. Analyses of the gesture viewpoint employed in English and Japanese descriptions of motion events revealed systematic between-language and within-language differences. Monolingual Japanese speakers used significantly more Character Viewpoint than monolingual English speakers, who predominantly employed Observer Viewpoint. In their L1 and their L2, however, native Japanese speakers with intermediate knowledge of English patterned more like the monolingual English speakers than their monolingual Japanese counterparts. After controlling for effects of cultural exposure, these results offer valuable insights into both the nature of cross-linguistic interactions within individuals and potential factors underlying gesture viewpoint.
  • Brown, P., & Levinson, S. C. (2004). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In A. Assmann, U. Gaier, & G. Trommsdorff (Eds.), Zwischen Literatur und Anthropologie: Diskurse, Medien, Performanzen (pp. 285-314). Tübingen: Gunter Narr.

    Abstract

    This is a reprint of the Brown and Levinson 2000 article.
  • Brown, P., Levinson, S. C., & Senft, G. (2004). Initial references to persons and places. In A. Majid (Ed.), Field Manual Volume 9 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492929.

    Abstract

    This task has two parts: (i) video-taped elicitation of the range of possibilities for referring to persons and places, and (ii) observations of (first) references to persons and places in video-taped natural interaction. The goal of this task is to establish the repertoires of referential terms (and other practices) used for referring to persons and to places in particular languages and cultures, and provide examples of situated use of these kinds of referential practices in natural conversation. This data will form the basis for cross-language comparison, and for formulating hypotheses about general principles underlying the deployment of such referential terms in natural language usage.
  • Brown, P., Gaskins, S., Lieven, E., Striano, T., & Liszkowski, U. (2004). Multimodal multiperson interaction with infants aged 9 to 15 months. In A. Majid (Ed.), Field Manual Volume 9 (pp. 56-63). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492925.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver-infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (1998). La identificación de las raíces verbales en Tzeltal (Maya): Cómo lo hacen los niños? Función, 17-18, 121-146.

    Abstract

    This is a Spanish translation of Brown 1997.
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P. (1999). Repetition [Encyclopedia entry for 'Lexicon for the New Millennium', ed. Alessandro Duranti]. Journal of Linguistic Anthropology, 9(2), 223-226. doi:10.1525/jlin.1999.9.1-2.223.

    Abstract

    This is an encyclopedia entry describing conversational and interactional uses of linguistic repetition.
  • Brown, C. M., & Hagoort, P. (1999). The cognitive neuroscience of language: Challenges and future directions. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 3-14). Oxford: Oxford University Press.
  • Brown, P. (1991). Sind Frauen höflicher? Befunde aus einer Maya-Gemeinde. In S. Günther, & H. Kotthoff (Eds.), Von fremden Stimmen: Weibliches und männliches Sprechen im Kulturvergleich. Frankfurt am Main: Suhrkamp.

    Abstract

    This is a German translation of Brown 1980, How and why are women more polite: Some evidence from a Mayan community.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P., & Levinson, S. C. (1999). Politeness: Some universals in language usage [Reprint]. In A. Jaworski, & N. Coupland (Eds.), The discourse reader (pp. 321-335). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown-Schmidt, S., & Konopka, A. E. (2008). Little houses and casas pequeñas: Message formulation and syntactic form in unscripted speech with speakers of English and Spanish. Cognition, 109(2), 274-280. doi:10.1016/j.cognition.2008.07.011.

    Abstract

    During unscripted speech, speakers coordinate the formulation of pre-linguistic messages with the linguistic processes that implement those messages into speech. We examine the process of constructing a contextually appropriate message and interfacing that message with utterance planning in English (the small butterfly) and Spanish (la mariposa pequeña) during an unscripted, interactive task. The coordination of gaze and speech during formulation of these messages is used to evaluate two hypotheses regarding the lower limit on the size of message planning units, namely whether messages are planned in units isomorphous to entire phrases or units isomorphous to single lexical items. Comparing the planning of fluent pre-nominal adjectives in English and post-nominal adjectives in Spanish showed that size information is added to the message later in Spanish than English, suggesting that speakers can prepare pre-linguistic messages in lexically-sized units. The results also suggest that the speaker can use disfluency to coordinate the transition from thought to speech.
  • Bruggeman, L., & Cutler, A. (2019). The dynamics of lexical activation and competition in bilinguals’ first versus second language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1342-1346). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Speech input causes listeners to activate multiple candidate words which then compete with one another. These include onset competitors, that share a beginning (bumper, butter), but also, counterintuitively, rhyme competitors, sharing an ending (bumper, jumper). In L1, competition is typically stronger for onset than for rhyme. In L2, onset competition has been attested but rhyme competition has heretofore remained largely unexamined. We assessed L1 (Dutch) and L2 (English) word recognition by the same late-bilingual individuals. In each language, eye gaze was recorded as listeners heard sentences and viewed sets of drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns revealed substantial onset competition but no significant rhyme competition in either L1 or L2. Rhyme competition may thus be a “luxury” feature of maximally efficient listening, to be abandoned when resources are scarcer, as in listening by late bilinguals, in either language.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • Brugman, H., Malaisé, V., & Hollink, L. (2008). A common multimedia annotation framework for cross linking cultural heritage digital collections. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    In the context of the CATCH research program, currently carried out at a number of large Dutch cultural heritage institutions, our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As a first step we designed an Annotation Meta Model (AMM): a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other’s annotation data as the primary data for further annotation. On the basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of the model, API and repository when developing web applications for collection managers in cultural heritage institutions.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Burenhult, N. (2008). Spatial coordinate systems in demonstrative meaning. Linguistic Typology, 12(1), 99-142. doi:10.1515/LITY.2008.032.

    Abstract

    Exploring the semantic encoding of a group of crosslinguistically uncommon “spatial-coordinate demonstratives”, this work establishes the existence of demonstratives whose function is to project angular search domains, thus invoking proper coordinate systems (or “frames of reference”). What is special about these distinctions is that they rely on a spatial asymmetry in relativizing a demonstrative referent (representing the Figure) to the deictic center (representing the Ground). A semantic typology of such demonstratives is constructed based on the nature of the asymmetries they employ. A major distinction is proposed between asymmetries outside the deictic Figure-Ground array (e.g., features of the larger environment) and those within it (e.g., facets of the speaker/addressee dyad). A unique system of the latter type, present in Jahai, an Aslian (Mon-Khmer) language spoken by groups of hunter-gatherers in the Malay Peninsula, is introduced and explored in detail using elicited data as well as natural conversational data captured on video. Although crosslinguistically unusual, spatial-coordinate demonstratives sit at the interface of issues central to current discourse in semantic-pragmatic theory: demonstrative function, deictic layout, and spatial frames of reference.
  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N. (2008). Streams of words: Hydrological lexicon in Jahai. Language Sciences, 30(2/3), 182-199. doi:10.1016/j.langsci.2006.12.005.

    Abstract

    This article investigates hydrological lexicon in Jahai, a Mon-Khmer language of the Malay Peninsula. Setting out from an analysis of the structural and semantic properties as well as the indigenous vs. borrowed origin of lexicon related to drainage, it teases out a set of distinct lexical systems for reference to and description of hydrological features. These include (1) indigenous nominal labels subcategorised by metaphor, (2) borrowed nominal labels, (3) verbals referring to properties and processes of water, (4) a set of motion verbs, and (5) place names. The lexical systems, functionally diverse and driven by different factors, illustrate that principles and strategies of geographical categorisation can vary systematically and profoundly within a single language.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., & Levinson, S. C. (2008). Language and landscape: A cross-linguistic perspective. Language Sciences, 30(2/3), 135-150. doi:10.1016/j.langsci.2006.12.028.

    Abstract

    This special issue is the outcome of collaborative work on the relationship between language and landscape, carried out in the Language and Cognition Group at the Max Planck Institute for Psycholinguistics. The contributions explore the linguistic categories of landscape terms and place names in nine genetically, typologically and geographically diverse languages, drawing on data from first-hand fieldwork. The present introductory article lays out the reasons why the domain of landscape is of central interest to the language sciences and beyond, and it outlines some of the major patterns that emerge from the cross-linguistic comparison which the papers invite. The data point to considerable variation within and across languages in how systems of landscape terms and place names are ontologised. This has important implications for practical applications from international law to modern navigation systems.
  • Burenhult, N. (Ed.). (2008). Language and landscape: Geographical ontology in cross-linguistic perspective [Special Issue]. Language Sciences, 30(2/3).
  • Burenkova, O. V., & Fisher, S. E. (2019). Genetic insights into the neurobiology of speech and language. In E. Grigorenko, Y. Shtyrov, & P. McCardle (Eds.), All About Language: Science, Theory, and Practice. Baltimore, MD: Paul Brookes Publishing, Inc.
  • Burkhardt, P., Avrutin, S., Piñango, M. M., & Ruigendijk, E. (2008). Slower-than-normal syntactic processing in agrammatic Broca's aphasia: Evidence from Dutch. Journal of Neurolinguistics, 21(2), 120-137. doi:10.1016/j.jneuroling.2006.10.004.

    Abstract

    Studies of agrammatic Broca's aphasia reveal a diverging pattern of performance in the comprehension of reflexive elements: offline, performance seems unimpaired, whereas online—and in contrast to both matching controls and Wernicke's patients—no antecedent reactivation is observed at the reflexive. Here we propose that this difference characterizes the agrammatic comprehension deficit as a result of slower-than-normal syntactic structure formation. To test this characterization, the comprehension of three Dutch agrammatic patients and matching control participants was investigated utilizing the cross-modal lexical decision (CMLD) interference task. Two types of reflexive-antecedent dependencies were tested, which have already been shown to exert distinct processing demands on the comprehension system as a function of the level at which the dependency was formed. Our hypothesis predicts that if the agrammatic system has a processing limitation such that syntactic structure is built in a protracted manner, this limitation will be reflected in delayed interpretation. Confirming previous findings, the Dutch patients show an effect of distinct processing demands for the two types of reflexive-antecedent dependencies but with a temporal delay. We argue that this delayed syntactic structure formation is the result of limited processing capacity that specifically affects the syntactic system.
  • Burkhardt, P. (2008). Two types of definites: Evidence for presupposition cost. In A. Grønn (Ed.), Proceedings of SuB 12 (pp. 66-80). Oslo: ILOS.

    Abstract

    This paper investigates the notion of definiteness from a psycholinguistic perspective and addresses Löbner’s (1987) distinction between semantic and pragmatic definites. To this end inherently definite noun phrases, proper names, and indexicals are investigated as instances of (relatively) rigid designators (i.e. semantic definites) and contrasted with definite noun phrases and third person pronouns that are contingent on context to unambiguously determine their reference (i.e. pragmatic definites). Electrophysiological data provide support for this distinction and further substantiate the claim that proper names differ from definite descriptions. These findings suggest that certain expressions carry a feature of inherent definiteness, which facilitates their discourse integration (i.e. semantic definites), while others rely on the establishment of a relation with prior information, which results in processing cost.
  • Burkhardt, P. (2008). What inferences can tell us about the given-new distinction. In Proceedings of the 18th International Congress of Linguists (pp. 219-220).
  • Burkhardt, P. (2008). Dependency precedes independence: Online evidence from discourse processing. In A. Benz, & P. Kühnlein (Eds.), Constraints in discourse (pp. 141-158). Amsterdam: Benjamins.

    Abstract

    This paper investigates the integration of definite determiner phrases (DPs) as a function of their contextual salience, which is reflected in the degree of dependency on prior information. DPs depend on previously established discourse referents or introduce a new, independent discourse referent. This paper presents a formal model that explains how discourse referents are represented in the language system and what kind of mechanisms are implemented during DP interpretation. Experimental data from an event-related potential study are discussed that demonstrate how definite DPs are integrated in real-time processing. The data provide evidence for two distinct mechanisms – Specify R and Establish Independent File Card – and substantiate a model that includes various processes and constraints at the level of discourse representation.
  • Burra, N., Hervais-Adelman, A., Celeghin, A., de Gelder, B., & Pegna, A. J. (2019). Affective blindsight relies on low spatial frequencies. Neuropsychologia, 128, 44-49. doi:10.1016/j.neuropsychologia.2017.10.009.

    Abstract

    The human brain can process facial expressions of emotions rapidly and without awareness. Several studies in patients with damage to their primary visual cortices have shown that they may be able to guess the emotional expression on a face despite their cortical blindness. This non-conscious processing, called affective blindsight, may arise through an intact subcortical visual route that leads from the superior colliculus to the pulvinar, and thence to the amygdala. This pathway is thought to process the crude visual information conveyed by the low spatial frequencies of the stimuli.

    In order to investigate whether this is the case, we studied a patient (TN) with bilateral cortical blindness and affective blindsight. An fMRI paradigm was performed in which fearful and neutral expressions were presented using faces that were either unfiltered, or filtered to remove high or low spatial frequencies. Unfiltered fearful faces produced right amygdala activation although the patient was unaware of the presence of the stimuli. More importantly, the low spatial frequency components of fearful faces continued to produce right amygdala activity while the high spatial frequency components did not. Our findings thus confirm that the visual information present in the low spatial frequencies is sufficient to produce affective blindsight, further suggesting that its existence could rely on the subcortical colliculo-pulvino-amygdalar pathway.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either did or did not allow awareness of phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in the addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated when comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., & Sirigu, A. (2008). Neural Bases of Sequence Processing in Action and Language. Language Learning, 58(1), 179-199. doi:10.1111/j.1467-9922.2008.00470.x.

    Abstract

    Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for planning contiguous discourse segments and syntactic patterns in language. The neural encoding of sequential event knowledge and its domain dependency is a central issue in cognitive neuroscience. Converging evidence shows the involvement of a dedicated neural substrate, including the prefrontal cortex and Broca's area, in the representation and the processing of sequential event structure. After reviewing major representational models of sequential mechanisms in action and language, we discuss relevant neuropsychological and neuroimaging findings on the temporal organization of sequencing and sequence processing in both domains, suggesting that sequential event knowledge may be modularly organized through prefrontal and frontal subregions.
  • Carrion Castillo, A., Van der Haegen, L., Tzourio-Mazoyer, N., Kavaklioglu, T., Badillo, S., Chavent, M., Saracco, J., Brysbaert, M., Fisher, S. E., Mazoyer, B., & Francks, C. (2019). Genome sequencing for rightward hemispheric language dominance. Genes, Brain and Behavior, 18(5): e12572. doi:10.1111/gbb.12572.

    Abstract

    Most people have left‐hemisphere dominance for various aspects of language processing, but only roughly 1% of the adult population has atypically reversed, rightward hemispheric language dominance (RHLD). The genetic‐developmental program that underlies leftward language laterality is unknown, as are the causes of atypical variation. We performed an exploratory whole‐genome‐sequencing study, with the hypothesis that strongly penetrant, rare genetic mutations might sometimes be involved in RHLD. This was by analogy with situs inversus of the visceral organs (left‐right mirror reversal of the heart, lungs and so on), which is sometimes due to monogenic mutations. The genomes of 33 subjects with RHLD were sequenced and analyzed with reference to large population‐genetic data sets, as well as 34 subjects (14 left‐handed) with typical language laterality. The sample was powered to detect rare, highly penetrant, monogenic effects if they would be present in at least 10 of the 33 RHLD cases and no controls, but no individual genes had mutations in more than five RHLD cases while being un‐mutated in controls. A hypothesis derived from invertebrate mechanisms of left‐right axis formation led to the detection of an increased mutation load, in RHLD subjects, within genes involved with the actin cytoskeleton. The latter finding offers a first, tentative insight into molecular genetic influences on hemispheric language dominance.

    Additional information

    gbb12572-sup-0001-AppendixS1.docx
  • Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind? Memory & Cognition, 36(6), 1047-1056. doi:10.3758/MC.36.6.1047.

    Abstract

    People often describe things that are similar as close and things that are dissimilar as far apart. Does the way people talk about similarity reveal something fundamental about the way they conceptualize it? Three experiments tested the relationship between similarity and spatial proximity that is encoded in metaphors in language. Similarity ratings for pairs of words or pictures varied as a function of how far apart the stimuli appeared on the computer screen, but the influence of distance on similarity differed depending on the type of judgments the participants made. Stimuli presented closer together were rated more similar during conceptual judgments of abstract entities or unseen object properties but were rated less similar during perceptual judgments of visual appearance. These contrasting results underscore the importance of testing predictions based on linguistic metaphors experimentally and suggest that our sense of similarity arises from our ability to combine available perceptual information with stored knowledge of experiential regularities.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 63-79). Oxford: Wiley.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(suppl. 1), 63-79. doi:10.1111/j.1467-9922.2008.00462.x.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D., & Boroditsky, L. (2008). Time in the mind: Using space to think about time. Cognition, 106, 579-593. doi:10.1016/j.cognition.2007.03.004.

    Abstract

    How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people’s more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.
  • Casillas, M., & Cristia, A. (2019). A step-by-step guide to collecting and analyzing long-format speech environment (LFSE) recordings. Collabra, 5(1): 24. doi:10.1525/collabra.209.

    Abstract

    Recent years have seen rapid technological development of devices that can record communicative behavior as participants go about daily life. This paper is intended as an end-to-end methodological guidebook for potential users of these technologies, including researchers who want to study children’s or adults’ communicative behavior in everyday contexts. We explain how long-format speech environment (LFSE) recordings provide a unique view on language use and how they can be used to complement other measures at the individual and group level. We aim to help potential users of these technologies make informed decisions regarding research design, hardware, software, and archiving. We also provide information regarding ethics and implementation, issues that are difficult to navigate for those new to this technology, and on which little or no resources are available. This guidebook offers a concise summary of information for new users and points to sources of more detailed information for more advanced users. Links to discussion groups and community-augmented databases are also provided to help readers stay up-to-date on the latest developments.
  • Casillas, M., Rafiee, A., & Majid, A. (2019). Iranian herbalists, but not cooks, are better at naming odors than laypeople. Cognitive Science, 43(6): e12763. doi:10.1111/cogs.12763.

    Abstract

    Odor naming is enhanced in communities where communication about odors is a central part of daily life (e.g., wine experts, flavorists, and some hunter‐gatherer groups). In this study, we investigated how expert knowledge and daily experience affect the ability to name odors in a group of experts that has not previously been investigated in this context—Iranian herbalists, also called attars—as well as cooks and laypeople. We assessed naming accuracy and consistency for 16 herb and spice odors, collected judgments of odor perception, and evaluated participants' odor meta‐awareness. Participants' responses were overall more consistent and accurate for more frequent and familiar odors. Moreover, attars were more accurate than both cooks and laypeople at naming odors, although cooks did not perform significantly better than laypeople. Attars' perceptual ratings of odors and their overall odor meta‐awareness suggest they are also more attuned to odors than the other two groups. To conclude, Iranian attars—but not cooks—are better odor namers than laypeople. They also have greater meta‐awareness and differential perceptual responses to odors. These findings further highlight the critical role that expertise and type of experience have on olfactory functions.

    Additional information

    Supplementary Materials
  • Castells-Nobau, A., Eidhof, I., Fenckova, M., Brenman-Suttner, D. B., Scheffer-de Gooyert, J. M., Christine, S., Schellevis, R. L., Van der Laan, K., Quentin, C., Van Ninhuijs, L., Hofmann, F., Ejsmont, R., Fisher, S. E., Kramer, J. M., Sigrist, S. J., Simon, A. F., & Schenck, A. (2019). Conserved regulation of neurodevelopmental processes and behavior by FoxP in Drosophila. PLoS One, 14(2): e211652. doi:10.1371/journal.pone.0211652.

    Abstract

    FOXP proteins form a subfamily of evolutionarily conserved transcription factors involved in the development and functioning of several tissues, including the central nervous system. In humans, mutations in FOXP1 and FOXP2 have been implicated in cognitive deficits including intellectual disability and speech disorders. Drosophila exhibits a single ortholog, called FoxP, but due to a lack of characterized mutants, our understanding of the gene remains poor. Here we show that the dimerization property required for mammalian FOXP function is conserved in Drosophila. In flies, FoxP is enriched in the adult brain, showing strong expression in ~1000 neurons of cholinergic, glutamatergic and GABAergic nature. We generate Drosophila loss-of-function mutants and UAS-FoxP transgenic lines for ectopic expression, and use them to characterize FoxP function in the nervous system. At the cellular level, we demonstrate that Drosophila FoxP is required in larvae for synaptic morphogenesis at axonal terminals of the neuromuscular junction and for dendrite development of dorsal multidendritic sensory neurons. In the developing brain, we find that FoxP plays important roles in α-lobe mushroom body formation. Finally, at a behavioral level, we show that Drosophila FoxP is important for locomotion, habituation learning and social space behavior of adult flies. Our work shows that Drosophila FoxP is important for regulating several neurodevelopmental processes and behaviors that are related to human disease or vertebrate disease model phenotypes. This suggests a high degree of functional conservation with vertebrate FOXP orthologues and establishes flies as a model system for understanding FOXP-related pathologies.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Cathomas, F., Azzinnari, D., Bergamini, G., Sigrist, H., Buerge, M., Hoop, V., Wicki, B., Goetze, L., Soares, S. M. P., Kukelova, D., Seifritz, E., Goebbels, S., Nave, K.-A., Ghandour, M. S., Seoighe, C., Hildebrandt, T., Leparc, G., Klein, H., Stupka, E., Hengerer, B., & Pryce, C. R. (2019). Oligodendrocyte gene expression is reduced by and influences effects of chronic social stress in mice. Genes, Brain and Behavior, 18(1): e12475. doi:10.1111/gbb.12475.

    Abstract

    Oligodendrocyte gene expression is downregulated in stress-related neuropsychiatric disorders, including depression. In mice, chronic social stress (CSS) leads to depression-relevant changes in brain and emotional behavior, and the present study shows the involvement of oligodendrocytes in this model. In C57BL/6 (BL/6) mice, RNA-sequencing (RNA-Seq) was conducted with prefrontal cortex, amygdala and hippocampus from CSS and controls; a gene enrichment database for neurons, astrocytes and oligodendrocytes was used to identify cell origin of deregulated genes, and cell deconvolution was applied. To assess the potential causal contribution of reduced oligodendrocyte gene expression to CSS effects, mice heterozygous for the oligodendrocyte gene cyclic nucleotide phosphodiesterase (Cnp1) on a BL/6 background were studied; a 2 genotype (wildtype, Cnp1+/−) × 2 environment (control, CSS) design was used to investigate effects on emotional behavior and amygdala microglia. In BL/6 mice, in prefrontal cortex and amygdala tissue comprising gray and white matter, CSS downregulated expression of multiple oligodendrocyte genes encoding myelin and myelin-axon-integrity proteins, and cell deconvolution identified a lower proportion of oligodendrocytes in amygdala. Quantification of oligodendrocyte proteins in amygdala gray matter did not yield evidence for reduced translation, suggesting that CSS impacts primarily on white matter oligodendrocytes or the myelin transcriptome. In Cnp1 mice, social interaction was reduced by CSS in Cnp1+/− mice specifically; using ionized calcium-binding adaptor molecule 1 (IBA1) expression, microglia activity was increased additively by Cnp1+/− and CSS in amygdala gray and white matter. This study provides back-translational evidence that oligodendrocyte changes are relevant to the pathophysiology and potentially the treatment of stress-related neuropsychiatric disorders.
  • Cattani, A., Floccia, C., Kidd, E., Pettenati, P., Onofrio, D., & Volterra, V. (2019). Gestures and words in naming: Evidence from crosslinguistic and crosscultural comparison. Language Learning, 69(3), 709-746. doi:10.1111/lang.12346.

    Abstract

    We report on an analysis of spontaneous gesture production in 2‐year‐old children who come from three countries (Italy, United Kingdom, Australia) and who speak two languages (Italian, English), in an attempt to tease apart the influence of language and culture when comparing children from different cultural and linguistic environments. Eighty‐seven monolingual children aged 24–30 months completed an experimental task measuring their comprehension and production of nouns and predicates. The Italian children scored significantly higher than the other groups on all lexical measures. With regard to gestures, British children produced significantly fewer pointing and speech combinations compared to Italian and Australian children, who did not differ from each other. In contrast, Italian children produced significantly more representational gestures than the other two groups. We conclude that spoken language development is primarily influenced by the input language over gesture production, whereas the combination of cultural and language environments affects gesture production.
  • Chang, Y.-N., Monaghan, P., & Welbourne, S. (2019). A computational model of reading across development: Effects of literacy onset on language processing. Journal of Memory and Language, 108: 104025. doi:10.1016/j.jml.2019.05.003.

    Abstract

    Cognitive development is shaped by interactions between cognitive architecture and environmental experiences of the growing brain. We examined the extent to which this interaction during development could be observed in language processing. We focused on age of acquisition (AoA) effects in reading, where early-learned words tend to be processed more quickly and accurately relative to later-learned words. We implemented a computational model including representations of print, sound and meaning of words, with training based on children’s gradual exposure to language. The model produced AoA effects in reading and lexical decision, replicating the larger effects of AoA when semantic representations are involved. Further, the model predicted that AoA would relate to differing use of the reading system, with words acquired before versus after literacy onset showing distinctive accessing of meaning and sound representations. An analysis of behaviour from the English Lexicon Project was consistent with the predictions: Words acquired before literacy are more likely to access meaning via sound, showing a suppressed AoA effect, whereas words acquired after literacy rely more on direct print to meaning mappings, showing an exaggerated AoA effect. The reading system reveals vestigial traces of acquisition reflected in differing use of word representations during reading.
  • Chang, Y.-N., & Monaghan, P. (2019). Quantity and diversity of preliteracy language exposure both affect literacy development: Evidence from a computational model of reading. Scientific Studies of Reading, 23(3), 235-253. doi:10.1080/10888438.2018.1529177.

    Abstract

    Diversity of vocabulary knowledge and quantity of language exposure prior to literacy are key predictors of reading development. However, diversity and quantity of exposure are difficult to distinguish in behavioural studies, and so the causal relations with literacy are not well known. We tested these relations by training a connectionist triangle model of reading that learned to map between semantic; phonological; and, later, orthographic forms of words. The model first learned to map between phonology and semantics, where we manipulated the quantity and diversity of this preliterate language experience. Then the model learned to read. Both diversity and quantity of exposure had unique effects on reading performance, with larger effects for written word comprehension than for reading fluency. The results further showed that quantity of preliteracy language exposure was beneficial only when this was to a varied vocabulary and could be an impediment when exposed to a limited vocabulary.
  • Chen, J. (2008). The acquisition of verb compounding in Mandarin Chinese. PhD Thesis, Vrije Universiteit Amsterdam, Amsterdam.

    Abstract

    Seeing someone breaking a stick in two, an English speaker typically describes the event with the verb break, but a Mandarin speaker has to say bai1-duan4 ‘bend-be.broken’, a verb compound composed of two free verbs, with each verb encoding one aspect of the breaking event. Verb compounding represents a typical and productive way to describe events of motion (e.g., zou3-chu1 ‘walk-exit’) and state change (e.g., bai1-duan4 ‘bend-be.broken’), the most common types of events that children of all languages are exposed to from an early age. Since languages vary in how events are linguistically encoded and categorized, the development of verb compounding provides a window to investigate the acquisition of form and meaning mapping for highly productive but constrained constructions and the interaction between children’s linguistic development and cognitive development. The theoretical analysis of verb compounds has been one of the central issues in Chinese linguistics, but the acquisition of this grammatical system has never been systematically studied. This dissertation constitutes the first in-depth study of this topic. It analyzes speech data from two longitudinal corpora as well as data collected from five experiments on the production and comprehension of verb compounds with children in P. R. China. It provides a description of the developmental process and unravels the complex learning tasks from the perspective of language production, comprehension, event categorization, and the interface of semantics and syntax. In showing how first-language learners acquire the Mandarin-specific way of representing and encoding causal events and motion events, this study has significance both for studies of language acquisition and for studies of cognition and event construal.
  • Chen, X. S., White, W. T. J., Collins, L. J., & Penny, D. (2008). Computational identification of four spliceosomal snRNAs from the deep-branch eukaryote Giardia intestinalis. PLoS One, 3(8), e3106. doi:10.1371/journal.pone.0003106.

    Abstract

    RNAs processing other RNAs is very general in eukaryotes, but it is not clear to what extent it is ancestral to eukaryotes. Here we focus on pre-mRNA splicing, one of the most important RNA-processing mechanisms in eukaryotes. In most eukaryotes splicing is predominantly catalysed by the major spliceosome complex, which consists of five uridine-rich small nuclear RNAs (U-snRNAs) and over 200 proteins in humans. Three major spliceosomal introns have been found experimentally in Giardia; one Giardia U-snRNA (U5) and a number of spliceosomal proteins have also been identified. However, because of the low sequence similarity between the Giardia ncRNAs and those of other eukaryotes, the other U-snRNAs of Giardia had not been found. Using two computational methods, candidates for Giardia U1, U2, U4 and U6 snRNAs were identified in this study and shown by RT-PCR to be expressed. We found that identifying a U2 candidate helped identify U6 and U4 based on interactions between them. Secondary structural modelling of the Giardia U-snRNA candidates revealed typical features of eukaryotic U-snRNAs. We demonstrate a successful approach to combine computational and experimental methods to identify expected ncRNAs in a highly divergent protist genome. Our findings reinforce the conclusion that spliceosomal small-nuclear RNAs existed in the last common ancestor of eukaryotes.
  • Chen, A., & Mennen, I. (2008). Encoding interrogativity intonationally in a second language. In P. Barbosa, S. Madureira, & C. Reis (Eds.), Proceedings of the 4th International Conference on Speech Prosody (pp. 513-516). Campinas: Editora RG/CNPq.

    Abstract

    This study investigated how untutored learners encode interrogativity intonationally in a second language. Questions produced in free conversation were selected from longitudinal data of four untutored Italian learners of English. The questions were mostly wh-questions (WQs) and declarative questions (DQs). We examined the use of three cross-linguistically attested question cues: final rise, high peak and late peak. It was found that across learners the final rise occurred more frequently in DQs than in WQs. This is in line with the Functional Hypothesis whereby less syntactically-marked questions are more intonationally marked. However, the use of peak height and alignment is less consistent. The peak of the nuclear pitch accent was not necessarily higher and later in DQs than in WQs. The difference in learners’ exploitation of these cues can be explained by the relative importance of a question cue in the target language.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Choi, S., McDonough, L., Bowerman, M., & Mandler, J. M. (1999). Early sensitivity to language-specific spatial categories in English and Korean. Cognitive Development, 14, 241-268. doi:10.1016/S0885-2014(99)00004-0.

    Abstract

    This study investigates young children’s comprehension of spatial terms in two languages that categorize space strikingly differently. English makes a distinction between actions resulting in containment (put in) versus support or surface attachment (put on), while Korean makes a cross-cutting distinction between tight-fit relations (kkita) versus loose-fit or other contact relations (various verbs). In particular, the Korean verb kkita refers to actions resulting in a tight-fit relation regardless of containment or support. In a preferential looking study we assessed the comprehension of in by 20 English learners and kkita by 10 Korean learners, all between 18 and 23 months. The children viewed pairs of scenes while listening to sentences with and without the target word. The target word led children to gaze at different and language-appropriate aspects of the scenes. We conclude that children are sensitive to language-specific spatial categories by 18–23 months.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared, which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence of the syllable playing a functionally important role in speech production and for the assumption that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advanced planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it could be demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated together to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2001). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
