Publications

  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, C. M., & Hagoort, P. (1993). The processing nature of the N400: Evidence from masked priming. Journal of Cognitive Neuroscience, 5, 34-44. doi:10.1162/jocn.1993.5.1.34.

    Abstract

    The N400 is an endogenous event-related brain potential (ERP) that is sensitive to semantic processes during language comprehension. The general question we address in this paper is which aspects of the comprehension process are manifest in the N400. The focus is on the sensitivity of the N400 to the automatic process of lexical access, or to the controlled process of lexical integration. The former process is the reflex-like and effortless behavior of computing a form representation of the linguistic signal, and of mapping this representation onto corresponding entries in the mental lexicon. The latter process concerns the integration of a spoken or written word into a higher-order meaning representation of the context within which it occurs. ERPs and reaction times (RTs) were acquired to target words preceded by semantically related and unrelated prime words. The semantic relationship between a prime and its target has been shown to modulate the amplitude of the N400 to the target. This modulation can arise from lexical access processes, reflecting the automatic spread of activation between words related in meaning in the mental lexicon. Alternatively, the N400 effect can arise from lexical integration processes, reflecting the relative ease of meaning integration between the prime and the target. To assess the impact of automatic lexical access processes on the N400, we compared the effect of masked and unmasked presentations of a prime on the N400 to a following target. Masking prevents perceptual identification, and as such it is claimed to rule out effects from controlled processes. It therefore enables a stringent test of the possible impact of automatic lexical access processes on the N400. The RT study showed a significant semantic priming effect under both unmasked and masked presentations of the prime. The result for masked priming reflects the effect of automatic spreading of activation during the lexical access process. The ERP study showed a significant N400 effect for the unmasked presentation condition, but no such effect for the masked presentation condition. This indicates that the N400 is not a manifestation of lexical access processes, but reflects aspects of semantic integration processes.
  • Brown, P. (1993). The role of shape in the acquisition of Tzeltal (Mayan) locatives. In E. V. Clark (Ed.), Proceedings of the 25th Annual Child Language Research Forum (pp. 211-220). Stanford, CA: CSLI/University of Chicago Press.

    Abstract

    In a critique of the current state of theories of language acquisition, Bowerman (1985) has argued forcibly for the need to take crosslinguistic variation in semantic structure seriously, in order to understand children's acquisition of semantic categories in the process of learning their language. The semantics of locative expressions in the Mayan language Tzeltal exemplifies this point, for no existing theory of spatial expressions provides an adequate basis for capturing the semantic structure of spatial description in this Mayan language. In this paper I describe some of the characteristics of Tzeltal locative descriptions, as a contribution to the growing body of data on crosslinguistic variation in this domain and as a prod to ideas about acquisition processes, confining myself to the topological notions of 'on' and 'in', and asking whether, and how, these notions are involved in the semantic distinctions underlying Tzeltal locatives.
  • Brown, P. (2012). Time and space in Tzeltal: Is the future uphill? Frontiers in Psychology, 3, 212. doi:10.3389/fpsyg.2012.00212.

    Abstract

    Linguistic expressions of time often draw on spatial language, which raises the question of whether cultural specificity in spatial language and cognition is reflected in thinking about time. In the Mayan language Tzeltal, spatial language relies heavily on an absolute frame of reference utilizing the overall slope of the land, distinguishing an “uphill/downhill” axis oriented from south to north, and an orthogonal “crossways” axis (sunrise-set) on the basis of which objects at all scales are located. Does this absolute system for calculating spatial relations carry over into construals of temporal relations? This question was explored in a study where Tzeltal consultants produced temporal expressions and performed two different non-linguistic temporal ordering tasks. The results show that at least five distinct schemata for conceptualizing time underlie Tzeltal linguistic expressions: (i) deictic ego-centered time, (ii) time as an ordered sequence (e.g., “first”/“later”), (iii) cyclic time (times of the day, seasons), (iv) time as spatial extension or location (e.g., “entering/exiting July”), and (v) a time vector extending uphillwards into the future. The non-linguistic task results showed that the “time moves uphillwards” metaphor, based on the absolute frame of reference prevalent in Tzeltal spatial language and thinking and important as well in the linguistic expressions for time, is not strongly reflected in responses on these tasks. It is argued that systematic and consistent use of spatial language in an absolute frame of reference does not necessarily transfer to consistent absolute time conceptualization in non-linguistic tasks; time appears to be more open to alternative construals.
  • Brown, P. (2012). To ‘put’ or to ‘take’? Verb semantics in Tzeltal placement and removal expressions. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 55-78). Amsterdam: Benjamins.

    Abstract

    This paper examines the verbs and other spatial vocabulary used for describing events of ‘putting’ and ‘taking’ in Tzeltal (Mayan). I discuss the semantics of different ‘put’ and ‘take’ verbs, the constructions they occur in, and the extensional patterns of verbs used in ‘put’ (Goal-oriented) vs. ‘take’ (Source-oriented) descriptions. A relatively limited role for semantically general verbs was found. Instead, Tzeltal is a ‘multiverb language’ with many different verbs usable to predicate ‘put’ and ‘take’ events, with verb choice largely determined by the shape, orientation, and resulting disposition of the Figure and Ground objects. The asymmetry that has been observed in other languages, with Goal-oriented ‘put’ verbs more finely distinguished lexically than Source-oriented ‘take’ verbs, is also apparent in Tzeltal.
  • Brucato, N., Mazières, S., Guitard, E., Giscard, P.-H., Bois, É., Larrouy, G., & Dugoujon, J.-M. (2012). The Hmong diaspora: Preserved South-East Asian genetic ancestry in French Guianese Asians. Comptes Rendus Biologies, 335, 698-707. doi:10.1016/j.crvi.2012.10.003.

    Abstract

    The Hmong Diaspora is one of the widest modern human migrations. Mainly localised in South-East Asia, the United States of America, and metropolitan France, a small community has also settled the Amazonian forest of French Guiana. We have biologically analysed 62 individuals of this unique Guianese population through three complementary genetic markers: mitochondrial DNA (HVS-I/II and coding region SNPs), Y-chromosome (SNPs and STRs), and the Gm allotypic system. All genetic systems showed a high conservation of the Asian gene pool (Asian ancestry: mtDNA = 100.0%; NRY = 99.1%; Gm = 96.6%), without a trace of founder effect. When compared across various Asian populations, the highest correlations were observed with Hmong-Mien groups still living in South-East Asia (Fst < 0.05; P-value < 0.05). Despite a long history punctuated by exodus, the French Guianese Hmong have maintained their original genetic diversity.
  • Bruggeman, L., & Cutler, A. (2023). Listening like a native: Unprofitable procedures need to be discarded. Bilingualism: Language and Cognition, 26(5), 1093-1102. doi:10.1017/S1366728923000305.

    Abstract

    Two languages, historically related, both have lexical stress, with word stress distinctions signalled in each by the same suprasegmental cues. In each language, words can overlap segmentally but differ in placement of primary versus secondary stress (OCtopus, ocTOber). However, secondary stress occurs more often in the words of one language, Dutch, than in the other, English, and largely because of this, Dutch listeners find it helpful to use suprasegmental stress cues when recognising spoken words. English listeners, in contrast, do not; indeed, Dutch listeners can outdo English listeners in correctly identifying the source words of English word fragments (oc-). Here we show that Dutch-native listeners who reside in an English-speaking environment and have become dominant in English, though still maintaining their use of these stress cues in their L1, ignore the same cues in their L2 English, performing as poorly in the fragment identification task as L1 English listeners do.
  • Budwig, N., Narasimhan, B., & Srivastava, S. (2006). Interim solutions: The acquisition of early constructions in Hindi. In E. Clark, & B. Kelly (Eds.), Constructions in acquisition (pp. 163-185). Stanford: CSLI Publications.
  • Bulut, T. (2023). Domain‐general and domain‐specific functional networks of Broca's area underlying language processing. Brain and Behavior, 13(7): e3046. doi:10.1002/brb3.3046.

    Abstract

    Introduction
    Despite abundant research on the role of Broca's area in language processing, there is still no consensus on language specificity of this region and its connectivity network.

    Methods
    The present study employed the meta-analytic connectivity modeling procedure to identify and compare domain-specific (language-specific) and domain-general (shared between language and other domains) functional connectivity patterns of three subdivisions within the broadly defined Broca's area: pars opercularis (IFGop), pars triangularis (IFGtri), and pars orbitalis (IFGorb) of the left inferior frontal gyrus.

    Results
    The findings revealed a left-lateralized frontotemporal network for all regions of interest underlying domain-specific linguistic functions. The domain-general network, however, spanned frontoparietal regions that overlap with the multiple-demand network and subcortical regions spanning the thalamus and the basal ganglia.

    Conclusions
    The findings suggest that language specificity of Broca's area emerges within a left-lateralized frontotemporal network, and that domain-general resources are garnered from frontoparietal and subcortical networks when required by task demands.

  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Burenhult, N. (2012). The linguistic encoding of placement and removal events in Jahai. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 21-36). Amsterdam: Benjamins.

    Abstract

    This paper explores the linguistic encoding of placement and removal events in Jahai (Austroasiatic, Malay Peninsula) on the basis of descriptions from a video elicitation task. It outlines the structural characteristics of the descriptions and isolates semantically a set of situation types that find expression in lexical opposites: (1) putting/taking, (2) inserting/extracting, (3) dressing/undressing, and (4) placing/removing one’s body parts. All involve deliberate and controlled placing/removing of a solid Figure object in relation to a Ground which is not a human recipient. However, they differ as to the identity of and physical relationship between Figure and Ground. The data also provide evidence of variation in how semantic roles are mapped onto syntactic constituents: in most situation types, Agent, Figure and Ground associate with particular constituent NPs, but some placement events are described with semantically specialised verbs encoding the Figure and even the Ground.
  • Buzon, V., Carbo, L. R., Estruch, S. B., Fletterick, R. J., & Estebanez-Perpina, E. (2012). A conserved surface on the ligand binding domain of nuclear receptors for allosteric control. Molecular and Cellular Endocrinology, 348(2), 394-402. doi:10.1016/j.mce.2011.08.012.

    Abstract

    Nuclear receptors (NRs) form a large superfamily of transcription factors that participate in virtually every key biological process. They control development, fertility, gametogenesis and are misregulated in many cancers. Their enormous functional plasticity as transcription factors relates in part to NR-mediated interactions with hundreds of coregulatory proteins upon ligand (e.g., hormone) binding to their ligand binding domains (LBD), or following covalent modification. Some coregulator association relates to the distinct residues that shape a coactivator binding pocket termed AF-2, a surface groove that primarily determines the preference and specificity of protein–protein interactions. However, the highly conserved AF-2 pocket in the NR superfamily appears to be insufficient to account for NR subtype specificity leading to fine transcriptional modulation in certain settings. Additional protein–protein interaction surfaces, most notably on their LBD, may contribute to modulating NR function. NR coregulators and chaperones, normally much larger than the NR itself, may also bind to such interfaces. In the case of the androgen receptor (AR) LBD surface, structural and functional data highlighted the presence of another site named BF-3, which lies at a distinct but topographically adjacent surface to AF-2. AR BF-3 is a hot spot for mutations involved in prostate cancer and androgen insensitivity syndromes, and some FDA-approved drugs bind at this site. Structural studies suggested an allosteric relationship between AF-2 and BF-3, as occupancy of the latter affected coactivator recruitment to AF-2. Physiologically relevant partners of AR BF-3 have not been described as yet. The newly discovered site is highly conserved among the steroid receptor subclass, but is also present in other NRs. Several missense mutations in the BF-3 regions of these human NRs are implicated in pathology and affect their function in vitro. The fact that the AR BF-3 pocket is a druggable site evidences its pharmacological potential. Compounds that may allosterically affect NR function by binding to BF-3 open promising avenues to develop type-specific NR modulators.

  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2023). Introduction - Multilingualism: Language, brain, and cognition. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 1-20). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.001.

    Abstract

    This chapter provides an introduction to the handbook. It succinctly overviews the key questions in the field of L3/Ln acquisition and summarizes the scope of all the chapters included. The chapter ends by raising some outstanding questions that the field needs to address.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating the focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex that we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carota, F., Moseley, R., & Pulvermüller, F. (2012). Body-part-specific representations of semantic noun categories. Journal of Cognitive Neuroscience, 24(6), 1492-1509. doi:10.1162/jocn_a_00219.

    Abstract

    Word meaning processing in the brain involves ventrolateral temporal cortex, but a semantic contribution of the dorsal stream, especially frontocentral sensorimotor areas, has been controversial. We here examine brain activation during passive reading of object-related nouns from different semantic categories, notably animal, food, and tool words, matched for a range of psycholinguistic features. Results show ventral stream activation in temporal cortex along with category-specific activation patterns in both ventral and dorsal streams, including sensorimotor systems and adjacent pFC. Precentral activation reflected action-related semantic features of the word categories. Cortical regions implicated in mouth and face movements were sparked by food words, and hand area activation was seen for tool words, consistent with the actions implicated by the objects the words are used to speak about. Furthermore, tool words specifically activated the right cerebellum, and food words activated the left orbito-frontal and fusiform areas. We discuss our results in the context of category-specific semantic deficits in the processing of words and concepts, along with previous neuroimaging research, and conclude that specific dorsal and ventral areas in frontocentral and temporal cortex index visual and affective–emotional semantic attributes of object-related nouns and action-related affordances of their referent objects.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically, drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon in order to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potential multiple spurious analyses/generations.
  • Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of semantic categories. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2232481.

    Abstract

    Neuronal populations code similar concepts by similar activity patterns across the human brain's semantic networks. However, it is unclear to what extent such meaning-to-symbol mapping reflects distributional statistics, or experiential information grounded in sensorimotor and emotional knowledge. We asked whether integrating distributional and experiential data better distinguished conceptual categories than each method taken separately. We examined the similarity structure of fMRI patterns elicited by visually presented action- and object-related words using representational similarity analysis (RSA). We found that the distributional and experiential/integrative models respectively mapped the high-dimensional semantic space in left inferior frontal, anterior temporal, and in left precentral, posterior inferior/middle temporal cortex. Furthermore, results from model comparisons uncovered category-specific similarity patterns, as both distributional and experiential models matched the similarity patterns for action concepts in left fronto-temporal cortex, whilst the experiential/integrative (but not distributional) models matched the similarity patterns for object concepts in left fusiform and angular gyrus.
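
    The representational similarity analysis (RSA) named in this abstract compares the geometry of brain activity patterns with that of model vectors. The following is a purely illustrative sketch with random stand-in data and invented variable names; it is not the authors' pipeline or their distributional/experiential models, only the generic RSA computation:

    ```python
    # Illustrative RSA sketch with random stand-in data; not the authors' pipeline.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_words, n_voxels, n_features = 20, 100, 50

    # Hypothetical inputs: one fMRI pattern per word, and one model vector per word
    # (e.g., a distributional or experiential semantic vector).
    brain_patterns = rng.normal(size=(n_words, n_voxels))
    model_vectors = rng.normal(size=(n_words, n_features))

    # Representational dissimilarity matrices (condensed form): correlation
    # distance between all pairs of words.
    brain_rdm = pdist(brain_patterns, metric="correlation")
    model_rdm = pdist(model_vectors, metric="correlation")

    # RSA statistic: rank correlation between the two dissimilarity structures.
    rho, p = spearmanr(brain_rdm, model_rdm)
    print(f"model-brain RSA correlation: rho={rho:.3f}, p={p:.3f}")
    ```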
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.

    Abstract

    Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
  • Carroll, M., & Flecken, M. (2012). Language production under time pressure: insights into grammaticalisation of aspect (Dutch, Italian) and language processing in bilinguals (Dutch, German). In B. Ahrenholz (Ed.), Einblicke in die Zweitspracherwerbsforschung und Ihre methodischen Verfahren (pp. 49-76). Berlin: De Gruyter.
  • Carroll, M., Lambert, M., Weimar, K., Flecken, M., & von Stutterheim, C. (2012). Tracing trajectories: Motion event construal by advanced L2 French-English and L2 French-German speakers. Language Interaction and Acquisition, 3(2), 202-230. doi:10.1075/lia.3.2.03car.

    Abstract

    Although the typological contrast between Romance and Germanic languages as verb-framed versus satellite-framed (Talmy 1985) forms the background for many empirical studies on L2 acquisition, the inconclusive picture to date calls for more differentiated, fine-grained analyses. The present study goes beyond explanations based on this typological contrast and takes into account the sources from which spatial concepts are mainly derived in order to shape the trajectory traced by the entity in motion when moving through space: the entity in V-languages versus features of the ground in S-languages. It investigates why advanced French learners of English and German have difficulty acquiring the use of spatial concepts typical of the L2s to shape the trajectory, although relevant concepts can be expressed in their L1. The analysis compares motion event descriptions, based on the same sets of video clips, of L1 speakers of the three languages to L1 French-L2 English and L1 French-L2 German speakers, showing that the learners do not fully acquire the use of L2-specific spatial concepts. We argue that encoded concepts derived from the entity in motion vs. the ground lead to a focus on different aspects of motion events, in accordance with their compatibility with these sources, and are difficult to restructure in L2 acquisition.
  • Casasanto, D., & Henetz, T. (2012). Handedness shapes children’s abstract concepts. Cognitive Science, 36, 359-372. doi:10.1111/j.1551-6709.2011.01199.x.

    Abstract

    Can children’s handedness influence how they represent abstract concepts like kindness and intelligence? Here we show that from an early age, right-handers associate rightward space more strongly with positive ideas and leftward space with negative ideas, but the opposite is true for left-handers. In one experiment, children indicated where on a diagram a preferred toy and a dispreferred toy should go. Right-handers tended to assign the preferred toy to a box on the right and the dispreferred toy to a box on the left. Left-handers showed the opposite pattern. In a second experiment, children judged which of two cartoon animals looked smarter (or dumber) or nicer (or meaner). Right-handers attributed more positive qualities to animals on the right, but left-handers to animals on the left. These contrasting associations between space and valence cannot be explained by exposure to language or cultural conventions, which consistently link right with good. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they can act more fluently with their dominant hands. Results support the body-specificity hypothesis (Casasanto, 2009), showing that children with different kinds of bodies think differently in corresponding ways.
  • Casasanto, D. (2012). Whorfian hypothesis. In J. L. Jackson, Jr. (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press. doi:10.1093/OBO/9780199766567-0058.

    Abstract

    Introduction
    The Sapir-Whorf hypothesis (a.k.a. the Whorfian hypothesis) concerns the relationship between language and thought. Neither the anthropological linguist Edward Sapir (b. 1884–d. 1939) nor his student Benjamin Whorf (b. 1897–d. 1941) ever formally stated any single hypothesis about the influence of language on nonlinguistic cognition and perception. On the basis of their writings, however, two proposals emerged, generating decades of controversy among anthropologists, linguists, philosophers, and psychologists. According to the more radical proposal, linguistic determinism, the languages that people speak rigidly determine the way they perceive and understand the world. On the more moderate proposal, linguistic relativity, habits of using language influence habits of thinking. As a result, people who speak different languages think differently in predictable ways. During the latter half of the 20th century, the Sapir-Whorf hypothesis was widely regarded as false. Around the turn of the 21st century, however, experimental evidence reopened debate about the extent to which language shapes nonlinguistic cognition and perception. Scientific tests of linguistic determinism and linguistic relativity help to clarify what is universal in the human mind and what depends on the particulars of people’s physical and social experience.
    General Overviews and Foundational Texts

    Writing on the relationship between language and thought predates Sapir and Whorf, and extends beyond the academy. The 19th-century German philosopher Wilhelm von Humboldt argued that language constrains people’s worldview, foreshadowing the idea of linguistic determinism later articulated in Sapir 1929 and Whorf 1956 (Humboldt 1988). The intuition that language radically determines thought has been explored in works of fiction such as Orwell’s dystopian fantasy 1984 (Orwell 1949). Although there is little empirical support for radical linguistic determinism, more moderate forms of linguistic relativity continue to generate influential research, reviewed from an anthropologist’s perspective in Lucy 1997, from a psychologist’s perspective in Hunt and Agnoli 1991, and discussed from multidisciplinary perspectives in Gumperz and Levinson 1996 and Gentner and Goldin-Meadow 2003.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Catani, M., Dell'Acqua, F., Bizzi, A., Forkel, S. J., Williams, S. C., Simmons, A., Murphy, D. G., & Thiebaut de Schotten, M. (2012). Beyond cortical localization in clinico-anatomical correlation. Cortex, 48(10), 1262-1287. doi:10.1016/j.cortex.2012.07.001.

    Abstract

    Last year was the 150th anniversary of Paul Broca's landmark case report on speech disorder that paved the way for subsequent studies of cortical localization of higher cognitive functions. However, many complex functions rely on the activity of distributed networks rather than single cortical areas. Hence, it is important to understand how brain regions are linked within large-scale networks and to map lesions onto connecting white matter tracts. To facilitate this network approach we provide a synopsis of classical neurological syndromes associated with frontal, parietal, occipital, temporal and limbic lesions. A review of tractography studies in a variety of neuropsychiatric disorders is also included. The synopsis is accompanied by a new atlas of the human white matter connections based on diffusion tensor tractography, freely downloadable at http://www.natbrainlab.com. Clinicians can use the maps to accurately identify the tract affected by lesions visible on conventional CT or MRI. The atlas will also assist researchers in interpreting their group analysis results. We hope that the synopsis and the atlas, by allowing a precise localization of white matter lesions and associated symptoms, will facilitate future work on the functional correlates of human neural networks as derived from the study of clinical populations. Our goal is to stimulate clinicians to develop a critical approach to clinico-anatomical correlative studies and broaden their view of clinical anatomy beyond the cortical surface in order to encompass the dysfunction related to connecting pathways.

  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.

    Abstract

    Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker’s gaze.

  • Chang, F., Janciauskas, M., & Fitz, H. (2012). Language adaptation and learning: Getting explicit about implicit learning. Language and Linguistics Compass, 6, 259-278. doi:10.1002/lnc3.337.

    Abstract

    Linguistic adaptation is a phenomenon where language representations change in response to linguistic input. Adaptation can occur on multiple linguistic levels such as phonology (tuning of phonotactic constraints), words (repetition priming), and syntax (structural priming). The persistent nature of these adaptations suggests that they may be a form of implicit learning and connectionist models have been developed which instantiate this hypothesis. Research on implicit learning, however, has also produced evidence that explicit chunk knowledge is involved in the performance of these tasks. In this review, we examine how these interacting implicit and explicit processes may change our understanding of language learning and processing.
  • Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.

    Abstract

    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
  • Chen, J. (2006). The acquisition of verb compounding in Mandarin. In E. V. Clark, & B. F. Kelly (Eds.), Constructions in acquisition (pp. 111-136). Stanford: CSLI Publications.
  • Chen, X. S., & Brown, C. M. (2012). Computational identification of new structured cis-regulatory elements in the 3'-untranslated region of human protein coding genes. Nucleic Acids Research, 40, 8862-8873. doi:10.1093/nar/gks684.

    Abstract

    Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. An approach to this is to use computational methods that have the advantage of analysing genomic data, particularly comparative data on a large scale. In this study, a set of structural discovery algorithms was applied followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3′-untranslated regions (UTRs) and successfully distinguished these, as well as groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3′-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
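
    The classification step described here, training an SVM to separate candidate elements from shuffled control sequences, can be pictured with a toy example. The sketch below is purely illustrative: the dinucleotide features, toy sequences and scikit-learn defaults are assumptions for demonstration, not the CisRNA-SVM pipeline itself:

    ```python
    # Toy SVM classifier separating composition-biased "element" sequences from
    # shuffled controls, using dinucleotide frequency features. Illustrative only.
    import random
    from itertools import product

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    ALPHABET = "ACGU"
    KMERS = ["".join(p) for p in product(ALPHABET, repeat=2)]  # 16 dinucleotides

    def kmer_features(seq: str) -> np.ndarray:
        """Approximate (non-overlapping) dinucleotide counts, length-normalised."""
        counts = np.array([seq.count(k) for k in KMERS], dtype=float)
        return counts / max(len(seq) - 1, 1)

    def shuffle_seq(seq: str) -> str:
        """Mononucleotide shuffle used to build negative (control) examples."""
        chars = list(seq)
        random.shuffle(chars)
        return "".join(chars)

    # Toy positives: G/C-biased sequences standing in for known structured elements.
    random.seed(0)
    positives = ["".join(random.choices("GCGU", k=60)) for _ in range(100)]
    negatives = [shuffle_seq("".join(random.choices(ALPHABET, k=60))) for _ in range(100)]

    X = np.array([kmer_features(s) for s in positives + negatives])
    y = np.array([1] * len(positives) + [0] * len(negatives))

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```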
  • Chen, J. (2012). “She from bookshelf take-descend-come the box”: Encoding and categorizing placement events in Mandarin. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 37-54). Amsterdam: Benjamins.

    Abstract

    This paper investigates the lexical semantics of placement verbs in Mandarin. The majority of Mandarin placement verbs are directional verb compounds (e.g., na2-xia4-lai2 ‘take-descend-come’). They are composed of two or three verbs in a fixed order, each encoding certain semantic components of placement events. The first verb usually conveys object manipulation and the second and the third verbs indicate the Path of motion, including Deixis. The first verb, typically encoding object manipulation, can be semantically general or specific: two general verbs, fang4 ‘put’ and na2 ‘take’, have large but constrained extensional categories, and a number of specific verbs are used based on the Manner of manipulation of the Figure object, the relationship between, and the physical properties of, Figure and Ground, intentionality of the Agent, and the type of instrument.
  • Chen, A. (2012). Shaping the intonation of Wh-questions: Information structure and beyond. In J. P. de Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 146-164). New York: Cambridge University Press.
  • Chen, A. (2012). The prosodic investigation of information structure. In M. Krifka, & R. Musan (Eds.), The expression of information structure (pp. 249-286). Berlin: de Gruyter.
  • Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.

    Abstract

    The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former only contrasts the presence and absence of pitch elevation, while the latter makes use of four different pitch contours lexically. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that the richness of the tonal inventory of the native language is essential for triggering a music processing advantage, and that on top of the tone language advantage, the L2 experience yields a further enhancement. Yet, unlike the tone language advantage that seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and such improvement extends to non-native lexical tone discrimination.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster compared to low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model by Levelt et al., are discussed.
  • Chu, M., & Kita, S. (2012). The role of spontaneous gestures in spatial problem solving. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 57-68). Heidelberg: Springer.

    Abstract

    When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on: 1) how spontaneous gesture can reveal the development of problem solving strategies when people solve spatial problems; 2) whether producing gestures can enhance spatial problem solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children’s cognitive development processes. Furthermore, gesture enhances spatial problem solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.

    Abstract

    Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
    Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
    Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
    Discussion. Because emotion representation is more ambiguous in emoji than human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury.
  • Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.

    Abstract

    Purpose

    Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.

    Methods

    60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20-min later, and one-week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (“He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.

    Results

    Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one-week later compared to immediately after hearing the story.

    Conclusion

    We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
  • Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Communication, 47, 489-511. doi:10.1007/s10919-023-00433-w.

    Abstract

    Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury.

  • Cohen, E. (2012). [Review of the book Searching for Africa in Brazil: Power and Tradition in Candomblé by Stefania Capone]. Critique of Anthropology, 32, 217-218. doi:10.1177/0308275X12439961.
  • Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.

    Abstract

    Recent game-theoretic simulation and analytical models have demonstrated that cooperative strategies mediated by indicators of cooperative potential, or “tags,” can invade, spread, and resist invasion by noncooperators across a range of population-structure and cost-benefit scenarios. The plausibility of these models is potentially relevant for human evolutionary accounts insofar as humans possess some phenotypic trait that could serve as a reliable tag. Linguistic markers, such as accent and dialect, have frequently been either cursorily defended or promptly dismissed as satisfying the criteria of a reliable and evolutionarily viable tag. This paper integrates evidence from a range of disciplines to develop and assess the claim that speech accent mediated the evolution of tag-based cooperation in humans. Existing evidence warrants the preliminary conclusion that accent markers meet the demands of an evolutionarily viable tag and potentially afforded a cost-effective solution to the challenges of maintaining viable cooperative relationships in diffuse, regional social networks.
  • Colzato, L. S., Zech, H., Hommel, B., Verdonschot, R. G., Van den Wildenberg, W. P. M., & Hsieh, S. (2012). Loving-kindness brings loving-kindness: The impact of Buddhism on cognitive self-other integration. Psychonomic Bulletin & Review, 19(3), 541-545. doi:10.3758/s13423-012-0241-y.

    Abstract

    Common wisdom has it that Buddhism enhances compassion and self-other integration. We put this assumption to empirical test by comparing practicing Taiwanese Buddhists with well-matched atheists. Buddhists showed more evidence of self-other integration in the social Simon task, which assesses the degree to which people co-represent the actions of a coactor. This suggests that self-other integration and task co-representation vary as a function of religious practice.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.

    Abstract

    Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic condition, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing.

    Additional information

    supplementary material
  • Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
  • Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.

    Abstract

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
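
    As a rough gloss on the algebraic notions this abstract refers to (a sketch based on standard textbook definitions; the paper's own formalization should be consulted for the exact details): a magma is simply a set closed under one binary operation, with no further axioms required,

      \( \langle M, \cdot \rangle \quad \text{with} \quad \cdot : M \times M \to M, \)

    whereas a trace monoid is the free monoid over an alphabet \( \Sigma \), quotiented by an independence relation \( I \) that allows independent symbols to commute,

      \( \mathbb{M}(\Sigma, I) \;=\; \Sigma^{*} / \{\, ab = ba \mid (a, b) \in I \,\}. \)

    This contrast mirrors the distinction the abstract draws between hierarchical sets (language) and hierarchical, partly commutative sequences (action).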
  • Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.

    Abstract

    Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
  • Corps, R. E. (2023). What do we know about the mechanisms of response planning in dialog? In Psychology of Learning and Motivation (pp. 41-81). doi:10.1016/bs.plm.2023.02.002.

    Abstract

    During dialog, interlocutors take turns at speaking with little gap or overlap between their contributions. But language production in monolog is comparatively slow. Theories of dialog tend to agree that interlocutors manage these timing demands by planning a response early, before the current speaker reaches the end of their turn. In the first half of this chapter, I review experimental research supporting these theories. But this research also suggests that planning a response early, while simultaneously comprehending, is difficult. Does response planning need to be this difficult during dialog? In other words, is early planning always necessary? In the second half of this chapter, I discuss research that suggests the answer to this question is no. In particular, corpora of natural conversation demonstrate that speakers do not directly respond to the immediately preceding utterance of their partner—instead, they continue an utterance they produced earlier. This parallel talk likely occurs because speakers are highly incremental and plan only part of their utterance before speaking, leading to pauses, hesitations, and disfluencies. As a result, speakers do not need to engage in extensive advance planning. Thus, laboratory studies do not provide a full picture of language production in dialog, and further research using naturalistic tasks is needed.
  • Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.

    Abstract

    Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.

    Additional information

    raw data and analysis scripts
  • Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.

    Abstract

    Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
  • Corradi, Z., Khan, M., Hitti-Malin, R., Mishra, K., Whelan, L., Cornelis, S. S., ABCA4-Study Group, Hoyng, C. B., Kämpjärvi, K., Klaver, C. C. W., Liskova, P., Stohr, H., Weber, B. H. F., Banfi, S., Farrar, G. J., Sharon, D., Zernant, J., Allikmets, R., Dhaenens, C.-M., & Cremers, F. P. M. (2023). Targeted sequencing and in vitro splice assays shed light on ABCA4-associated retinopathies missing heritability. Human Genetics and Genomics Advances, 4(4): 100237. doi:10.1016/j.xhgg.2023.100237.

    Abstract

    The ABCA4 gene is the most frequently mutated Mendelian retinopathy-associated gene. Biallelic variants lead to a variety of phenotypes; however, for thousands of cases the underlying variants remain unknown. Here, we aim to shed further light on the missing heritability of ABCA4-associated retinopathy by analyzing a large cohort of macular dystrophy probands. A total of 858 probands were collected from 26 centers, of whom 722 carried no or one pathogenic ABCA4 variant while 136 cases carried two ABCA4 alleles, one of which was a frequent mild variant, suggesting that deep-intronic variants (DIVs) or other cis-modifiers might have been missed. After single molecule molecular inversion probes (smMIPs)-based sequencing of the complete 128-kb ABCA4 locus, the effect of putative splice variants was assessed in vitro by midigene splice assays in HEK293T cells. The breakpoints of copy number variants (CNVs) were determined by junction PCR and Sanger sequencing. ABCA4 sequence analysis solved 207/520 (39.8%) naïve or unsolved cases and 70/202 (34.7%) monoallelic cases, while additional causal variants were identified in 54/136 (39.7%) of probands carrying two variants. Seven novel DIVs and six novel non-canonical splice site variants were detected in a total of 35 alleles and characterized, including the c.6283-321C>G variant leading to a complex splicing defect. Additionally, four novel CNVs were identified and characterized in five alleles. These results confirm that smMIPs-based sequencing of the complete ABCA4 gene provides a cost-effective method to genetically solve retinopathy cases and that several rare structural and splice altering defects remain undiscovered in STGD1 cases.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Coventry, K. R., Gudde, H. B., Diessel, H., Collier, J., Guijarro-Fuentes, P., Vulchanova, M., Vulchanov, V., Todisco, E., Reile, M., Breunesse, M., Plado, H., Bohnemeyer, J., Bsili, R., Caldano, M., Dekova, R., Donelson, K., Forker, D., Park, Y., Pathak, L. S., Peeters, D., Pizzuto, G., Serhan, B., Apse, L., Hesse, F., Hoang, L., Hoang, P., Igari, Y., Kapiley, K., Haupt-Khutsishvili, T., Kolding, S., Priiki, K., Mačiukaitytė, I., Mohite, V., Nahkola, T., Tsoi, S. Y., Williams, S., Yasuda, S., Cangelosi, A., Duñabeitia, J. A., Mishra, R. K., Rocca, R., Šķilters, J., Wallentin, M., Žilinskaitė-Šinkūnienė, E., & Incel, O. D. (2023). Spatial communication systems across languages reflect universal action constraints. Nature Human Behaviour, 7, 2099-2110. doi:10.1038/s41562-023-01697-4.

    Abstract

    The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives—the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.
  • Cox, C., Bergmann, C., Fowler, E., Keren-Portnoy, T., Roepstorff, A., Bryant, G., & Fusaroli, R. (2023). A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech. Nature Human Behaviour, 7, 114-133. doi:10.1038/s41562-022-01452-1.

    Abstract

    When speaking to infants, adults often produce speech that differs systematically from that directed to other adults. In order to quantify the acoustic properties of this speech style across a wide variety of languages and cultures, we extracted results from empirical studies on the acoustic features of infant-directed speech (IDS). We analyzed data from 88 unique studies (734 effect sizes) on the following five acoustic parameters that have been systematically examined in the literature: i) fundamental frequency (fo), ii) fo variability, iii) vowel space area, iv) articulation rate, and v) vowel duration. Moderator analyses were conducted in hierarchical Bayesian robust regression models in order to examine how these features change with infant age and differ across languages, experimental tasks and recording environments. The moderator analyses indicated that fo, articulation rate, and vowel duration became more similar to adult-directed speech (ADS) over time, whereas fo variability and vowel space area exhibited stability throughout development. These results point the way for future research to disentangle different accounts of the functions and learnability of IDS by conducting theory-driven comparisons among different languages and using computational models to formulate testable predictions.

    Additional information

    supplementary information
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O., & Windhouwer, M. (2012). ISOcat data categories for signed language resources. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 118-128). Heidelberg: Springer.

    Abstract

    As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain ‘sign language’ that was introduced in 2011.
  • Creemers, A. (2023). Morphological processing in spoken-word recognition. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (pp. 50-64). New York: Routledge.

    Abstract

    Most psycholinguistic studies on morphological processing have examined the role of morphological structure in the visual modality. This chapter discusses morphological processing in the auditory modality, which is an area of research that has only recently received more attention. It first discusses why results in the visual modality cannot straightforwardly be applied to the processing of spoken words, stressing the importance of acknowledging potential modality effects. It then gives a brief overview of the existing research on the role of morphology in the auditory modality, for which an increasing number of studies report that listeners show sensitivity to morphological structure. Finally, the chapter highlights insights gained by looking at morphological processing not only in reading, but also in listening, and it discusses directions for future research.
  • Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A., & Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Frontiers in Psychology, 3, 479. doi:10.3389/fpsyg.2012.00479.

    Abstract

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps left to fill in future work.
  • Cronin, K. A. (2012). Cognitive aspects of prosocial behavior in nonhuman primates. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Part 3 (2nd ed., pp. 581-583). Berlin: Springer.

    Abstract

    Definition: Prosocial behavior is any behavior performed by one individual that results in a benefit for another individual. Prosocial motivations, prosocial preferences, or other-regarding preferences refer to the psychological predisposition to behave in the best interest of another individual. A behavior need not be costly to the actor to be considered prosocial; thus the concept is distinct from altruistic behavior, which requires that the actor incurs some cost when providing a benefit to another.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cronin, K. A. (2012). Prosocial behaviour in animals: The influence of social relationships, communication and rewards. Animal Behaviour, 84, 1085-1093. doi:10.1016/j.anbehav.2012.08.009.

    Abstract

    Researchers have struggled to obtain a clear account of the evolution of prosocial behaviour despite a great deal of recent effort. The aim of this review is to take a brief step back from addressing the question of evolutionary origins of prosocial behaviour in order to identify contextual factors that are contributing to variation in the expression of prosocial behaviour and hindering progress towards identifying phylogenetic patterns. Most available data come from the Primate Order, and the choice of contextual factors to consider was informed by theory and practice, including the nature of the relationship between the potential donor and recipient, the communicative behaviour of the recipients, and features of the prosocial task including whether rewards are visible and whether the prosocial choice creates an inequity between actors. Conclusions are drawn about the facilitating or inhibiting impact of each of these factors on the expression of prosocial behaviour, and areas for future research are highlighted. Acknowledging the impact of these contextual features on the expression of prosocial behaviours should stimulate new research into the proximate mechanisms that drive these effects, yield experimental designs that better control for potential influences on prosocial expression, and ultimately allow progress towards reconstructing the evolutionary origins of prosocial behaviour.
  • Cronin, K. A., & Sanchez, A. (2012). Social dynamics and cooperation: The case of nonhuman primates and its implications for human behavior. Advances in Complex Systems, 15, 1250066. doi:10.1142/S021952591250066X.

    Abstract

    The social factors that influence cooperation have remained largely uninvestigated but have the potential to explain much of the variation in cooperative behavior observed in the natural world. We show here that certain dimensions of the social environment, namely the size of the social group, the degree of social tolerance expressed, the structure of the dominance hierarchy, and the patterns of dispersal, may influence the emergence and stability of cooperation in predictable ways. Furthermore, the social environment experienced by a species over evolutionary time will have shaped their cognition to provide certain strengths and strategies that are beneficial in their species' social world. These cognitive adaptations will in turn impact the likelihood of cooperating in a given social environment. Experiments with one primate species, the cottontop tamarin, illustrate how social dynamics may influence emergence and stability of cooperative behavior in this species. We then take a more general viewpoint and argue that the hypotheses presented here require further experimental work and the addition of quantitative modeling to obtain a better understanding of how social dynamics influence the emergence and stability of cooperative behavior in complex systems. We conclude by pointing out subsequent specific directions for models and experiments that will allow relevant advances in the understanding of the emergence of cooperation.
  • Cutfield, S. (2012). Foreword. Australian Journal of Linguistics, 32(4), 457-458.
  • Cutfield, S. (2012). Principles of Dalabon plant and animal names and classification. In D. Bordulk, N. Dalak, M. Tukumba, L. Bennett, R. Bordro Tingey, M. Katherine, S. Cutfield, M. Pamkal, & G. Wightman (Eds.), Dalabon plants and animals: Aboriginal biocultural knowledge from Southern Arnhem Land, North Australia (pp. 11-12). Palmerston, NT, Australia: Department of Land and Resource Management, Northern Territory.
  • Cutler, A. (2006). Rudolf Meringer. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 12-13). Amsterdam: Elsevier.

    Abstract

    Rudolf Meringer (1859–1931), Indo-European philologist, published two collections of slips of the tongue, annotated and interpreted. From 1909, he was the founding editor of the cultural morphology movement's journal Wörter und Sachen. Meringer was the first to note the linguistic significance of speech errors, and his interpretations have stood the test of time. This work, rather than his mainstream philological research, has proven his most lasting linguistic contribution.
  • Cutler, A. (2006). Van spraak naar woorden in een tweede taal [From speech to words in a second language]. In J. Morais, & G. d'Ydewalle (Eds.), Bilingualism and Second Language Acquisition (pp. 39-54). Brussels: Koninklijke Vlaamse Academie van België voor Wetenschappen en Kunsten.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A. (1993). Language-specific processing: Does the evidence converge? In G. T. Altmann, & R. C. Shillcock (Eds.), Cognitive models of speech processing: The Sperlonga Meeting II (pp. 115-123). Hillsdale, NJ: Erlbaum.
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A. (1993). Phonological cues to open- and closed-class words in the processing of spoken sentences. Journal of Psycholinguistic Research, 22, 109-131.

    Abstract

    Evidence is presented that (a) the open and the closed word classes in English have different phonological characteristics, (b) the phonological dimension on which they differ is one to which listeners are highly sensitive, and (c) spoken open- and closed-class words produce different patterns of results in some auditory recognition tasks. What implications might link these findings? Two recent lines of evidence from disparate paradigms—the learning of an artificial language, and natural and experimentally induced misperception of juncture—are summarized, both of which suggest that listeners are sensitive to the phonological reflections of open- vs. closed-class word status. Although these correlates cannot be strictly necessary for efficient processing, if they are present listeners exploit them in making word class assignments. That such a use of phonological information is of value to listeners could be indirect evidence that open- vs. closed-class words undergo different processing operations.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. R. (1993). Problems with click detection: Insights from cross-linguistic comparisons. Speech Communication, 13, 401-410. doi:10.1016/0167-6393(93)90038-M.

    Abstract

    Cross-linguistic comparisons may shed light on the levels of processing involved in the performance of psycholinguistic tasks. For instance, if the same pattern of results appears whether or not subjects understand the experimental materials, it may be concluded that the results do not reflect higher-level linguistic processing. In the present study, English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in the listeners' responses. It is concluded that click detection tasks are primarily sensitive to low-level (e.g. acoustic) effects, and hence are not well suited to the investigation of linguistic processing.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A. (1987). Speaking for listening. In A. Allport, D. MacKay, W. Prinz, & E. Scheerer (Eds.), Language perception and production: Relationships between listening, speaking, reading and writing (pp. 23-40). London: Academic Press.

    Abstract

    Speech production is constrained at all levels by the demands of speech perception. The speaker's primary aim is successful communication, and to this end semantic, syntactic and lexical choices are directed by the needs of the listener. Even at the articulatory level, some aspects of production appear to be perceptually constrained, for example the blocking of phonological distortions under certain conditions. An apparent exception to this pattern is word boundary information, which ought to be extremely useful to listeners, but which is not reliably coded in speech. It is argued that the solution to this apparent problem lies in rethinking the concept of the boundary of the lexical access unit. Speech rhythm provides clear information about the location of stressed syllables, and listeners do make use of this information. If stressed syllables can serve as the determinants of word lexical access codes, then once again speakers are providing precisely the necessary form of speech information to facilitate perception.
  • Cutler, A. (1993). Segmentation problems, rhythmic solutions. Lingua, 92, 81-104. doi:10.1016/0024-3841(94)90338-7.

    Abstract

    The lexicon contains discrete entries, which must be located in speech input in order for speech to be understood; but the continuity of speech signals means that lexical access from spoken input involves a segmentation problem for listeners. The speech environment of prelinguistic infants may not provide special information to assist the infant listeners in solving this problem. Mature language users in possession of a lexicon might be thought to be able to avoid explicit segmentation of speech by relying on information from successful lexical access; however, evidence from adult perceptual studies indicates that listeners do use explicit segmentation procedures. These procedures differ across languages and seem to exploit language-specific rhythmic structure. Efficient as these procedures are, they may not have been developed in response to statistical properties of the input, because bilinguals, equally competent in two languages, apparently only possess one rhythmic segmentation procedure. The origin of rhythmic segmentation may therefore lie in the infant's exploitation of rhythm to solve the segmentation problem and gain a first toehold on lexical acquisition. Recent evidence from speech production and perception studies with prelinguistic infants supports the claim that infants are sensitive to rhythmic structure and its relationship to lexical segmentation.
  • Cutler, A. (1993). Segmenting speech in different languages. The Psychologist, 6(10), 453-455.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Mehler, J. (1993). The periodicity bias. Journal of Phonetics, 21, 101-108.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190 000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cysouw, M., Dediu, D., & Moran, S. (2012). Comment on “Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa”. Science, 335, 657-b. doi:10.1126/science.1208841.

    Abstract

    We show that Atkinson’s (Reports, 15 April 2011, p. 346) intriguing proposal—that global linguistic diversity supports a single language origin in Africa—is an artifact of using suboptimal data, biased methodology, and unjustified assumptions. We criticize his approach using more suitable data, and we additionally provide new results suggesting a more complex scenario for the emergence of global linguistic diversity.
  • Dagklis, A., Ponzoni, M., Govi, S., Cangi, M. G., Pasini, E., Charlotte, F., Vino, A., Doglioni, C., Davi, F., Lossos, I. S., Ntountas, I., Papadaki, T., Dolcetti, R., Ferreri, A. J. M., Stamatopoulos, K., & Ghia, P. (2012). Immunoglobulin gene repertoire in ocular adnexal lymphomas: hints on the nature of the antigenic stimulation. Leukemia, 26, 814-821. doi:10.1038/leu.2011.276.

    Abstract

    Evidence from certain geographical areas links lymphomas of the ocular adnexa marginal zone B-cell lymphomas (OAMZL) with Chlamydophila psittaci (Cp) infection, suggesting that lymphoma development is dependent upon chronic stimulation by persistent infections. Notwithstanding that, the actual immunopathogenetical mechanisms have not yet been elucidated. As in other B-cell lymphomas, insight into this issue, especially with regard to potential selecting ligands, could be provided by analysis of the immunoglobulin (IG) receptors of the malignant clones. To this end, we studied the molecular features of IGs in 44 patients with OAMZL (40% Cp-positive), identifying features suggestive of a pathogenic mechanism of autoreactivity. Herein, we show that lymphoma cells express a distinctive IG repertoire, with electropositive antigen (Ag)-binding sites, reminiscent of autoantibodies (auto-Abs) recognizing DNA. Additionally, five (11%) cases of OAMZL expressed IGs homologous with autoreactive Abs or IGs of patients with chronic lymphocytic leukemia, a disease known for the expression of autoreactive IGs by neoplastic cells. In contrast, no similarity with known anti-Chlamydophila Abs was found. Taken together, these results strongly indicate that OAMZL may originate from B cells selected for their capability to bind Ags and, in particular, auto-Ags. In OAMZL associated with Cp infection, the pathogen likely acts indirectly on the malignant B cells, promoting the development of an inflammatory milieu, where auto-Ags could be exposed and presented, driving proliferation and expansion of self-reactive B cells.
  • Danziger, E., & Gaskins, S. (1993). Exploring the Intrinsic Frame of Reference. In S. C. Levinson (Ed.), Cognition and space kit 1.0 (pp. 53-64). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513136.

    Abstract

    We can describe the position of one item with respect to another using a number of different ‘frames of reference’. For example, I can use a ‘deictic’ frame that involves the speaker’s viewpoint (The chair is on the far side of the room), or an ‘intrinsic’ frame that involves a feature of one of the items (The chair is at the back of the room). Where more than one frame of reference is available in a language, what motivates the speaker’s choice? This elicitation task is designed to explore when and why people select intrinsic frames of reference, and how these choices interact with non-linguistic problem-solving strategies.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Davidson, D. J., Hanulikova, A., & Indefrey, P. (2012). Electrophysiological correlates of morphosyntactic integration in German phrasal context. Language and Cognitive Processes, 27, 288-311. doi:10.1080/01690965.2011.616448.

    Abstract

    The morphosyntactic paradigm of an inflected word can influence isolated word recognition, but its role in multiple-word phrasal integration is less clear. We examined the electrophysiological response to adjectives in short German prepositional phrases to evaluate whether strong and weak forms of the adjective show a differential response, and whether paradigm variables are related to this response. Twenty native German speakers classified serially presented phrases as grammatically correct or not while the electroencephalogram (EEG) was recorded. A functional mixed effects model of the response to grammatically correct trials revealed a differential response to strong and weak forms of the adjectives. This response difference depended on whether the preceding preposition imposed accusative or dative case. The lexically conditioned information content of the adjectives modulated a later interval of the response. The results indicate that grammatical context modulates the response to morphosyntactic information content, and lends support to the role of paradigm structure in integrative phrasal processing.
  • Dediu, D., & Levinson, S. C. (2012). Abstract profiles of structural stability point to universal tendencies, family-specific factors, and ancient connections between languages. PLoS One, 7(9), e45198. doi:10.1371/journal.pone.0045198.

    Abstract

    Language is the best example of a cultural evolutionary system, able to retain a phylogenetic signal over many thousands of years. The temporal stability (conservatism) of basic vocabulary is relatively well understood, but the stability of the structural properties of language (phonology, morphology, syntax) is still unclear. Here we report an extensive Bayesian phylogenetic investigation of the structural stability of numerous features across many language families and we introduce a novel method for analyzing the relationships between the “stability profiles” of language families. We found that there is a strong universal component across language families, suggesting the existence of universal linguistic, cognitive and genetic constraints. Against this background, however, each language family has a distinct stability profile, and these profiles cluster by geographic area and likely deep genealogical relationships. These stability profiles reveal, for example, the ancient historical relationships between the Siberian and American language families, presumed to be separated by at least 12,000 years. Thus, such higher-level properties of language seen as an evolutionary system might allow the investigation of ancient connections between languages and shed light on the peopling of the world.

    Additional information

    journal.pone.0045198.s001.pdf
  • Dediu, D., & Dingemanse, M. (2012). More than accent: Linguistic and cultural cues in the emergence of tag-based cooperation [Commentary]. Current Anthropology, 53, 606-607. doi:10.1086/667654.

    Abstract

    Commentary on Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part I: The sketch corpus. Language Documentation and Conservation Special Publication, 28, 5-38. Retrieved from https://hdl.handle.net/10125/74719.

    Abstract

    This paper presents the first part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This first part of the guide focuses on constructing a sketch corpus that consists of minimally five hours of annotated and archived data and which documents communicative practices of children between the ages of 2 and 4.
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part II: The acquisition sketch. Language Documentation and Conservation Special Publication, 28, 39-86. Retrieved from https://hdl.handle.net/10125/74720.

    Abstract

    This paper presents the second part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This second part of the guide focuses on developing a child language acquisition sketch. It takes the sketch corpus as its basis (which was introduced in the first part of this guide), and presents a model for analyzing and describing the corpus data.
  • Demir, Ö. E., So, W.-C., Ozyurek, A., & Goldin-Meadow, S. (2012). Turkish- and English-speaking children display sensitivity to perceptual context in referring expressions they produce in speech and gesture. Language and Cognitive Processes, 27, 844-867. doi:10.1080/01690965.2011.589273.

    Abstract

    Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
  • DePape, A., Chen, A., Hall, G., & Trainor, L. (2012). Use of prosody and information structure in high functioning adults with Autism in relation to language ability. Frontiers in Psychology, 3, 72. doi:10.3389/fpsyg.2012.00072.

    Abstract

    Abnormal prosody is a striking feature of the speech of those with Autism Spectrum Disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without Autism Spectrum Disorder and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language function. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
