Publications

  • Cablitz, G. (2002). The acquisition of an absolute system: Learning to talk about space in Marquesan (Oceanic, French Polynesia). In E. V. Clark (Ed.), Space in language: Location, motion, path, and manner (pp. 40-49). Stanford: Center for the Study of Language & Information (electronic proceedings).
  • Cablitz, G. (2002). Marquesan: A grammar of space. PhD Thesis, Christian Albrechts University, Kiel.
  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (Eds.). (2023). The Cambridge handbook of third language acquisition. Cambridge: Cambridge University Press. doi:10.1017/9781108957823.
  • Cabrelli, J., Chaouch-Orozco, A., González Alonso, J., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2023). Introduction - Multilingualism: Language, brain, and cognition. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 1-20). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.001.

    Abstract

    This chapter provides an introduction to the handbook. It succinctly overviews the key questions in the field of L3/Ln acquisition and summarizes the scope of all the chapters included. The chapter ends by raising some outstanding questions that the field needs to address.
  • Campisi, E. (2009). La gestualità co-verbale tra comunicazione e cognizione: In che senso i gesti sono intenzionali. In F. Parisi, & M. Primo (Eds.), Natura, comunicazione, neurofilosofie. Atti del III convegno 2009 del CODISCO. Rome: Squilibri.
  • Caplan, S., Peng, M. Z., Zhang, Y., & Yu, C. (2023). Using an Egocentric Human Simulation Paradigm to quantify referential and semantic ambiguity in early word learning. In M. Goldwater, F. K. Anggoro, B. K. Hayes, & D. C. Ong (Eds.), Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023) (pp. 1043-1049).

    Abstract

    In order to understand early word learning we need to better understand and quantify properties of the input that young children receive. We extended the human simulation paradigm (HSP) using egocentric videos taken from infant head-mounted cameras. The videos were further annotated with gaze information indicating in-the-moment visual attention from the infant. Our new HSP prompted participants for two types of responses, thus differentiating referential from semantic ambiguity in the learning input. Consistent with findings on visual attention in word learning, we find a strongly bimodal distribution over HSP accuracy. Even in this open-ended task, most videos only lead to a small handful of common responses. What's more, referential ambiguity was the key bottleneck to performance: participants can nearly always recover the exact word that was said if they identify the correct referent. Finally, analysis shows that adult learners relied on particular, multimodal behavioral cues to infer those target referents.
  • Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of semantic categories. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2232481.

    Abstract

    Neuronal populations code similar concepts by similar activity patterns across the human brain's semantic networks. However, it is unclear to what extent such meaning-to-symbol mapping reflects distributional statistics, or experiential information grounded in sensorimotor and emotional knowledge. We asked whether integrating distributional and experiential data better distinguished conceptual categories than each method taken separately. We examined the similarity structure of fMRI patterns elicited by visually presented action- and object-related words using representational similarity analysis (RSA). We found that the distributional and experiential/integrative models respectively mapped the high-dimensional semantic space in left inferior frontal, anterior temporal, and in left precentral, posterior inferior/middle temporal cortex. Furthermore, results from model comparisons uncovered category-specific similarity patterns, as both distributional and experiential models matched the similarity patterns for action concepts in left fronto-temporal cortex, whilst the experiential/integrative (but not distributional) models matched the similarity patterns for object concepts in left fusiform and angular gyrus.
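    The representational similarity analysis (RSA) described in this abstract has a compact core computation: build a representational dissimilarity matrix (RDM) from each data source and rank-correlate them. The sketch below illustrates that logic with synthetic placeholders; it is not the authors' pipeline, and all sizes and variable names are assumptions.

```python
# Minimal RSA sketch (illustrative only; not the authors' pipeline).
# Synthetic data stand in for fMRI response patterns and semantic vectors.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 20                                        # hypothetical stimulus count
brain_patterns = rng.normal(size=(n_words, 500))    # voxel pattern per word
model_vectors = rng.normal(size=(n_words, 300))     # semantic vector per word

# Representational dissimilarity matrices (RDMs): 1 - Pearson correlation
# for every stimulus pair, in condensed (upper-triangle) form.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_vectors, metric="correlation")

# RSA statistic: rank-correlate the two RDMs. Higher rho means the model's
# similarity structure better matches the neural similarity structure.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

    In practice, one such correlation is computed per model (distributional, experiential, integrative) and per searchlight or region, which is how category-specific matches like those reported above can be localized.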
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.

    Abstract

    Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
  • Casasanto, D., Willems, R. M., & Hagoort, P. (2009). Body-specific representations of action verbs: Evidence from fMRI in right- and left-handers. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 875-880). Austin: Cognitive Science Society.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating our own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis (Casasanto, 2009), we used fMRI to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated left premotor cortex during lexical decision on manual action verbs (compared with non-manual action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body-specific: Right and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Casasanto, D. (2009). Embodiment of abstract concepts: Good and bad in right- and left-handers. Journal of Experimental Psychology: General, 138, 351-367. doi:10.1037/a0015854.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis, people who interact with their physical environments in systematically different ways should form correspondingly different mental representations. In a test of this hypothesis, 5 experiments investigated links between handedness and the mental representation of abstract concepts with positive or negative valence (e.g., honesty, sadness, intelligence). Mappings from spatial location to emotional valence differed between right- and left-handed participants. Right-handers tended to associate rightward space with positive ideas and leftward space with negative ideas, but left-handers showed the opposite pattern, associating rightward space with negative ideas and leftward with positive ideas. These contrasting mental metaphors for valence cannot be attributed to linguistic experience, because idioms in English associate good with right but not with left. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they could act more fluently with their dominant hands. These results support the body-specificity hypothesis and provide evidence for the perceptuomotor basis of even the most abstract ideas.
  • Casasanto, D., & Jasmin, K. (2009). Emotional valence is body-specific: Evidence from spontaneous gestures during US presidential debates. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1965-1970). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between motor action and emotion? Here we investigated whether people associate good things more strongly with the dominant side of their bodies, and bad things with the non-dominant side. To find out, we analyzed spontaneous gestures during speech expressing ideas with positive or negative emotional valence (e.g., freedom, pain, compassion). Samples of speech and gesture were drawn from the 2004 and 2008 US presidential debates, which involved two left-handers (Obama, McCain) and two right-handers (Kerry, Bush). Results showed a strong association between the valence of spoken clauses and the hands used to make spontaneous co-speech gestures. In right-handed candidates, right-hand gestures were more strongly associated with positive-valence clauses, and left-hand gestures with negative-valence clauses. Left-handed candidates showed the opposite pattern. Right- and left-handers implicitly associated positive valence more strongly with their dominant hand: the hand they can use more fluently. These results support the body-specificity hypothesis (Casasanto, 2009), and suggest a perceptuomotor basis for even our most abstract ideas.
  • Casasanto, D. (2009). [Review of the book Music, language, and the brain by Aniruddh D. Patel]. Language and Cognition, 1(1), 143-146. doi:10.1515/LANGCOG.2009.007.
  • Casasanto, D., Fotakopoulou, O., & Boroditsky, L. (2009). Space and time in the child's mind: Evidence for a cross-dimensional asymmetry. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1090-1095). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children (N=99) watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., Which animal went for a longer time, or a longer distance?). Results showed a reliable cross-dimensional asymmetry: for the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of language used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.
  • Casasanto, D. (2009). Space for thinking. In V. Evans, & P. Chilton (Eds.), Language, cognition and space: State of the art and new directions (pp. 453-478). London: Equinox Publishing.
  • Casasanto, D. (2009). When is a linguistic metaphor a conceptual metaphor? In V. Evans, & S. Pourcel (Eds.), New directions in cognitive linguistics (pp. 127-145). Amsterdam: Benjamins.
  • Cavaco, P., Curuklu, B., & Petersson, K. M. (2009). Artificial grammar recognition using two spiking neural networks. Frontiers in Neuroinformatics. Conference abstracts: 2nd INCF Congress of Neuroinformatics. doi:10.3389/conf.neuro.11.2009.08.096.

    Abstract

    In this paper we explore the feasibility of artificial (formal) grammar recognition (AGR) using spiking neural networks. A biologically inspired minicolumn architecture is designed as the basic computational unit. A network topography is defined based on the minicolumn architecture, here referred to as nodes, connected with excitatory and inhibitory connections. Nodes in the network represent unique internal states of the grammar’s finite state machine (FSM). Future work to improve the performance of the networks is discussed. The modeling framework developed can be used by neurophysiological research to implement network layouts and compare simulated performance characteristics to actual subject performance.
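    As a concrete illustration of the finite state machine (FSM) such networks encode, the sketch below implements a Reber-style artificial grammar as a transition table and checks strings against it. The grammar, symbols, and state names are assumptions chosen for illustration; the paper's spiking minicolumn dynamics are not reproduced here.

```python
# Reber-style artificial grammar as an FSM transition table (illustrative;
# not the paper's grammar or its spiking implementation).
TRANSITIONS = {
    ("S0", "T"): "S1", ("S0", "P"): "S2",
    ("S1", "S"): "S1", ("S1", "X"): "S3",
    ("S2", "T"): "S2", ("S2", "V"): "S4",
    ("S3", "X"): "S2", ("S3", "S"): "END",
    ("S4", "P"): "S3", ("S4", "V"): "END",
}

def is_grammatical(string: str) -> bool:
    """Accept a string iff it traverses the FSM from S0 to END."""
    state = "S0"
    for symbol in string:
        state = TRANSITIONS.get((state, symbol))
        if state is None:          # no legal transition for this symbol
            return False
    return state == "END"

print(is_grammatical("TSXS"))   # True:  S0-T->S1-S->S1-X->S3-S->END
print(is_grammatical("TXXVV"))  # True:  S0-T->S1-X->S3-X->S2-V->S4-V->END
print(is_grammatical("TXV"))    # False: no V transition out of S3
```

    In the architecture described above, each FSM state corresponds to a minicolumn node, and the excitatory/inhibitory connections implement the legal transitions.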
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.

    Abstract

    Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker’s gaze.

    Additional information

    supplementary material
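    The speech-brain coherence measure used in this study can be sketched as magnitude-squared coherence between a speech envelope and an EEG channel, averaged within the stress and syllable frequency bands quoted in the abstract. The signals, sampling rate, and window length below are synthetic stand-ins, not the study's preprocessing or statistics.

```python
# Band-limited speech-brain coherence sketch (synthetic stand-ins; not the
# study's pipeline).
import numpy as np
from scipy.signal import coherence

fs = 250                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)               # one minute of synthetic signal
rng = np.random.default_rng(1)
speech_env = np.abs(np.sin(2 * np.pi * 3 * t)) + 0.1 * rng.standard_normal(t.size)
eeg = 0.3 * speech_env + rng.standard_normal(t.size)  # EEG weakly tracking speech

# Magnitude-squared coherence; 4 s windows give 0.25 Hz frequency resolution.
freqs, coh = coherence(speech_env, eeg, fs=fs, nperseg=fs * 4)

def band_mean(lo: float, hi: float) -> float:
    mask = (freqs >= lo) & (freqs <= hi)
    return float(coh[mask].mean())

print(f"stress-rate coherence (1-1.75 Hz):    {band_mean(1.0, 1.75):.3f}")
print(f"syllable-rate coherence (2.5-3.5 Hz): {band_mean(2.5, 3.5):.3f}")
```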
  • Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.

    Abstract

    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2002). Language-specific uses of the effort code. In B. Bel, & I. Marlien (Eds.), Proceedings of the 1st Conference on Speech Prosody (pp. 215-218). Aix-en-Provence: Université de Provence.

    Abstract

    Two groups of listeners with Dutch and British English language backgrounds judged Dutch and British English utterances, respectively, which varied in the intonation contour on the scales EMPHATIC vs. NOT EMPHATIC and SURPRISED vs. NOT SURPRISED, two meanings derived from the Effort Code. The stimuli, which differed in sentence mode but were otherwise lexically equivalent, were varied in peak height, peak alignment, end pitch, and overall register. In both languages, there are positive correlations between peak height and degree of emphasis, between peak height and degree of surprise, between peak alignment and degree of surprise, and between pitch register and degree of surprise. However, in all these cases, Dutch stimuli lead to larger perceived meaning differences than the British English stimuli. This difference in the extent to which increased pitch height triggers increases in perceived emphasis and surprise is argued to be due to the difference in the standard pitch ranges between Dutch and British English. In addition, we found a positive correlation between pitch register and the degree of emphasis in Dutch, but a negative correlation in British English. This is an unexpected difference, which illustrates a case of ambiguity in the meaning of pitch.
  • Chen, X. S., Collins, L. J., Biggs, P. J., & Penny, D. (2009). High throughput genome-wide survey of small RNAs from the parasitic protists Giardia intestinalis and Trichomonas vaginalis. Genome Biology and Evolution, 1, 165-175. doi:10.1093/gbe/evp017.

    Abstract

    RNA interference (RNAi) is a set of mechanisms which regulate gene expression in eukaryotes. Key elements of RNAi are small sense and antisense RNAs from 19 to 26 nucleotides generated from double-stranded RNAs. miRNAs are a major type of RNAi-associated small RNAs and are found in most eukaryotes studied to date. To investigate whether small RNAs associated with RNAi appear to be present in all eukaryotic lineages, and therefore present in the ancestral eukaryote, we studied two deep-branching protozoan parasites, Giardia intestinalis and Trichomonas vaginalis. Little is known about endogenous small RNAs involved in RNAi of these organisms. Using Illumina Solexa sequencing and genome-wide analysis of small RNAs from these distantly related deep-branching eukaryotes, we identified 10 strong miRNA candidates from Giardia and 11 from Trichomonas. We also found evidence of Giardia siRNAs potentially involved in the expression of variant-specific-surface proteins. In addition, 8 new snoRNAs from Trichomonas are identified. Our results indicate that miRNAs are likely to be general in ancestral eukaryotes, and therefore are likely to be a universal feature of eukaryotes.
  • Chen, A. (2009). Intonation and reference maintenance in Turkish learners of Dutch: A first insight. AILE - Acquisition et Interaction en Langue Etrangère, 28(2), 67-91.

    Abstract

    This paper investigates L2 learners’ use of intonation in reference maintenance in comparison to native speakers at three longitudinal points. Nominal referring expressions were elicited from two untutored Turkish learners of Dutch and five native speakers of Dutch via a film retelling task, and were analysed in terms of pitch span and word duration. Effects of two types of change in information states were examined, between new and given and between new and accessible. We found native-like use of word duration in both types of change early on but different performances between learners and development over time in one learner in the use of pitch span. Further, the use of morphosyntactic devices had different effects on the two learners. The inter-learner differences and late systematic use of pitch span, in spite of similar use of pitch span in learners’ L1 and L2, suggest that learning may play a role in the acquisition of intonation as a device for reference maintenance.
  • Chen, A. (2009). Perception of paralinguistic intonational meaning in a second language. Language Learning, 59(2), 367-409.
  • Chen, A. (2009). The phonetics of sentence-initial topic and focus in adult and child Dutch. In M. Vigário, S. Frota, & M. Freitas (Eds.), Phonetics and Phonology: Interactions and interrelations (pp. 91-106). Amsterdam: Benjamins.
  • Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.

    Abstract

    The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former contrasts only the presence and absence of pitch elevation, while the latter makes lexical use of four different pitch contours. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that the richness of the native language’s tonal inventory is essential for triggering a music processing advantage, and that on top of the tone language advantage, L2 experience yields a further enhancement. Yet, unlike the tone language advantage, which seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and this improvement extends to non-native lexical tone discrimination.
  • Chevrefils, L., Morgenstern, A., Beaupoil-Hourdel, P., Bedoin, D., Caët, S., Danet, C., Danino, C., De Pontonx, S., & Parisse, C. (2023). Coordinating eating and languaging: The choreography of speech, sign, gesture and action in family dinners. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527183.

    Abstract

    In this study, we analyze one French signing and one French speaking family’s interaction during dinner. The families, composed of two parents and two children aged 3 to 11, were filmed with three cameras to capture all family members’ behaviors. The three videos per dinner were synchronized and coded in ELAN. We annotated all participants’ acting and languaging.
    Our quantitative analyses show how family members collaboratively manage multiple streams of activity through the embodied performances of dining and interacting. We uncover different profiles according to participants’ modality of expression and status (focusing on the mother and the younger child). The hearing participants’ co-activity management illustrates their monitoring of dining and conversing and how they progressively master the affordances of the visual and vocal channels to maintain the simultaneity of the two activities. The deaf mother skillfully manages to alternate smoothly between dining and interacting. The deaf younger child manifests how she is in the process of developing her skills to manage multi-activity. Our qualitative analyses focus on the ecology of visual-gestural and audio-vocal languaging in the context of co-activity according to language and participant. We open new perspectives on the management of gaze and body parts in multimodal languaging.
  • Cho, T. (2002). The effects of prosody on articulation in English. New York: Routledge.
  • Cho, T., Jun, S.-A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193-228. doi:10.1006/jpho.2001.0153.

    Abstract

    This study examines acoustic and aerodynamic characteristics of consonants in standard Korean and in Cheju, an endangered Korean language. The focus is on the well-known three-way distinction among voiceless stops (i.e., lenis, fortis, aspirated) and the two-way distinction between the voiceless fricatives /s/ and /s*/. While such a typologically unusual contrast among voiceless stops has long drawn the attention of phoneticians and phonologists, there is no single work in the literature that discusses a body of data representing a relatively large number of speakers. This study reports a variety of acoustic and aerodynamic measures obtained from 12 Korean speakers (four speakers of Seoul Korean and eight speakers of Cheju). Results show that, in addition to findings similar to those reported by others, there are three crucial points worth noting. Firstly, lenis, fortis, and aspirated stops are systematically differentiated from each other by the voice quality of the following vowel. Secondly, these stops are also differentiated by aerodynamic mechanisms. The aspirated and fortis stops are similar in supralaryngeal articulation, but employ a different relation between intraoral pressure and flow. Thirdly, our study suggests that the fricative /s/ is better categorized as “lenis” rather than “aspirated”. The paper concludes with a discussion of the implications of Korean data for theories of the voicing contrast and their phonological representations.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J., & Levelt, W. J. M. (2009). Effects of syllable preparation and syllable frequency in speech production: Further evidence for syllabic units at a post-lexical level. Language and Cognitive Processes, 24, 662-684. doi:10.1080/01690960802348852.

    Abstract

    In the current paper, we asked at what level in the speech planning process speakers retrieve stored syllables. There is evidence that syllable structure plays an essential role in the phonological encoding of words (e.g., online syllabification and phonological word formation). There is also evidence that syllables are retrieved as whole units. However, findings that clearly pinpoint these effects to specific levels in speech planning are scarce. We used a naming variant of the implicit priming paradigm to contrast voice onset latencies for frequency-manipulated disyllabic Dutch pseudo-words. While prior implicit priming studies only manipulated the item's form and/or syllable structure overlap, we introduced syllable frequency as an additional factor. If the preparation effect for syllables obtained in the implicit priming paradigm proceeds beyond phonological planning, i.e., includes the retrieval of stored syllables, then the preparation effect should differ for high- and low-frequency syllables. The findings reported here confirm this prediction: Low-frequency syllables benefit significantly more from the preparation than high-frequency syllables. Our findings support the notion of a mental syllabary at a post-lexical level, between the levels of phonological and phonetic encoding.
  • Chu, M., & Kita, S. (2009). Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 591-595). Austin, TX: Cognitive Science Society.

    Abstract

    When we speak, we spontaneously produce gestures (co-speech gestures). Co-speech gestures and speech production are closely interlinked. However, the exact nature of the link is still under debate. To address the question of whether co-speech gestures originate from the speech production system or from a system independent of speech production, the present study examined the relationship between co-speech and co-thought gestures. Co-thought gestures, produced during silent thinking without speaking, presumably originate from a system independent of the speech production processes. We found a positive correlation between the production frequency of co-thought and co-speech gestures, regardless of the communicative function that co-speech gestures might serve. Therefore, we suggest that co-speech gestures and co-thought gestures originate from a common system that is independent of the speech production processes.
  • Clahsen, H., Prüfert, P., Eisenbeiss, S., & Cholin, J. (2002). Strong stems in the German mental lexicon: Evidence from child language acquisition and adult processing. In I. Kaufmann, & B. Stiebels (Eds.), More than words. Festschrift for Dieter Wunderlich (pp. 91-112). Berlin: Akademie Verlag.
  • Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.

    Abstract

    Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
    Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
    Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
    Discussion. Because emotion representation is more ambiguous in emoji than human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury.
  • Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.

    Abstract

    Purpose

    Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.

    Methods

    60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 min later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (“He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.

    Results

    Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one week later compared to immediately after hearing the story.

    Conclusion

    We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
  • Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Behavior, 47, 489-511. doi:10.1007/s10919-023-00433-w.

    Abstract

    Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury.

    Additional information

    supplementary information
  • Collins, L. J., & Chen, X. S. (2009). Ancestral RNA: The RNA biology of the eukaryotic ancestor. RNA Biology, 6(5), 495-502. doi:10.4161/rna.6.5.9551.

    Abstract

    Our knowledge of RNA biology within eukaryotes has exploded over the last five years. Within new research we see that some features that were once thought to be part of multicellular life have now been identified in several protist lineages. Hence, it is timely to ask which features of eukaryote RNA biology are ancestral to all eukaryotes. We focus on RNA-based regulation and epigenetic mechanisms that use small regulatory ncRNAs and long ncRNAs, to highlight some of the many questions surrounding eukaryotic ncRNA evolution.
  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.

    Abstract

    Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress.
  • Coopmans, C. W. (2023). Triangles in the brain: The role of hierarchical structure in language use. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.

    Abstract

    Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic condition, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing.

    Additional information

    supplementary material
  • Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
  • Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.

    Abstract

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
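    For reference, the two algebraic structures contrasted in this abstract can be stated compactly; these are standard textbook definitions, not the paper's full formalization.

```latex
% A magma is a set $M$ with a total binary operation and no further axioms,
% so Merge-like combination is closed but need not be associative:
\[
\cdot : M \times M \to M, \qquad \forall x, y \in M :\; x \cdot y \in M.
\]
% A trace monoid is the free monoid over an alphabet $\Sigma$, quotiented by
% an independence relation $I$ that lets independent actions commute:
\[
M(\Sigma, I) \;=\; \Sigma^{*} \big/ \{\, ab = ba \;:\; (a, b) \in I \,\}.
\]
```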
  • Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.

    Abstract

    Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
  • Corps, R. E. (2023). What do we know about the mechanisms of response planning in dialog? In Psychology of Learning and Motivation (pp. 41-81). doi:10.1016/bs.plm.2023.02.002.

    Abstract

    During dialog, interlocutors take turns at speaking with little gap or overlap between their contributions. But language production in monolog is comparatively slow. Theories of dialog tend to agree that interlocutors manage these timing demands by planning a response early, before the current speaker reaches the end of their turn. In the first half of this chapter, I review experimental research supporting these theories. But this research also suggests that planning a response early, while simultaneously comprehending, is difficult. Does response planning need to be this difficult during dialog? In other words, is early-planning always necessary? In the second half of this chapter, I discuss research that suggests the answer to this question is no. In particular, corpora of natural conversation demonstrate that speakers do not directly respond to the immediately preceding utterance of their partner—instead, they continue an utterance they produced earlier. This parallel talk likely occurs because speakers are highly incremental and plan only part of their utterance before speaking, leading to pauses, hesitations, and disfluencies. As a result, speakers do not need to engage in extensive advance planning. Thus, laboratory studies do not provide a full picture of language production in dialog, and further research using naturalistic tasks is needed.
  • Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.

    Abstract

    Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.

    Additional information

    raw data and analysis scripts
  • Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.

    Abstract

    Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
  • Corradi, Z., Khan, M., Hitti-Malin, R., Mishra, K., Whelan, L., Cornelis, S. S., ABCA4-Study Group, Hoyng, C. B., Kämpjärvi, K., Klaver, C. C. W., Liskova, P., Stohr, H., Weber, B. H. F., Banfi, S., Farrar, G. J., Sharon, D., Zernant, J., Allikmets, R., Dhaenens, C.-M., & Cremers, F. P. M. (2023). Targeted sequencing and in vitro splice assays shed light on ABCA4-associated retinopathies missing heritability. Human Genetics and Genomics Advances, 4(4): 100237. doi:10.1016/j.xhgg.2023.100237.

    Abstract

    The ABCA4 gene is the most frequently mutated Mendelian retinopathy-associated gene. Biallelic variants lead to a variety of phenotypes; however, for thousands of cases the underlying variants remain unknown. Here, we aim to shed further light on the missing heritability of ABCA4-associated retinopathy by analyzing a large cohort of macular dystrophy probands. A total of 858 probands were collected from 26 centers, of whom 722 carried no or one pathogenic ABCA4 variant while 136 cases carried two ABCA4 alleles, one of which was a frequent mild variant, suggesting that deep-intronic variants (DIVs) or other cis-modifiers might have been missed. After single molecule molecular inversion probes (smMIPs)-based sequencing of the complete 128-kb ABCA4 locus, the effect of putative splice variants was assessed in vitro by midigene splice assays in HEK293T cells. The breakpoints of copy number variants (CNVs) were determined by junction PCR and Sanger sequencing. ABCA4 sequence analysis solved 207/520 (39.8%) naïve or unsolved cases and 70/202 (34.7%) monoallelic cases, while additional causal variants were identified in 54/136 (39.7%) of probands carrying two variants. Seven novel DIVs and six novel non-canonical splice site variants were detected in a total of 35 alleles and characterized, including the c.6283-321C>G variant leading to a complex splicing defect. Additionally, four novel CNVs were identified and characterized in five alleles. These results confirm that smMIPs-based sequencing of the complete ABCA4 gene provides a cost-effective method to genetically solve retinopathy cases and that several rare structural and splice-altering defects remain undiscovered in STGD1 cases.
  • Coventry, K. R., Gudde, H. B., Diessel, H., Collier, J., Guijarro-Fuentes, P., Vulchanova, M., Vulchanov, V., Todisco, E., Reile, M., Breunesse, M., Plado, H., Bohnemeyer, J., Bsili, R., Caldano, M., Dekova, R., Donelson, K., Forker, D., Park, Y., Pathak, L. S., Peeters, D., Pizzuto, G., Serhan, B., Apse, L., Hesse, F., Hoang, L., Hoang, P., Igari, Y., Kapiley, K., Haupt-Khutsishvili, T., Kolding, S., Priiki, K., Mačiukaitytė, I., Mohite, V., Nahkola, T., Tsoi, S. Y., Williams, S., Yasuda, S., Cangelosi, A., Duñabeitia, J. A., Mishra, R. K., Rocca, R., Šķilters, J., Wallentin, M., Žilinskaitė-Šinkūnienė, E., & Incel, O. D. (2023). Spatial communication systems across languages reflect universal action constraints. Nature Human Behaviour, 7, 2099-2110. doi:10.1038/s41562-023-01697-4.

    Abstract

    The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives—the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.
  • Cox, C., Bergmann, C., Fowler, E., Keren-Portnoy, T., Roepstorff, A., Bryant, G., & Fusaroli, R. (2023). A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech. Nature Human Behaviour, 7, 114-133. doi:10.1038/s41562-022-01452-1.

    Abstract

    When speaking to infants, adults often produce speech that differs systematically from that directed to other adults. In order to quantify the acoustic properties of this speech style across a wide variety of languages and cultures, we extracted results from empirical studies on the acoustic features of infant-directed speech (IDS). We analyzed data from 88 unique studies (734 effect sizes) on the following five acoustic parameters that have been systematically examined in the literature: i) fundamental frequency (fo), ii) fo variability, iii) vowel space area, iv) articulation rate, and v) vowel duration. Moderator analyses were conducted in hierarchical Bayesian robust regression models in order to examine how these features change with infant age and differ across languages, experimental tasks and recording environments. The moderator analyses indicated that fo, articulation rate, and vowel duration became more similar to adult-directed speech (ADS) over time, whereas fo variability and vowel space area exhibited stability throughout development. These results point the way for future research to disentangle different accounts of the functions and learnability of IDS by conducting theory-driven comparisons among different languages and using computational models to formulate testable predictions.

    Additional information

    supplementary information
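    To make the meta-analytic pooling concrete, the sketch below computes a simple DerSimonian-Laird random-effects estimate over hypothetical per-study effect sizes. This is a deliberately simpler stand-in for the hierarchical Bayesian robust regressions the paper actually fits, and all numbers are invented for illustration.

```python
# DerSimonian-Laird random-effects pooling sketch: a simple stand-in for
# the paper's hierarchical Bayesian robust regressions (which it is not).
import numpy as np

# Hypothetical per-study effect sizes (e.g., IDS vs. ADS pitch, as Hedges' g)
# and their sampling variances.
effects = np.array([0.8, 1.1, 0.5, 0.9, 0.3])
variances = np.array([0.04, 0.10, 0.06, 0.05, 0.09])

# Fixed-effect weights, pooled mean, and Cochran's Q for heterogeneity.
w = 1 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed_mean) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance

# Random-effects weights add tau^2 to each study's sampling variance.
w_re = 1 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI), tau^2 = {tau2:.3f}")
```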
  • Creemers, A. (2023). Morphological processing in spoken-word recognition. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (pp. 50-64). New York: Routledge.

    Abstract

    Most psycholinguistic studies on morphological processing have examined the role of morphological structure in the visual modality. This chapter discusses morphological processing in the auditory modality, which is an area of research that has only recently received more attention. It first discusses why results in the visual modality cannot straightforwardly be applied to the processing of spoken words, stressing the importance of acknowledging potential modality effects. It then gives a brief overview of the existing research on the role of morphology in the auditory modality, for which an increasing number of studies report that listeners show sensitivity to morphological structure. Finally, the chapter highlights insights gained by looking at morphological processing not only in reading, but also in listening, and it discusses directions for future research.
  • Cronin, K. A., Schroeder, K. K. E., Rothwell, E. S., Silk, J. B., & Snowdon, C. T. (2009). Cooperatively breeding cottontop tamarins (Saguinus oedipus) do not donate rewards to their long-term mates. Journal of Comparative Psychology, 123(3), 231-241. doi:10.1037/a0015094.

    Abstract

    This study tested the hypothesis that cooperative breeding facilitates the emergence of prosocial behavior by presenting cottontop tamarins (Saguinus oedipus) with the option to provide food rewards to pair-bonded mates. In Experiment 1, tamarins could provide rewards to mates at no additional cost while obtaining rewards for themselves. Contrary to the hypothesis, tamarins did not demonstrate a preference to donate rewards, behaving similar to chimpanzees in previous studies. In Experiment 2, the authors eliminated rewards for the donor for a stricter test of prosocial behavior, while reducing separation distress and food preoccupation. Again, the authors found no evidence for a donation preference. Furthermore, tamarins were significantly less likely to deliver rewards to mates when the mate displayed interest in the reward. The results of this study contrast with those recently reported for cooperatively breeding common marmosets, and indicate that prosocial preferences in a food donation task do not emerge in all cooperative breeders. In previous studies, cottontop tamarins have cooperated and reciprocated to obtain food rewards; the current findings sharpen understanding of the boundaries of cottontop tamarins’ food-provisioning behavior.
  • Cutler, A. (2002). Phonological processing: Comments on Pierrehumbert, Moates et al., Kubozono, Peperkamp & Dupoux, and Bradlow. In C. Gussenhoven, & N. Warner (Eds.), Papers in Laboratory Phonology VII (pp. 275-296). Berlin: Mouton de Gruyter.
  • Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.

    Abstract

    Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as structural units in word recognition. First, we found that traditional puns more often substituted part of a mora than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
  • Cutler, A., & Norris, D. (2002). The role of strong syllables in segmentation for lexical access. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 157-177). London: Routledge.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (2002). The syllable's differing role in the segmentation of French and English. In G. T. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 115-135). London: Routledge.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., McQueen, J. M., Jansonius, M., & Bayerl, S. (2002). The lexical statistics of competitor activation in spoken-word recognition. In C. Bow (Ed.), Proceedings of the 9th Australian International Conference on Speech Science and Technology (pp. 40-45). Canberra: Australian Speech Science and Technology Association (ASSTA).

    Abstract

    The Possible Word Constraint is a proposed mechanism whereby listeners avoid recognising words spuriously embedded in other words. It applies to words leaving a vowelless residue between their edge and the nearest known word or syllable boundary. The present study tests the usefulness of this constraint via lexical statistics of both English and Dutch. The analyses demonstrate that the constraint removes a clear majority of embedded words in speech, and thus can contribute significantly to the efficiency of human speech recognition.
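
    The lexical-statistics analysis summarized above can be made concrete in a few lines: enumerate every lexicon word embedded inside every other word, and check whether either residue (the material between the embedded word's edge and the carrier's edge) is vowelless. The toy below uses orthography as a crude stand-in for the phonemic transcriptions such analyses actually require, and the miniature lexicon is invented.

        VOWELS = set("aeiou")  # orthographic proxy; real analyses use phoneme strings

        def vowelless(segment: str) -> bool:
            return bool(segment) and not any(ch in VOWELS for ch in segment)

        def pwc_statistics(lexicon):
            """Count spurious embeddings and how many the Possible Word
            Constraint would remove (a vowelless residue on either side)."""
            lexset = set(lexicon)
            total = removed = 0
            for carrier in lexicon:
                for i in range(len(carrier)):
                    for j in range(i + 1, len(carrier) + 1):
                        sub = carrier[i:j]
                        if sub != carrier and sub in lexset:
                            total += 1
                            left, right = carrier[:i], carrier[j:]
                            if vowelless(left) or vowelless(right):
                                removed += 1
            return total, removed

        # "can" inside "scan" leaves the vowelless residue "s" and is ruled
        # out; "can" inside "cane" survives because the residue "e" is a vowel.
        print(pwc_statistics(["scan", "can", "an", "canny", "cane"]))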
  • Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.

    Abstract

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A. (1971). [Review of the book Probleme der Aufgabenanalyse bei der Erstellung von Sprachprogrammen by K. Bung]. Babel, 7, 29-31.
  • Cutler, A. (2002). Lexical access. In L. Nadel (Ed.), Encyclopedia of cognitive science (pp. 858-864). London: Nature Publishing Group.
  • Cutler, A., McQueen, J. M., Norris, D., & Somejuan, A. (2002). Le rôle de la syllable. In E. Dupoux (Ed.), Les langages du cerveau: Textes en l’honneur de Jacques Mehler (pp. 185-197). Paris: Odile Jacob.
  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.

    Abstract

    English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.

    Abstract

    Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models of phonemic processing incorporating obligatory influence of lexical information in phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1982). Prosody and sentence perception in English. In J. Mehler, E. C. Walker, & M. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 201-216). Hillsdale, N.J: Erlbaum.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (2009). Psycholinguistics in our time. In P. Rabbitt (Ed.), Inside psychology: A science over 50 years (pp. 91-101). Oxford: Oxford University Press.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
  • Cutler, A. (Ed.). (1982). Slips of the tongue and language production. The Hague: Mouton.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A. (1982). Speech errors: A classified bibliography. Bloomington: Indiana University Linguistics Club.
  • Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.

    Abstract

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191–243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Dabrowska, E., Rowland, C. F., & Theakston, A. (2009). The acquisition of questions with long-distance dependencies. Cognitive Linguistics, 20(3), 571-597. doi:10.1515/COGL.2009.025.

    Abstract

    A number of researchers have claimed that questions and other constructions with long distance dependencies (LDDs) are acquired relatively early, by age 4 or even earlier, in spite of their complexity. Analysis of LDD questions in the input available to children suggests that they are extremely stereotypical, raising the possibility that children learn lexically specific templates such as WH do you think S-GAP? rather than general rules of the kind postulated in traditional linguistic accounts of this construction. We describe three elicited imitation experiments with children aged from 4;6 to 6;9 and adult controls. Participants were asked to repeat prototypical questions (i.e., questions which match the hypothesised template), unprototypical questions (which depart from it in several respects) and declarative counterparts of both types of interrogative sentences. The children performed significantly better on the prototypical variants of both constructions, even when both variants contained exactly the same lexical material, while adults showed prototypicality effects for LDD questions only. These results suggest that a general declarative complementation construction emerges quite late in development (after age 6), and that even adults rely on lexically specific templates for LDD questions.
  • Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292-314. doi:10.1016/S0749-596X(02)00001-3.

    Abstract

    The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.
  • Davids, N. (2009). Neurocognitive markers of phonological processing: A clinical perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Davids, N., Van den Brink, D., Van Turennout, M., Mitterer, H., & Verhoeven, L. (2009). Towards neurophysiological assessment of phonemic discrimination: Context effects of the mismatch negativity. Clinical Neurophysiology, 120, 1078-1086. doi:10.1016/j.clinph.2009.01.018.

    Abstract

    This study focusses on the optimal paradigm for simultaneous assessment of auditory and phonemic discrimination in clinical populations. We investigated (a) whether pitch and phonemic deviants presented together in one sequence are able to elicit mismatch negativities (MMNs) in healthy adults and (b) whether MMN elicited by a change in pitch is modulated by the presence of the phonemic deviants.
  • Davidson, D. J., & Indefrey, P. (2009). An event-related potential study on changes of violation and error responses during morphosyntactic learning. Journal of Cognitive Neuroscience, 21(3), 433-446. Retrieved from http://www.mitpressjournals.org/doi/pdf/10.1162/jocn.2008.21031.

    Abstract

    Based on recent findings showing electrophysiological changes in adult language learners after relatively short periods of training, we hypothesized that adult Dutch learners of German would show responses to German gender and adjective declension violations after brief instruction. Adjective declension in German differs from previously studied morphosyntactic regularities in that the required suffixes depend not only on the syntactic case, gender, and number features to be expressed, but also on whether or not these features are already expressed on linearly preceding elements in the noun phrase. Violation phrases and matched controls were presented over three test phases (pretest and training on the first day, and a posttest one week later). During the pretest, no electrophysiological differences were observed between violation and control conditions, and participants’ classification performance was near chance. During the training and posttest phases, classification improved, and there was a P600-like violation response to declension but not gender violations. An error-related response during training was associated with improvement in grammatical discrimination from pretest to posttest. The results show that rapid changes in neuronal responses can be observed in adult learners of a complex morphosyntactic rule, and also that error-related electrophysiological responses may relate to grammar acquisition.
  • Davidson, D. J., & Indefrey, P. (2009). Plasticity of grammatical recursion in German learners of Dutch. Language and Cognitive Processes, 24, 1335-1369. doi:10.1080/01690960902981883.

    Abstract

    Previous studies have examined cross-serial and embedded complement clauses in West Germanic in order to distinguish between different types of working memory models of human sentence processing, as well as different formal language models. Here, adult plasticity in the use of these constructions is investigated by examining the response of German-speaking learners of Dutch using magnetoencephalography (MEG). In three experimental sessions spanning their initial acquisition of Dutch, participants performed a sentence-scene matching task with Dutch sentences including two different verb constituent orders (Dutch verb order, German verb order), and in addition rated similar constructions in a separate rating task. The average planar gradient of the evoked field to the initial verb within the cluster revealed a larger evoked response for the German order relative to the Dutch order between 0.2 to 0.4 s over frontal sensors after 2 weeks, but not initially. The rating data showed that constructions consistent with Dutch grammar, but inconsistent with the German grammar were initially rated as unacceptable, but this preference reversed after 3 months. The behavioural and electrophysiological results suggest that cortical responses to verb order preferences in complement clauses can change within 3 months after the onset of adult language learning, implying that this aspect of grammatical processing remains plastic into adulthood.
  • Davies, R., Kidd, E., & Lander, K. (2009). Investigating the psycholinguistic correlates of speechreading in preschool age children. International Journal of Language & Communication Disorders, 44(2), 164-174. doi:10.1080/13682820801997189.

    Abstract

    Background: Previous research has found that newborn infants can match phonetic information in the lips and voice from as young as ten weeks old. There is evidence that access to visual speech is necessary for normal speech development. Although we have an understanding of this early sensitivity, very little research has investigated older children's ability to speechread whole words. Aims: The aim of this study was to identify aspects of preschool children's linguistic knowledge and processing ability that may contribute to speechreading ability. We predicted a significant correlation between receptive vocabulary and speechreading, as well as phonological working memory to be a predictor of speechreading performance. Methods & Procedures: Seventy-six children (n = 76) aged between 2;10 and 4;11 years participated. Children were given three pictures and were asked to point to the picture that they thought that the experimenter had silently mouthed (ten trials). Receptive vocabulary and phonological working memory were also assessed. The results were analysed using Pearson correlations and multiple regressions. Outcomes & Results: The results demonstrated that the children could speechread at a rate greater than chance. Pearson correlations revealed significant, positive correlations between receptive vocabulary and speechreading score, phonological error rate and age. Further correlations revealed significant, positive relationships between The Children's Test of Non-Word Repetition (CNRep) and speechreading score, phonological error rate and age. Multiple regression analyses showed that receptive vocabulary best predicts speechreading ability over and above phonological working memory. Conclusions & Implications: The results suggest that preschool children are capable of speechreading, and that this ability is related to vocabulary size. This suggests that children aged between 2;10 and 4;11 are sensitive to visual information in the form of audio-visual mappings. We suggest that current and future therapies are correct to include visual feedback as a therapeutic tool; however, future research needs to be conducted in order to elucidate further the role of speechreading in development.
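
    The claim that receptive vocabulary predicts speechreading "over and above" phonological working memory corresponds to a hierarchical (incremental) regression: compare the variance explained by working memory alone against a model that also includes vocabulary. A minimal sketch with invented stand-in data, not the study's measurements:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        # Hypothetical data: one row per child.
        rng = np.random.default_rng(1)
        n = 76
        wm = rng.normal(size=n)                 # phonological working memory (e.g., CNRep)
        vocab = 0.5 * wm + rng.normal(size=n)   # receptive vocabulary
        speechread = 0.6 * vocab + 0.1 * wm + rng.normal(size=n)

        # Step 1: working memory alone.
        base = LinearRegression().fit(wm.reshape(-1, 1), speechread)
        r2_base = r2_score(speechread, base.predict(wm.reshape(-1, 1)))

        # Step 2: add vocabulary; the gain in R^2 is its unique contribution.
        X = np.column_stack([wm, vocab])
        full = LinearRegression().fit(X, speechread)
        r2_full = r2_score(speechread, full.predict(X))
        print(f"R^2 gain from adding vocabulary: {r2_full - r2_base:.3f}")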
  • Dediu, D. (2009). Genetic biasing through cultural transmission: Do simple Bayesian models of language evolution generalize? Journal of Theoretical Biology, 259, 552-561. doi:10.1016/j.jtbi.2009.04.004.

    Abstract

    The recent Bayesian approaches to language evolution and change seem to suggest that genetic biases can impact on the characteristics of language, but, at the same time, that its cultural transmission can partially free it from these same genetic constraints. One of the current debates centres on the striking differences between sampling and a posteriori maximising Bayesian learners, with the first converging on the prior bias while the latter allows a certain freedom to language evolution. The present paper shows that this difference disappears if populations more complex than a single teacher and a single learner are considered, with the resulting behaviours more similar to the sampler. This suggests that generalisations based on the language produced by Bayesian agents in such homogeneous single agent chains are not warranted. It is not clear which of the assumptions in such models are responsible, but these findings seem to support the rising concerns on the validity of the “acquisitionist” assumption, whereby the locus of language change and evolution is taken to be the first language acquirers (children) as opposed to the competent language users (the adults).
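
    The sampler-versus-maximiser contrast at the heart of this debate is easy to reproduce in exactly the homogeneous single-teacher, single-learner chain that the paper argues against generalizing from. The toy below, with two competing grammars, an invented production-noise rate, and an invented prior, illustrates the standard result: samplers drift toward the prior, while maximisers can lock one grammar in.

        import random

        def run_chain(generations, n_utts=10, prior1=0.6, eps=0.05, maximize=False):
            """Single teacher-learner chain; returns how often grammar 1 is held."""
            h = 1          # initial grammar
            held = 0
            for _ in range(generations):
                # Teacher produces utterances; each reflects h with probability 1 - eps.
                data = [h if random.random() > eps else 1 - h for _ in range(n_utts)]
                # Learner's posterior probability of grammar 1 (i.i.d. utterances).
                like1 = like0 = 1.0
                for d in data:
                    like1 *= (1 - eps) if d == 1 else eps
                    like0 *= (1 - eps) if d == 0 else eps
                p1 = prior1 * like1 / (prior1 * like1 + (1 - prior1) * like0)
                # Sampler draws from the posterior; maximiser picks its mode.
                h = int(p1 >= 0.5) if maximize else int(random.random() < p1)
                held += h
            return held / generations

        random.seed(0)
        print("sampler:  ", run_chain(20000))                  # near the prior, 0.6
        print("maximiser:", run_chain(20000, maximize=True))   # stays at grammar 1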
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part I: The sketch corpus. Language Documentation and Conservation Special Publication, 28, 5-38. Retrieved from https://hdl.handle.net/10125/74719.

    Abstract

    This paper presents the first part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This first part of the guide focuses on constructing a sketch corpus that consists of minimally five hours of annotated and archived data and which documents communicative practices of children between the ages of 2 and 4.
  • Defina, R., Allen, S. E. M., Davidson, L., Hellwig, B., Kelly, B. F., & Kidd, E. (2023). Sketch Acquisition Manual (SAM), Part II: The acquisition sketch. Language Documentation and Conservation Special Publication, 28, 39-86. Retrieved from https://hdl.handle.net/10125/74720.

    Abstract

    This paper presents the second part of a guide for documenting and describing child language, child-directed language and socialization patterns in diverse languages and cultures. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students. We assume some basic familiarity with language documentation principles and methods, and, based on this, provide step-by-step suggestions for collecting, analyzing and presenting child data. This second part of the guide focuses on developing a child language acquisition sketch. It takes the sketch corpus as its basis (which was introduced in the first part of this guide), and presents a model for analyzing and describing the corpus data.
  • Den Os, E., & Boves, L. (2002). BabelWeb project develops multilingual guidelines. Multilingual Computing and Technologies, 13(1), 33-36.

    Abstract

    A European cooperative effort seeks best-practice architecture and procedures for international sites.
  • Dideriksen, C., Christiansen, M. H., Tylén, K., Dingemanse, M., & Fusaroli, R. (2023). Quantifying the interplay of conversational devices in building mutual understanding. Journal of Experimental Psychology: General, 152(3), 864-889. doi:10.1037/xge0001301.

    Abstract

    Humans readily engage in idle chat and heated discussions and negotiate tough joint decisions without ever having to think twice about how to keep the conversation grounded in mutual understanding. However, current attempts at identifying and assessing the conversational devices that make this possible are fragmented across disciplines and investigate single devices within single contexts. We present a comprehensive conceptual framework to investigate conversational devices, their relations, and how they adjust to contextual demands. In two corpus studies, we systematically test the role of three conversational devices: backchannels, repair, and linguistic entrainment. Contrasting affiliative and task-oriented conversations within participants, we find that conversational devices adaptively adjust to the increased need for precision in the latter: We show that low-precision devices such as backchannels are more frequent in affiliative conversations, whereas more costly but higher-precision mechanisms, such as specific repairs, are more frequent in task-oriented conversations. Further, task-oriented conversations involve higher complementarity of contributions in terms of the content and perspective: lower semantic entrainment and less frequent (but richer) lexical and syntactic entrainment. Finally, we show that the observed variations in the use of conversational devices are potentially adaptive: pairs of interlocutors that show stronger linguistic complementarity perform better across the two tasks. By combining motivated comparisons of several conversational contexts and theoretically informed computational analyses of empirical data the present work lays the foundations for a comprehensive conceptual framework for understanding the use of conversational devices in dialogue.
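
    Of the three devices examined, linguistic entrainment is the most readily operationalized. One simple measure of lexical entrainment (a hypothetical illustration, not necessarily the measure used in the paper) is the cosine similarity between the two interlocutors' word-frequency vectors:

        import math
        from collections import Counter

        def lexical_entrainment(turns_a, turns_b):
            """Cosine similarity of two speakers' word-frequency vectors:
            1.0 = identical relative word use, 0.0 = no shared words."""
            ca = Counter(w for turn in turns_a for w in turn.lower().split())
            cb = Counter(w for turn in turns_b for w in turn.lower().split())
            dot = sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())
            norm = (math.sqrt(sum(v * v for v in ca.values()))
                    * math.sqrt(sum(v * v for v in cb.values())))
            return dot / norm if norm else 0.0

        print(lexical_entrainment(["shall we move the red block"],
                                  ["yes move the red block up"]))  # ~0.67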
  • Dideriksen, C., Christiansen, M. H., Dingemanse, M., Højmark‐Bertelsen, M., Johansson, C., Tylén, K., & Fusaroli, R. (2023). Language‐specific constraints on conversation: Evidence from Danish and Norwegian. Cognitive Science, 47(11): e13387. doi:10.1111/cogs.13387.

    Abstract

    Establishing and maintaining mutual understanding in everyday conversations is crucial. To do so, people employ a variety of conversational devices, such as backchannels, repair, and linguistic entrainment. Here, we explore whether the use of conversational devices might be influenced by cross-linguistic differences in the speakers’ native language, comparing two matched languages—Danish and Norwegian—differing primarily in their sound structure, with Danish being more opaque, that is, less acoustically distinguished. Across systematically manipulated conversational contexts, we find that processes supporting mutual understanding in conversations vary with external constraints: across different contexts and, crucially, across languages. In accord with our predictions, linguistic entrainment was overall higher in Danish than in Norwegian, while backchannels and repairs presented a more nuanced pattern. These findings are compatible with the hypothesis that native speakers of Danish may compensate for its opaque sound structure by adopting a top-down strategy of building more conversational redundancy through entrainment, which also might reduce the need for repairs. These results suggest that linguistic differences might be met by systematic changes in language processing and use. This paves the way for further cross-linguistic investigations and critical assessment of the interplay between cultural and linguistic factors on the one hand and conversational dynamics on the other.
  • Dikshit, A. P., Mishra, C., Das, D., & Parashar, S. (2023). Frequency and temperature-dependence ZnO based fractional order capacitor using machine learning. Materials Chemistry and Physics, 307: 128097. doi:10.1016/j.matchemphys.2023.128097.

    Abstract

    This paper investigates the fractional order behavior of ZnO ceramics at different frequencies. ZnO ceramic was prepared by the high energy ball milling (HEBM) technique and sintered at 1300 ℃ to study the frequency response properties. The frequency response properties (impedance and phase angles) were examined with an impedance analyzer (100 Hz - 1 MHz). Constant phase angles (84°-88°) were obtained at low temperature ranges (25 ℃ - 125 ℃). The structural and morphological composition of the ZnO ceramic was investigated using X-ray diffraction techniques and FESEM. The Raman spectrum was studied to understand the different modes of ZnO ceramics. Machine learning (polynomial regression) models were trained on a dataset of 1280 experimental values to accurately predict the relationship between frequency and temperature with respect to impedance and phase values of the ZnO ceramic FOC. The predicted impedance values were found to be in good agreement (R² ~ 0.98, MSE ~ 0.0711) with the experimental results. Impedance values were also predicted beyond the experimental frequency range (at 50 Hz and 2 MHz) for different temperatures (25 ℃ - 500 ℃) and for low temperatures (10 ℃, 15 ℃ and 20 ℃) within the frequency range (100 Hz - 1 MHz).
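
    The machine-learning component here is ordinary polynomial regression on two inputs, frequency and temperature. A minimal sketch of such a pipeline, with invented stand-in data rather than the paper's 1280 measurements:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_squared_error, r2_score

        # Stand-in data: impedance falls with frequency and shifts with temperature.
        rng = np.random.default_rng(0)
        log_f = rng.uniform(2, 6, 1280)       # log10 of 100 Hz .. 1 MHz
        temp = rng.uniform(25, 500, 1280)     # degrees C
        z = 6 - 0.9 * log_f - 0.002 * temp + rng.normal(0, 0.05, 1280)  # log10 |Z|

        X = np.column_stack([log_f, temp])
        model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
        model.fit(X, z)

        pred = model.predict(X)
        print("R2 :", r2_score(z, pred))
        print("MSE:", mean_squared_error(z, pred))

        # Extrapolating outside the measured band (e.g., 50 Hz or 2 MHz), as the
        # paper does, is possible, but polynomials extrapolate poorly in general.
        print(model.predict(np.array([[np.log10(50), 300.0]])))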

  • Dimitrova, D. V., Redeker, G., & Hoeks, J. C. J. (2009). Did you say a BLUE banana? The prosody of contrast and abnormality in Bulgarian and Dutch. In 10th Annual Conference of the International Speech Communication Association [Interspeech 2009] (pp. 999-1002). ISCA Archive.

    Abstract

    In a production experiment on Bulgarian that was based on a previous study on Dutch [1], we investigated the role of prosody when linguistic and extra-linguistic information coincide or contradict. Speakers described abnormally colored fruits in conditions where contrastive focus and discourse relations were varied. We found that the coincidence of contrast and abnormality enhances accentuation in Bulgarian as it did in Dutch. Surprisingly, when both factors are in conflict, the prosodic prominence of abnormality often overruled focus accentuation in both Bulgarian and Dutch, though the languages also show marked differences.
  • Dimroth, C. (2002). Topics, assertions and additive words: How L2 learners get from information structure to target-language syntax. Linguistics, 40(4), 891-923. doi:10.1515/ling.2002.033.

    Abstract

    The article compares the integration of topic-related additive words at different stages of untutored L2 acquisition. Data stem from an ‘‘additive-elicitation task’’ that was designed in order to capture topic-related additive words in a context that is at the same time controlled for the underlying information structure and nondeviant from other kinds of narrative discourse. We relate the distinction between stressed and nonstressed forms of the German scope particles and adverbials auch ‘also’, noch ‘another’, wieder ‘again’, and immer noch ‘still’ to a uniform, information-structure-based principle: the stressed variants have scope over the topic information of the relevant utterances. It is then the common function of these additive words to express the additive link between the topic of the present utterance and some previous topic for which the same state of affairs is claimed to hold. This phenomenon has often been referred to as ‘‘contrastive topic,’’ but contrary to what this term suggests, these topic elements are by no means deviant from the default in coherent discourse. In the underlying information structure, the validity of some given state of affairs for the present topic must be under discussion. Topic-related additive words then express that the state of affairs indeed applies to this topic, their function therefore coming close to the function of assertion marking. While this functional correspondence goes along with the formal organization of the basic stages of untutored second-language acquisition, its expression brings linguistic constraints into conflict when the acquisition of finiteness pushes learners to reorganize their utterances according to target-language syntax.
  • Dimroth, C., & Narasimhan, B. (2009). Accessibility and topicality in children's use of word order. In J. Chandlee, M. Franchini, S. Lord, & G. M. Rheiner (Eds.), Proceedings of the 33rd annual Boston University Conference on Language Development (BULCD) (pp. 133-138).
  • Dimroth, C., & Klein, W. (2009). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 153, 5-9.
  • Dimroth, C., & Lasser, I. (Eds.). (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness [Special Issue]. Linguistics, 40(4).
  • Dimroth, C., & Lasser, I. (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness. Linguistics, 40(4), 647-651. doi:10.1515/ling.2002.027.
  • Dimroth, C., & Jordens, P. (Eds.). (2009). Functional categories in learner language. Berlin: Mouton de Gruyter.
  • Dimroth, C. (2009). L'acquisition de la finitude en allemand L2 à différents âges. AILE (Acquisition et Interaction en Langue étrangère)/LIA (Languages, Interaction, Acquisition), 1(1), 113-135.

    Abstract

    Ultimate attainment in adult second language learners often differs tremendously from the end state typically achieved by young children learning their first language (L1) or a second language (L2). The research summarized in this article concentrates on developmental steps and orders of acquisition attested in learners of different ages. Findings from a longitudinal study concerned with the acquisition of verbal morpho-syntax in German as an L2 by two young Russian learners (8 and 14 years old) are compared to findings from the acquisition of the same target language by younger children and by untutored adult learners. The study focuses on the acquisition of verbal morphology, the role of auxiliary verbs and the position of finite and non-finite verbs in relation to negation and additive scope particles.
