Publications

Displaying 301 - 400 of 2286
  • Catani, M., Craig, M. C., Forkel, S. J., Kanaan, R., Picchioni, M., Toulopoulou, T., Shergill, S., Williams, S., Murphy, D. G., & McGuire, P. (2011). Altered integrity of perisylvian language pathways in schizophrenia: Relationship to auditory hallucinations. Biological Psychiatry, 70(12), 1143-1150. doi:10.1016/j.biopsych.2011.06.013.

    Abstract

    Background: Functional neuroimaging supports the hypothesis that auditory verbal hallucinations (AVH) in schizophrenia result from altered functional connectivity between perisylvian language regions, although the extent to which AVH are also associated with an altered tract anatomy is less clear.

    Methods: Twenty-eight patients with schizophrenia (17 with a history of AVH and 11 without a history of hallucinations) and 59 age- and IQ-matched healthy controls were recruited. The number of streamlines, fractional anisotropy (FA), and mean diffusivity were measured along the length of the arcuate fasciculus and its medial and lateral components.

    Results: Patients with schizophrenia had bilateral reduction of FA relative to controls in the arcuate fasciculi (p < .001). Virtual dissection of the subcomponents of the arcuate fasciculi revealed that these reductions were specific to connections between posterior temporal and anterior regions in the inferior frontal and parietal lobe. Also, compared with controls, the reduction in FA of these tracts was highest, and bilateral, in patients with AVH, but in patients without AVH, this reduction was reported only on the left.

    Conclusions: These findings point toward a supraregional network model of AVH in schizophrenia. They support the hypothesis that there may be selective vulnerability of specific anatomical connections to posterior temporal regions in schizophrenia and that extensive bilateral damage is associated with a greater vulnerability to AVH. If confirmed by further studies, these findings may advance our understanding of the anatomical factors that are protective against AVH and predictive of a treatment response.
  • Cavaco, P., Curuklu, B., & Petersson, K. M. (2009). Artificial grammar recognition using two spiking neural networks. Frontiers in Neuroinformatics. Conference abstracts: 2nd INCF Congress of Neuroinformatics. doi:10.3389/conf.neuro.11.2009.08.096.

    Abstract

    In this paper we explore the feasibility of artificial (formal) grammar recognition (AGR) using spiking neural networks. A biologically inspired minicolumn architecture is designed as the basic computational unit. A network topography is defined based on the minicolumn architecture, here referred to as nodes, connected with excitatory and inhibitory connections. Nodes in the network represent unique internal states of the grammar’s finite state machine (FSM). Future work to improve the performance of the networks is discussed. The modeling framework developed can be used by neurophysiological research to implement network layouts and compare simulated performance characteristics to actual subject performance.
  • Chan, A., Yang, W., Chang, F., & Kidd, E. (2018). Four-year-old Cantonese-speaking children's online processing of relative clauses: A permutation analysis. Journal of Child Language, 45(1), 174-203. doi:10.1017/s0305000917000198.

    Abstract


    We report on an eye-tracking study that investigated four-year-old Cantonese-speaking children's online processing of subject and object relative clauses (RCs). Children's eye-movements were recorded as they listened to RC structures identifying a unique referent (e.g. “Can you pick up the horse that pushed the pig?”). Two RC types, classifier (CL) and ge3 RCs, were tested in a between-participants design. The two RC types differ in their syntactic analyses and frequency of occurrence, providing an important point of comparison for theories of RC acquisition and processing. A permutation analysis showed that the two structures were processed differently: CL RCs showed a significant object-over-subject advantage, whereas ge3 RCs showed the opposite effect. This study shows that children can have different preferences even for two very similar RC structures within the same language, suggesting that syntactic processing preferences are shaped by the unique features of particular constructions both within and across different linguistic typologies.
  • Chang, F., Kidd, E., & Rowland, C. F. (2013). Prediction in processing is a by-product of language learning [Commentary on Pickering & Garrod: An integrated theory of language production and comprehension]. Behavioral and Brain Sciences, 36(4), 350-351. doi:10.1017/S0140525X12001495.

    Abstract

    Both children and adults predict the content of upcoming language, suggesting that prediction is useful for learning as well as processing. We present an alternative model which can explain prediction behaviour as a by-product of language learning. We suggest that a consideration of language acquisition places important constraints on Pickering & Garrod's (P&G's) theory.
  • Chen, J. (2008). The acquisition of verb compounding in Mandarin Chinese. PhD Thesis, Vrije Universiteit Amsterdam, Amsterdam.

    Abstract

    Seeing someone breaking a stick in two, an English speaker typically describes the event with the verb break, but a Mandarin speaker has to say bai1-duan4 ‘bend-be.broken’, a verb compound composed of two free verbs, each encoding one aspect of the breaking event. Verb compounding represents a typical and productive way to describe events of motion (e.g., zou3-chu1 ‘walk-exit’) and state change (e.g., bai1-duan4 ‘bend-be.broken’), the most common types of events that children of all languages are exposed to from an early age. Since languages vary in how events are linguistically encoded and categorized, the development of verb compounding provides a window to investigate the acquisition of form and meaning mapping for highly productive but constrained constructions and the interaction between children’s linguistic development and cognitive development. The theoretical analysis of verb compounds has been one of the central issues in Chinese linguistics, but the acquisition of this grammatical system has never been systematically studied. This dissertation constitutes the first in-depth study of this topic. It analyzes speech data from two longitudinal corpora as well as data collected from five experiments on the production and comprehension of verb compounds by children in P. R. China. It provides a description of the developmental process and unravels the complex learning tasks from the perspectives of language production, comprehension, event categorization, and the interface of semantics and syntax. In showing how first-language learners acquire the Mandarin-specific way of representing and encoding causal events and motion events, this study has significance both for studies of language acquisition and for studies of cognition and event construal.
  • Chen, C.-h., Zhang, Y., & Yu, C. (2018). Learning object names at different hierarchical levels using cross-situational statistics. Cognitive Science, 42(S2), 591-605. doi:10.1111/cogs.12516.

    Abstract

    Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input.
  • Chen, X. S., Penny, D., & Collins, L. J. (2011). Characterization of RNase MRP RNA and novel snoRNAs from Giardia intestinalis and Trichomonas vaginalis. BMC Genomics, 12, 550. doi:10.1186/1471-2164-12-550.

    Abstract

    Background: Eukaryotic cells possess a complex network of RNA machineries that function in RNA processing and cellular regulation, which includes transcription, translation, silencing, editing and epigenetic control. Studies of model organisms have shown that many ncRNAs of the RNA-infrastructure are highly conserved, but little is known from non-model protists. In this study we have conducted a genome-scale survey of medium-length ncRNAs from the protozoan parasites Giardia intestinalis and Trichomonas vaginalis. Results: We have identified the previously ‘missing’ Giardia RNase MRP RNA, which is a key ribozyme involved in pre-rRNA processing. We have also uncovered 18 new H/ACA box snoRNAs, expanding our knowledge of the H/ACA family of snoRNAs. Conclusions: Results indicate that Giardia intestinalis and Trichomonas vaginalis, like their distant multicellular relatives, contain a rich infrastructure of RNA-based processing. From here we can investigate the evolution of RNA processing networks in eukaryotes.
  • Chen, X. S., White, W. T. J., Collins, L. J., & Penny, D. (2008). Computational identification of four spliceosomal snRNAs from the deep-branch eukaryote Giardia intestinalis. PLoS One, 3(8), e3106. doi:10.1371/journal.pone.0003106.

    Abstract

    RNAs processing other RNAs is very general in eukaryotes, but it is not clear to what extent it is ancestral to eukaryotes. Here we focus on pre-mRNA splicing, one of the most important RNA-processing mechanisms in eukaryotes. In most eukaryotes splicing is predominantly catalysed by the major spliceosome complex, which consists of five uridine-rich small nuclear RNAs (U-snRNAs) and over 200 proteins in humans. Three major spliceosomal introns have been found experimentally in Giardia; one Giardia U-snRNA (U5) and a number of spliceosomal proteins have also been identified. However, because of the low sequence similarity between the Giardia ncRNAs and those of other eukaryotes, the other U-snRNAs of Giardia had not been found. Using two computational methods, candidates for Giardia U1, U2, U4 and U6 snRNAs were identified in this study and shown by RT-PCR to be expressed. We found that identifying a U2 candidate helped identify U6 and U4 based on interactions between them. Secondary structural modelling of the Giardia U-snRNA candidates revealed typical features of eukaryotic U-snRNAs. We demonstrate a successful approach to combine computational and experimental methods to identify expected ncRNAs in a highly divergent protist genome. Our findings reinforce the conclusion that spliceosomal small-nuclear RNAs existed in the last common ancestor of eukaryotes.
  • Chen, A., & Lai, V. T. (2011). Comb or coat: The role of intonation in online reference resolution in a second language. In W. Zonneveld, & H. Quené (Eds.), Sound and Sounds. Studies presented to M.E.H. (Bert) Schouten on the occasion of his 65th birthday (pp. 57-68). Utrecht: UiL OTS.

    Abstract

    In spoken sentence processing, listeners do not wait until the end of a sentence to decipher what message is conveyed. Rather, they make predictions on the most plausible interpretation at every possible point in the auditory signal on the basis of all kinds of linguistic information (e.g., Eberhard et al. 1995; Altmann and Kamide 1999, 2007). Intonation is one such kind of linguistic information that is efficiently used in spoken sentence processing. The evidence comes primarily from recent work on online reference resolution conducted in the visual-world eyetracking paradigm (e.g., Tanenhaus et al. 1995). In this paradigm, listeners are shown a visual scene containing a number of objects and listen to one or two short sentences about the scene. They are asked to either inspect the visual scene while listening or to carry out the action depicted in the sentence(s) (e.g., 'Touch the blue square'). Listeners' eye movements directed to each object in the scene are monitored and time-locked to pre-defined time points in the auditory stimulus. Their predictions on the upcoming referent, and the sources for those predictions in the auditory signal, are examined by analysing fixations to the relevant objects in the visual scene before the acoustic information on the referent is available.
  • Chen, A., & Mennen, I. (2008). Encoding interrogativity intonationally in a second language. In P. Barbosa, S. Madureira, & C. Reis (Eds.), Proceedings of the 4th International Conferences on Speech Prosody (pp. 513-516). Campinas: Editora RG/CNPq.

    Abstract

    This study investigated how untutored learners encode interrogativity intonationally in a second language. Questions produced in free conversation were selected from longitudinal data of four untutored Italian learners of English. The questions were mostly wh-questions (WQs) and declarative questions (DQs). We examined the use of three cross-linguistically attested question cues: final rise, high peak and late peak. It was found that across learners the final rise occurred more frequently in DQs than in WQs. This is in line with the Functional Hypothesis whereby less syntactically-marked questions are more intonationally marked. However, the use of peak height and alignment is less consistent. The peak of the nuclear pitch accent was not necessarily higher and later in DQs than in WQs. The difference in learners’ exploitation of these cues can be explained by the relative importance of a question cue in the target language.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS.) (pp. 1069-1072). Rundle Mall, SA, Austr.: Causal Productions Pty.
  • Chen, X. S., Collins, L. J., Biggs, P. J., & Penny, D. (2009). High throughput genome-wide survey of small RNAs from the parasitic protists Giardia intestinalis and Trichomonas vaginalis. Genome Biology and Evolution, 1, 165-175. doi:10.1093/gbe/evp017.

    Abstract

    RNA interference (RNAi) is a set of mechanisms which regulate gene expression in eukaryotes. Key elements of RNAi are small sense and antisense RNAs from 19 to 26 nucleotides generated from double-stranded RNAs. miRNAs are a major type of RNAi-associated small RNAs and are found in most eukaryotes studied to date. To investigate whether small RNAs associated with RNAi appear to be present in all eukaryotic lineages, and therefore present in the ancestral eukaryote, we studied two deep-branching protozoan parasites, Giardia intestinalis and Trichomonas vaginalis. Little is known about endogenous small RNAs involved in RNAi of these organisms. Using Illumina Solexa sequencing and genome-wide analysis of small RNAs from these distantly related deep-branching eukaryotes, we identified 10 strong miRNA candidates from Giardia and 11 from Trichomonas. We also found evidence of Giardia siRNAs potentially involved in the expression of variant-specific-surface proteins. In addition, 8 new snoRNAs from Trichomonas are identified. Our results indicate that miRNAs are likely to be general in ancestral eukaryotes, and therefore are likely to be a universal feature of eukaryotes.
  • Chen, A. (2009). Intonation and reference maintenance in Turkish learners of Dutch: A first insight. AILE - Acquisition et Interaction en Langue Etrangère, 28(2), 67-91.

    Abstract

    This paper investigates L2 learners’ use of intonation in reference maintenance in comparison to native speakers at three longitudinal points. Nominal referring expressions were elicited from two untutored Turkish learners of Dutch and five native speakers of Dutch via a film retelling task, and were analysed in terms of pitch span and word duration. Effects of two types of change in information states were examined, between new and given and between new and accessible. We found native-like use of word duration in both types of change early on but different performances between learners and development over time in one learner in the use of pitch span. Further, the use of morphosyntactic devices had different effects on the two learners. The inter-learner differences and late systematic use of pitch span, in spite of similar use of pitch span in learners’ L1 and L2, suggest that learning may play a role in the acquisition of intonation as a device for reference maintenance.
  • Chen, A. (2009). Perception of paralinguistic intonational meaning in a second language. Language Learning, 59(2), 367-409.
  • Chen, A. (2003). Reaction time as an indicator to discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Chen, A. (2011). The developmental path to phonological focus-marking in Dutch. In S. Frota, E. Gorka, & P. Prieto (Eds.), Prosodic categories: Production, perception and comprehension (pp. 93-109). Dordrecht: Springer.

    Abstract

    This paper gives an overview of recent studies on the use of phonological cues (accent placement and choice of accent type) to mark focus in Dutch-speaking children aged between 1;9 and 8;10. It is argued that learning to use phonological cues to mark focus is a gradual process. In the light of the findings in these studies, a first proposal is put forward on the developmental path to adult-like phonological focus-marking in Dutch.
  • Chen, A. (2009). The phonetics of sentence-initial topic and focus in adult and child Dutch. In M. Vigário, S. Frota, & M. Freitas (Eds.), Phonetics and Phonology: Interactions and interrelations (pp. 91-106). Amsterdam: Benjamins.
  • Chen, A. (2011). What’s in a rise: Evidence for an off-ramp analysis of Dutch Intonation. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 448-451). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Pitch accents are analysed differently in an on-ramp analysis (i.e. ToBI) and an off-ramp analysis (e.g. Transcription of Dutch intonation - ToDI), two competing approaches in the Autosegmental Metrical tradition. A case in point is the pre-final high rise. A pre-final rise is analysed as H* in ToBI but is phonologically ambiguous between H* or H*L (a (rise-)fall) in ToDI. This is because in ToDI, the L tone of a pre-final H*L can be realised in the following unaccented words and both H* and H*L can show up as a high rise in the accented word. To find out whether there is a two-way phonological contrast in pre-final high rises in Dutch, we examined the distribution of phonologically ambiguous high rises (H*(L)) and their phonetic realisation in different information structural conditions (topic vs. focus), compared to phonologically unambiguous H* and H*L. Results showed that there is indeed a H*L vs. H* contrast in pre-final high rises in Dutch and that H*L is realised as H*(L) when sonorant material is limited in the accented word. These findings provide new evidence for an off-ramp analysis of Dutch intonation and have far-reaching implications for analysis of intonation across languages.
  • Chen, A. (2011). Tuning information packaging: Intonational realization of topic and focus in child Dutch. Journal of Child Language, 38, 1055-1083. doi:10.1017/S0305000910000541.

    Abstract

    This study examined how four- to five-year-olds and seven- to eight-year-olds used intonation (accent placement and accent type) to encode topic and focus in Dutch. Naturally spoken declarative sentences with either sentence-initial topic and sentence-final focus or sentence-initial focus and sentence-final topic were elicited via a picture-matching game. Results showed that the four- to five-year-olds were adult-like in topic-marking, but were not yet fully adult-like in focus-marking, in particular, in the use of accent type in sentence-final focus (i.e. showing no preference for H*L). Between age five and seven, the use of accent type was further developed. In contrast to the four- to five-year-olds, the seven- to eight-year-olds showed a preference for H*L in sentence-final focus. Furthermore, they used accent type to distinguish sentence-initial focus from sentence-initial topic in addition to phonetic cues.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t d/ as a function of prosodic position, which may be interpreted as being due to prosodically-conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Cho, T., & McQueen, J. M. (2011). Perceptual recovery from consonant-cluster simplification using language-specific phonological knowledge. Journal of Psycholinguistic Research, 40, 253-274. doi:10.1007/s10936-011-9168-0.

    Abstract

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically-present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
  • Cholin, J., & Levelt, W. J. M. (2009). Effects of syllable preparation and syllable frequency in speech production: Further evidence for syllabic units at a post-lexical level. Language and Cognitive Processes, 24, 662-684. doi:10.1080/01690960802348852.

    Abstract

    In the current paper, we asked at what level in the speech planning process speakers retrieve stored syllables. There is evidence that syllable structure plays an essential role in the phonological encoding of words (e.g., online syllabification and phonological word formation). There is also evidence that syllables are retrieved as whole units. However, findings that clearly pinpoint these effects to specific levels in speech planning are scarce. We used a naming variant of the implicit priming paradigm to contrast voice onset latencies for frequency-manipulated disyllabic Dutch pseudo-words. While prior implicit priming studies only manipulated the item's form and/or syllable structure overlap we introduced syllable frequency as an additional factor. If the preparation effect for syllables obtained in the implicit priming paradigm proceeds beyond phonological planning, i.e., includes the retrieval of stored syllables, then the preparation effect should differ for high- and low frequency syllables. The findings reported here confirm this prediction: Low-frequency syllables benefit significantly more from the preparation than high-frequency syllables. Our findings support the notion of a mental syllabary at a post-lexical level, between the levels of phonological and phonetic encoding.
  • Cholin, J., Dell, G. S., & Levelt, W. J. M. (2011). Planning and articulation in incremental word production: Syllable-frequency effects in English. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 109-122. doi:10.1037/a0021322.

    Abstract

    We investigated the role of syllables during speech planning in English by measuring syllable-frequency effects. So far, syllable-frequency effects in English have not been reported. English has poorly defined syllable boundaries, and thus the syllable might not function as a prominent unit in English speech production. Speakers produced either monosyllabic (Experiment 1) or disyllabic (Experiment 2–4) pseudowords as quickly as possible in response to symbolic cues. Monosyllabic targets consisted of either high- or low-frequency syllables, whereas disyllabic items contained either a 1st or 2nd syllable that was frequency-manipulated. Significant syllable-frequency effects were found in all experiments. Whereas previous findings for disyllables in Dutch and Spanish—languages with relatively clear syllable boundaries—showed effects of a frequency manipulation on 1st but not 2nd syllables, in our study English speakers were sensitive to the frequency of both syllables. We interpret this sensitivity as an indication that the production of English has more extensive planning scopes at the interface of phonetic encoding and articulation.
  • Christoffels, I. K., Ganushchak, L. Y., & Koester, D. (2013). Language conflict in translation; An ERP study of translation production. Journal of Cognitive Psychology, 25, 646-664. doi:10.1080/20445911.2013.821127.

    Abstract

    Although most bilinguals can translate with relative ease, the underlying neuro-cognitive processes are poorly understood. Using event-related brain potentials (ERPs) we investigated the temporal course of word translation. Participants translated words from and to their first (L1, Dutch) and second (L2, English) language while ERPs were recorded. Interlingual homographs (IHs) were included to introduce language conflict. IHs share orthographic form but have different meanings in L1 and L2 (e.g., room in Dutch refers to cream). Results showed that the brain distinguished between translation directions as early as 200 ms after word presentation: the P2 amplitudes were more positive in the L1-to-L2 translation direction. The N400 was also modulated by translation direction, with more negative amplitudes in the L2-to-L1 translation direction. Furthermore, the IHs were translated more slowly, induced more errors, and elicited more negative N400 amplitudes than control words. In a naming experiment, participants read aloud the same words in L1 or L2 while ERPs were recorded. Results showed no effect of either IHs or language, suggesting that task schemas may be crucially related to language control in translation. Furthermore, translation appears to involve conceptual processing in both translation directions, and the task goal appears to influence how words are processed.
  • Chu, M., & Kita, S. (2009). Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 591-595). Austin, TX: Cognitive Science Society.

    Abstract

    When we speak, we spontaneously produce gestures (co-speech gestures). Co-speech gestures and speech production are closely interlinked. However, the exact nature of the link is still under debate. To address the question of whether co-speech gestures originate from the speech production system or from a system independent of speech production, the present study examined the relationship between co-speech and co-thought gestures. Co-thought gestures, produced during silent thinking without speaking, presumably originate from a system independent of the speech production processes. We found a positive correlation between the production frequency of co-thought and co-speech gestures, regardless of the communicative function that co-speech gestures might serve. Therefore, we suggest that co-speech gestures and co-thought gestures originate from a common system that is independent of the speech production processes.
  • Chu, M., & Kita, S. (2011). Microgenesis of gestures during mental rotation tasks recapitulates ontogenesis. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 267-276). Amsterdam: John Benjamins.

    Abstract

    People spontaneously produce gestures when they solve problems or explain their solutions to a problem. In this chapter, we will review and discuss evidence on the role of representational gestures in problem solving. The focus will be on our recent experiments (Chu & Kita, 2008), in which we used Shepard-Metzler type of mental rotation tasks to investigate how spontaneous gestures revealed the development of problem solving strategy over the course of the experiment and what role gesture played in the development process. We found that when solving novel problems regarding the physical world, adults go through similar symbolic distancing (Werner & Kaplan, 1963) and internalization (Piaget, 1968) processes as those that occur during young children’s cognitive development and gesture facilitates such processes.
  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Chu, M., & Kita, S. (2011). The nature of gestures’ beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140, 102-116. doi:10.1037/a0021790.

    Abstract

    Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In the study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We conclude that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2008). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., & Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism & Related Disorders, 17, 753-756. doi:10.1016/j.parkreldis.2011.08.001.

    Abstract

    Methods
    The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area.
    Results
    Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced.
    Conclusions
    This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD.
  • Clifton, C. J., Meyer, A. S., Wurm, L. H., & Treiman, R. (2013). Language comprehension and production. In A. F. Healy, & R. W. Proctor (Eds.), Handbook of Psychology, Volume 4, Experimental Psychology. 2nd Edition (pp. 523-547). Hoboken, NJ: Wiley.

    Abstract

    In this chapter, we survey the processes of recognizing and producing words and of understanding and creating sentences. Theory and research on these topics have been shaped by debates about how various sources of information are integrated in these processes, and about the role of language structure, as analyzed in the discipline of linguistics. In this chapter, we describe current views of fluent language users' comprehension of spoken and written language and their production of spoken language. We review what we consider to be the most important findings and theories in psycholinguistics, returning again and again to the questions of modularity and the importance of linguistic knowledge. Although we acknowledge the importance of social factors in language use, our focus is on core processes such as parsing and word retrieval that are not necessarily affected by such factors. We do not have space to say much about the important fields of developmental psycholinguistics, which deals with the acquisition of language by children, or applied psycholinguistics, which encompasses such topics as language disorders and language teaching. Although we recognize that there is burgeoning interest in the measurement of brain activity during language processing and how language is represented in the brain, space permits only occasional pointers to work in neuropsychology and the cognitive neuroscience of language. For treatment of these topics, and others, the interested reader could begin with two recent handbooks of psycholinguistics (Gaskell, 2007; Traxler & Gernsbacher, 2006) and a handbook of cognitive neuroscience (Gazzaniga, 2004).
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Coenen, J., & Klein, W. (1992). The acquisition of Dutch. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 189-224). Amsterdam: Benjamins.
  • Cohen, E. (2011). Broadening the critical perspective on supernatural punishment theories. Religion, Brain & Behavior, 1(1), 70-72. doi:10.1080/2153599X.2011.558709.
  • Cohen, E., Burdett, E., Knight, N., & Barrett, J. (2011). Cross-cultural similarities and differences in person-body reasoning: Experimental evidence from the United Kingdom and Brazilian Amazon. Cognitive Science, 35, 1282-1304. doi:10.1111/j.1551-6709.2011.01172.x.

    Abstract

    We report the results of a cross-cultural investigation of person-body reasoning in the United Kingdom and northern Brazilian Amazon (Marajó Island). The study provides evidence that directly bears upon divergent theoretical claims in cognitive psychology and anthropology, respectively, on the cognitive origins and cross-cultural incidence of mind-body dualism. In a novel reasoning task, we found that participants across the two sample populations parsed a wide range of capacities similarly in terms of the capacities’ perceived anchoring to bodily function. Patterns of reasoning concerning the respective roles of physical and biological properties in sustaining various capacities did vary between sample populations, however. Further, the data challenge prior ad-hoc categorizations in the empirical literature on the developmental origins of and cognitive constraints on psycho-physical reasoning (e.g., in afterlife concepts). We suggest cross-culturally validated categories of “Body Dependent” and “Body Independent” items for future developmental and cross-cultural research in this emerging area.
  • Cohen, E. (2011). “Out with ‘Religion’: A novel framing of the religion debate”. In W. Williams (Ed.), Religion and rights: The Oxford Amnesty Lectures 2008. Manchester: Manchester University Press.
  • Cohen, E., & Barrett, J. L. (2011). In search of "Folk anthropology": The cognitive anthropology of the person. In J. W. Van Huysteen, & E. Wiebe (Eds.), In search of self: Interdisciplinary perspectives on personhood (pp. 104-124). Grand Rapids, CA: Wm. B. Eerdmans Publishing Company.
  • Cohen, E., & Haun, D. B. M. (2013). The development of tag-based cooperation via a socially acquired trait. Evolution and Human Behavior, 34, 230-235. doi:10.1016/j.evolhumbehav.2013.02.001.

    Abstract

    Recent theoretical models have demonstrated that phenotypic traits can support the non-random assortment of cooperators in a population, thereby permitting the evolution of cooperation. In these “tag-based models”, cooperators modulate cooperation according to an observable and hard-to-fake trait displayed by potential interaction partners. Socially acquired vocalizations in general, and speech accent among humans in particular, are frequently proposed as hard to fake and hard to hide traits that display sufficient cross-populational variability to reliably guide such social assortment in fission–fusion societies. Adults’ sensitivity to accent variation in social evaluation and decisions about cooperation is well-established in sociolinguistic research. The evolutionary and developmental origins of these biases are largely unknown, however. Here, we investigate the influence of speech accent on 5–10-year-old children's developing social and cooperative preferences across four Brazilian Amazonian towns. Two sites have a single dominant accent, and two sites have multiple co-existing accent varieties. We found that children's friendship and resource allocation preferences were guided by accent only in sites characterized by accent heterogeneity. Results further suggest that this may be due to a more sensitively tuned ear for accent variation. The demonstrated local-accent preference did not hold in the face of personal cost. Results suggest that mechanisms guiding tag-based assortment are likely tuned according to locally relevant tag-variation.
  • Collins, L. J., & Chen, X. S. (2009). Ancestral RNA: The RNA biology of the eukaryotic ancestor. RNA Biology, 6(5), 495-502. doi:10.4161/rna.6.5.9551.

    Abstract

    Our knowledge of RNA biology within eukaryotes has exploded over the last five years. Within new research we see that some features that were once thought to be part of multicellular life have now been identified in several protist lineages. Hence, it is timely to ask which features of eukaryote RNA biology are ancestral to all eukaryotes. We focus on RNA-based regulation and epigenetic mechanisms that use small regulatory ncRNAs and long ncRNAs, to highlight some of the many questions surrounding eukaryotic ncRNA evolution.
  • Collins, L. J., Schönfeld, B., & Chen, X. S. (2011). The epigenetics of non-coding RNA. In T. Tollefsbol (Ed.), Handbook of epigenetics: the new molecular and medical genetics (pp. 49-61). London: Academic.

    Abstract

    Non-coding RNAs (ncRNAs) have been implicated in the epigenetic marking of many genes. Short regulatory ncRNAs, including miRNAs, siRNAs, piRNAs and snoRNAs, as well as long ncRNAs such as Xist and Air are discussed in light of recent research of mechanisms regulating chromatin marking and RNA editing. The topic is expanding rapidly so we will concentrate on examples to highlight the main mechanisms, including simple mechanisms where complementary binding affects methylation or RNA sites. However, other examples, especially with the long ncRNAs, highlight very complex regulatory systems with multiple layers of ncRNA control.
  • Connell, L., Cai, Z. G., & Holler, J. (2013). Do you see what I'm singing? Visuospatial movement biases pitch perception. Brain and Cognition, 81, 124-130. doi:10.1016/j.bandc.2012.09.005.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 886-899. doi:10.1037/0278-7393.34.4.886.

    Abstract

    Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.
  • Cooke, M., & Scharenborg, O. (2008). The Interspeech 2008 consonant challenge. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1765-1768). ISCA Archive.

    Abstract

    Listeners outperform automatic speech recognition systems at every level, including the very basic level of consonant identification. What is not clear is where the human advantage originates. Does the fault lie in the acoustic representations of speech or in the recognizer architecture, or in a lack of compatibility between the two? Many insights can be gained by carrying out a detailed human-machine comparison. The purpose of the Interspeech 2008 Consonant Challenge is to promote focused comparisons on a task involving intervocalic consonant identification in noise, with all participants using the same training and test data. This paper describes the Challenge, listener results and baseline ASR performance.
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.
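    The core of the Savitzky-Golay filter (SGF) routine described above — smooth the resting-state power spectrum, then locate the alpha-band maximum — can be sketched in a few lines of Python. This is a minimal illustration using SciPy's `savgol_filter`; the function name, defaults, and band limits are assumptions for illustration, not the authors' published implementation (which additionally derives the center of gravity and handles ambiguous or split peaks).

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def estimate_paf(freqs, psd, alpha_band=(7.0, 13.0), window=11, polyorder=5):
        """Estimate peak alpha frequency (PAF) from a resting-state power spectrum.

        Hypothetical sketch: smooth the spectrum with a Savitzky-Golay filter,
        then return the frequency of the maximum within the alpha band.
        """
        # Polynomial smoothing suppresses spurious local maxima in the spectrum
        smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
        # Restrict the peak search to the alpha band
        mask = (freqs >= alpha_band[0]) & (freqs <= alpha_band[1])
        return freqs[mask][np.argmax(smoothed[mask])]
    ```

    On a spectrum with a 1/f background and a spectral bump near 10 Hz, the routine returns a PAF close to the bump's center; in practice one would also check that a genuine alpha peak exists before trusting the estimate.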
  • Corps, R. E. (2018). Coordinating utterances during conversational dialogue: The role of content and timing predictions. PhD Thesis, The University of Edinburgh, Edinburgh.
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse Processes, 55(2, SI), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance?, (2) Can listeners buffer their prepared response?, and (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cousminer, D. L., Berry, D. J., Timpson, N. J., Ang, W., Thiering, E., Byrne, E. M., Taal, H. R., Huikari, V., Bradfield, J. P., Kerkhof, M., Groen-Blokhuis, M. M., Kreiner-Møller, E., Marinelli, M., Holst, C., Leinonen, J. T., Perry, J. R. B., Surakka, I., Pietiläinen, O., Kettunen, J., Anttila, V., Kaakinen, M., Sovio, U., Pouta, A., Das, S., Lagou, V., Power, C., Prokopenko, I., Evans, D. M., Kemp, J. P., St Pourcain, B., Ring, S., Palotie, A., Kajantie, E., Osmond, C., Lehtimäki, T., Viikari, J. S., Kähönen, M., Warrington, N. M., Lye, S. J., Palmer, L. J., Tiesler, C. M. T., Flexeder, C., Montgomery, G. W., Medland, S. E., Hofman, A., Hakonarson, H., Guxens, M., Bartels, M., Salomaa, V., Murabito, J. M., Kaprio, J., Sørensen, T. I. A., Ballester, F., Bisgaard, H., Boomsma, D. I., Koppelman, G. H., Grant, S. F. A., Jaddoe, V. W. V., Martin, N. G., Heinrich, J., Pennell, C. E., Raitakari, O. T., Eriksson, J. G., Smith, G. D., Hyppönen, E., Järvelin, M.-R., McCarthy, M. I., Ripatti, S., Widén, E., Consortium ReproGen, & Consortium Early Growth Genetics (EGG) (2013). Genome-wide association and longitudinal analyses reveal genetic loci linking pubertal height growth, pubertal timing and childhood adiposity. Human Molecular Genetics, 22(13), 2735-2747. doi:10.1093/hmg/ddt104.

    Abstract

    The pubertal height growth spurt is a distinctive feature of childhood growth reflecting both the central onset of puberty and local growth factors. Although little is known about the underlying genetics, growth variability during puberty correlates with adult risks for hormone-dependent cancer and adverse cardiometabolic health. The only gene so far associated with pubertal height growth, LIN28B, pleiotropically influences childhood growth, puberty and cancer progression, pointing to shared underlying mechanisms. To discover genetic loci influencing pubertal height and growth and to place them in context of overall growth and maturation, we performed genome-wide association meta-analyses in 18 737 European samples utilizing longitudinally collected height measurements. We found significant associations (P < 1.67 × 10(-8)) at 10 loci, including LIN28B. Five loci associated with pubertal timing, all impacting multiple aspects of growth. In particular, a novel variant correlated with expression of MAPK3, and associated both with increased prepubertal growth and earlier menarche. Another variant near ADCY3-POMC associated with increased body mass index, reduced pubertal growth and earlier puberty. Whereas epidemiological correlations suggest that early puberty marks a pathway from rapid prepubertal growth to reduced final height and adult obesity, our study shows that individual loci associating with pubertal growth have variable longitudinal growth patterns that may differ from epidemiological observations. Overall, this study uncovers part of the complex genetic architecture linking pubertal height growth, the timing of puberty and childhood obesity and provides new information to pinpoint processes linking these traits.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties [Inferences from eye movements: The influence of the connective 'omdat' (because) on making causal inferences]. Gramma/TTT, 9, 141-156.
  • Cozijn, R., Noordman, L. G., & Vonk, W. (2011). Propositional integration and world-knowledge inference: Processes in understanding because sentences. Discourse Processes, 48, 475-500. doi:10.1080/0163853X.2011.594421.

    Abstract

    The issue addressed in this study is whether propositional integration and world-knowledge inference can be distinguished as separate processes during the comprehension of Dutch omdat (because) sentences. “Propositional integration” refers to the process by which the reader establishes the type of relation between two clauses or sentences. “World-knowledge inference” refers to the process of deriving the general causal relation and checking it against the reader's world knowledge. An eye-tracking experiment showed that the presence of the conjunction speeds up the processing of the words immediately following the conjunction, and slows down the processing of the sentence-final words in comparison to the absence of the conjunction. A second, subject-paced reading experiment replicated the reading time findings, and the results of a verification task confirmed that the effect at the end of the sentence was due to inferential processing. The findings provide evidence for integrative processing and inferential processing, respectively.
  • Cozijn, R., Commandeur, E., Vonk, W., & Noordman, L. G. (2011). The time course of the use of implicit causality information in the processing of pronouns: A visual world paradigm study. Journal of Memory and Language, 64, 381-403. doi:10.1016/j.jml.2011.01.001.

    Abstract

    Several theoretical accounts have been proposed with respect to how quickly the implicit causality verb bias affects the understanding of sentences such as “John beat Pete at the tennis match, because he had played very well”. They can be considered as instances of two viewpoints: the focusing account and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task to resolve the ambiguous pronoun. To exclude strategic processing, in the second experiment the task was to answer simple comprehension questions, and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O. A., Hanke, T., Efthimiou, E., Zwitserlood, I., & Thoutenhooft, E. (Eds.). (2008). Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages. Paris: ELDA.
  • Crasborn, O., & Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora (pp. 39-43).

    Abstract

    The multimedia annotation tool ELAN was enhanced within the Corpus NGT project by a number of new and improved functions. Most of these functions were not specific to working with sign language video data, and can readily be used for other annotation purposes as well. Their direct utility for working with large amounts of annotation files during the development and use of the Corpus NGT project is what unites the various functions, which are described in this paper. In addition, we aim to characterise future developments that will be needed in order to work efficiently with larger amounts of annotation files, for which a closer integration with the use and display of metadata is foreseen.
  • Crasborn, O. A., & Zwitserlood, I. (2008). The Corpus NGT: An online corpus for professionals and laymen. In O. A. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood, & E. Thoutenhooft (Eds.), Construction and Exploitation of Sign Language Corpora. (pp. 44-49). Paris: ELDA.

    Abstract

    The Corpus NGT is an ambitious effort to record and archive video data from Sign Language of the Netherlands (Nederlandse Gebarentaal: NGT), guaranteeing online access to all interested parties and long-term availability. Data are collected from 100 native signers of NGT of different ages and from various regions in the country. Parts of these data are annotated and/or translated; the annotations and translations are part of the corpus. The Corpus NGT is accommodated in the Browsable Corpus based at the Max Planck Institute for Psycholinguistics. In this paper we share our experiences in data collection, video processing, annotation/translation and licensing involved in building the corpus.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules nor allow for idiosyncrasies in their interpretation.
  • Cristia, A. (2008). Cue weighting at different ages. Purdue Linguistics Association Working Papers, 1, 87-105.
  • Cristia, A., McGuire, G. L., Seidl, A., & Francis, A. L. (2011). Effects of the distribution of acoustic cues on infants' perception of sibilants. Journal of Phonetics, 39, 388-402. doi:10.1016/j.wocn.2011.02.004.

    Abstract

    A current theoretical view proposes that infants converge on the speech categories of their native language by attending to frequency distributions that occur in the acoustic input. To date, the only empirical support for this statistical learning hypothesis comes from studies where a single, salient dimension was manipulated. Additional evidence is sought here, by introducing a less salient pair of categories supported by multiple cues. We exposed English-learning infants to a multi-cue bidimensional grid ranging between retroflex and alveolopalatal sibilants in prevocalic position. This contrast is substantially more difficult according to previous cross-linguistic and perceptual research, and its perception is driven by cues in both the consonantal and the following vowel portions. Infants heard one of two distributions (flat, or with two peaks), and were tested with sounds varying along only one dimension. Infants' responses differed depending on the familiarization distribution, and their performance was equally good for the vocalic and the frication dimension, lending some support to the statistical hypothesis even in this harder learning situation. However, learning was restricted to the retroflex category, and a control experiment showed that lack of learning for the alveolopalatal category was not due to the presence of a competing category. Thus, these results contribute fundamental evidence on the extent and limitations of the statistical hypothesis as an explanation for infants' perceptual tuning.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A. (2011). Fine-grained variation in caregivers' speech predicts their infants' discrimination. Journal of the Acoustical Society of America, 129, 3271-3280. doi:10.1121/1.3562562.

    Abstract

    Within the debate on the mechanisms underlying infants’ perceptual acquisition, one hypothesis proposes that infants’ perception is directly affected by the acoustic implementation of sound categories in the speech they hear. In consonance with this view, the present study shows that individual variation in fine-grained, subphonemic aspects of the acoustic realization of /s/ in caregivers’ speech predicts infants’ discrimination of this sound from the highly similar /∫/, suggesting that learning based on acoustic cue distributions may indeed drive natural phonological acquisition.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A., Seidl, A., & Gerken, L. (2011). Learning classes of sounds in infancy. University of Pennsylvania Working Papers in Linguistics, 17, 9.

    Abstract

    Adults' phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example, the result of innately guided learning; alternatively, it could be due to human learners’ experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which does not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also follow in a cohort of 4-month-olds. Unlike the older infants, 4-month-olds were able to generalize both groups, suggesting that the perceptual bias that makes phonotactic constraints on natural classes easier to learn is likely the effect of experience.
  • Cristia, A., & Seidl, A. (2008). Is infants' learning of sound patterns constrained by phonological features? Language Learning and Development, 4, 203-227. doi:10.1080/15475440802143109.

    Abstract

    Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Halle, 1968). Such abstract features may be present in the developing grammar either because they are part of a Universal Grammar included in the genetic endowment of humans (e.g., Hale, Kissock, & Reiss, 2006), or plausibly because infants induce features from their linguistic experience (e.g., Mielke, 2004). A first experiment tested 7-month-old infants' learning of an artificial grammar pattern involving either a set of sounds defined by a phonological feature, or a set of sounds that cannot be described with a single feature—an “arbitrary” set. Infants were able to induce the constraint and generalize it to a novel sound only for the set that shared the phonological feature. A second study showed that infants' inability to learn the arbitrary grouping was not due to their inability to encode a constraint on some of the sounds involved.
  • Cristia, A., Seidl, A., & Francis, A. L. (2011). Phonological features in infancy. In G. N. Clements, & R. Ridouane (Eds.), Where do phonological contrasts come from? Cognitive, physical and developmental bases of phonological features (pp. 303-326). Amsterdam: Benjamins.

    Abstract

    Features serve two main functions in the phonology of languages: they encode the distinction between pairs of contrastive phonemes (distinctive function); and they delimit sets of sounds that participate in phonological processes and patterns (classificatory function). We summarize evidence from a variety of experimental paradigms bearing on the functional relevance of phonological features. This research shows that while young infants may use abstract phonological features to learn sound patterns, this ability becomes more constrained with development and experience. Furthermore, given the lack of overlap between the ability to learn a pair of words differing in a single feature and the ability to learn sound patterns based on features, we argue for the separation of the distinctive and the classificatory function.
  • Cristia, A., Ganesh, S., Casillas, M., & Ganapathy, S. (2018). Talker diarization in the wild: The case of child-centered daylong audio-recordings. In Proceedings of Interspeech 2018 (pp. 2583-2587). doi:10.21437/Interspeech.2018-2078.

    Abstract

    Speaker diarization (answering 'who spoke when') is a widely researched subject within speech technology. Numerous experiments have been run on datasets built from broadcast news, meeting data, and call centers—the task sometimes appears close to being solved. Much less work has begun to tackle the hardest diarization task of all: spontaneous conversations in real-world settings. Such diarization would be particularly useful for studies of language acquisition, where researchers investigate the speech children produce and hear in their daily lives. In this paper, we study audio gathered with a recorder worn by small children as they went about their normal days. As a result, each child was exposed to different acoustic environments with a multitude of background noises and a varying number of adults and peers. The inconsistency of speech and noise within and across samples poses a challenging task for speaker diarization systems, which we tackled via retraining and data augmentation techniques. We further studied sources of structured variation across raw audio files, including the impact of speaker type distribution, proportion of speech from children, and child age on diarization performance. We discuss the extent to which these findings might generalize to other samples of speech in the wild.
  • Cristia, A., & Seidl, A. (2011). Sensitivity to prosody at 6 months predicts vocabulary at 24 months. In N. Danis, K. Mesh, & H. Sung (Eds.), BUCLD 35: Proceedings of the 35th annual Boston University Conference on Language Development (pp. 145-156). Somerville, Mass: Cascadilla Press.
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
  • Cristia, A., & Seidl, A. (2008). Why cross-linguistic frequency cannot be equated with ease of acquisition. University of Pennsylvania Working Papers in Linguistics, 14(1), 71-82. Retrieved from http://repository.upenn.edu/pwpl/vol14/iss1/6.
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Cronin, K. A., Van Leeuwen, E. J. C., Mulenga, I. C., & Bodamer, M. D. (2011). Behavioral response of a chimpanzee mother toward her dead infant. American Journal of Primatology, 73(5), 415-421. doi:10.1002/ajp.20927.

    Abstract

    The mother-offspring bond is one of the strongest and most essential social bonds. Following is a detailed behavioral report of a female chimpanzee two days after her 16-month-old infant died, on the first day that the mother is observed to create distance between her and the corpse. A series of repeated approaches and retreats to and from the body are documented, along with detailed accounts of behaviors directed toward the dead infant by the mother and other group members. The behavior of the mother toward her dead infant not only highlights the maternal contribution to the mother-infant relationship but also elucidates the opportunities chimpanzees have to learn about the sensory cues associated with death, and the implications of death for the social environment.
  • Cronin, K. A., Schroeder, K. K. E., Rothwell, E. S., Silk, J. B., & Snowdon, C. T. (2009). Cooperatively breeding cottontop tamarins (Saguinus oedipus) do not donate rewards to their long-term mates. Journal of Comparative Psychology, 123(3), 231-241. doi:10.1037/a0015094.

    Abstract

    This study tested the hypothesis that cooperative breeding facilitates the emergence of prosocial behavior by presenting cottontop tamarins (Saguinus oedipus) with the option to provide food rewards to pair-bonded mates. In Experiment 1, tamarins could provide rewards to mates at no additional cost while obtaining rewards for themselves. Contrary to the hypothesis, tamarins did not demonstrate a preference to donate rewards, behaving similar to chimpanzees in previous studies. In Experiment 2, the authors eliminated rewards for the donor for a stricter test of prosocial behavior, while reducing separation distress and food preoccupation. Again, the authors found no evidence for a donation preference. Furthermore, tamarins were significantly less likely to deliver rewards to mates when the mate displayed interest in the reward. The results of this study contrast with those recently reported for cooperatively breeding common marmosets, and indicate that prosocial preferences in a food donation task do not emerge in all cooperative breeders. In previous studies, cottontop tamarins have cooperated and reciprocated to obtain food rewards; the current findings sharpen understanding of the boundaries of cottontop tamarins’ food-provisioning behavior.
  • Cronin, K. A. (2013). [Review of the book Chimpanzees of the Lakeshore: Natural history and culture at Mahale by Toshisada Nishida]. Animal Behaviour, 85, 685-686. doi:10.1016/j.anbehav.2013.01.001.

    Abstract

    First paragraph: Motivated by his quest to characterize the society of the last common ancestor of humans and other great apes, Toshisada Nishida set out as a graduate student to the Mahale Mountains on the eastern shore of Lake Tanganyika, Tanzania. This book is a story of his 45 years with the Mahale chimpanzees, or as he calls it, their ethnography. Beginning with his accounts of meeting the Tongwe people and the challenges of provisioning the chimpanzees for habituation, Nishida reveals how he slowly unravelled the unit group and community basis of chimpanzee social organization. The book begins and ends with a feeling of chronological order, starting with his arrival at Mahale and ending with an eye towards the future, with concrete recommendations for protecting wild chimpanzees. However, the bulk of the book is topically organized with chapters on feeding behaviour, growth and development, play and exploration, communication, life histories, sexual strategies, politics and culture.
  • Cronin, K. A., & Snowdon, C. T. (2008). The effects of unequal reward distributions on cooperative problem solving by cottontop tamarins, Saguinus oedipus. Animal Behaviour, 75, 245-257. doi:10.1016/j.anbehav.2007.04.032.

    Abstract

    Cooperation among nonhuman animals has been the topic of much theoretical and empirical research, but few studies have examined systematically the effects of various reward payoffs on cooperative behaviour. Here, we presented heterosexual pairs of cooperatively breeding cottontop tamarins with a cooperative problem-solving task. In a series of four experiments, we examined how the tamarins’ cooperative performance changed under conditions in which (1) both actors were mutually rewarded, (2) both actors were rewarded reciprocally across days, (3) both actors competed for a monopolizable reward and (4) one actor repeatedly delivered a single reward to the other actor. The tamarins showed sensitivity to the reward structure, showing the greatest percentage of trials solved and shortest latency to solve the task in the mutual reward experiment and the lowest percentage of trials solved and longest latency to solve the task in the experiment in which one actor was repeatedly rewarded. However, even in the experiment in which the fewest trials were solved, the tamarins still solved 46 ± 12% of trials and little to no aggression was observed among partners following inequitable reward distributions. The tamarins did, however, show selfish motivation in each of the experiments. Nevertheless, in all experiments, unrewarded individuals continued to cooperate and procure rewards for their social partners.
  • Croxson, P., Forkel, S. J., Cerliani, L., & Thiebaut De Schotten, M. (2018). Structural Variability Across the Primate Brain: A Cross-Species Comparison. Cerebral Cortex, 28(11), 3829-3841. doi:10.1093/cercor/bhx244.

    Abstract

    A large amount of variability exists across human brains, revealed initially on a small scale by postmortem studies and, more recently, on a larger scale with the advent of neuroimaging. Here we compared structural variability between human and macaque monkey brains using grey and white matter magnetic resonance imaging measures. The monkey brain was overall structurally as variable as the human brain, but variability had a distinct distribution pattern, with some key areas showing high variability. We also report the first evidence of a relationship between anatomical variability and evolutionary expansion in the primate brain. This suggests a relationship between variability and stability, where areas of low variability may have evolved less recently and have more stability, while areas of high variability may have evolved more recently and be less similar across individuals. We showed specific differences between the species in key areas, including the amount of hemispheric asymmetry in variability, which was left-lateralized in the human brain across several phylogenetically recent regions. This suggests that cerebral variability may be another useful measure for comparison between species and may add another dimension to our understanding of evolutionary mechanisms.
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., & Butterfield, S. (2003). Rhythmic cues to speech segmentation: Evidence from juncture misperception. In J. Field (Ed.), Psycholinguistics: A resource book for students. (pp. 185-189). London: Routledge.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (PCPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).

    Abstract

    Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICLSP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.

    Abstract

    In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English, which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results, however, showed that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).

    Abstract

    Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset; only mean F0 cues; only pre-target interval; and only duration cues. Results revealed that listeners could entrain in every condition except the one in which duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation of the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A., Garcia Lecumberri, M. L., & Cooke, M. (2008). Consonant identification in noise by native and non-native listeners: Effects of local context. Journal of the Acoustical Society of America, 124(2), 1264-1268. doi:10.1121/1.2946707.

    Abstract

    Speech recognition in noise is harder in second (L2) than first languages (L1). This could be because noise disrupts speech processing more in L2 than L1, or because L1 listeners recover better though disruption is equivalent. Two similar prior studies produced discrepant results: Equivalent noise effects for L1 and L2 (Dutch) listeners, versus larger effects for L2 (Spanish) than L1. To explain this, the latter experiment was presented to listeners from the former population. Larger noise effects on consonant identification emerged for L2 (Dutch) than L1 listeners, suggesting that task factors rather than L2 population differences underlie the results discrepancy.
  • Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).

    Abstract

    Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another. The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., Andics, A., & Fang, Z. (2011). Inter-dependent categorization of voices and segments. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences [ICPhS 2011] (pp. 552-555). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Listeners performed speeded two-alternative choice between two unfamiliar and relatively similar voices or between two phonetically close segments, in VC syllables. For each decision type (segment, voice), the non-target dimension (voice, segment) either was constant, or varied across four alternatives. Responses were always slower when a non-target dimension varied than when it did not, but the effect of phonetic variation on voice identity decision was stronger than that of voice variation on phonetic identity decision. Cues to voice and segment identity in speech are processed inter-dependently, but hard categorization decisions about voices draw on, and are hence sensitive to, segmental information.
  • Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.

    Abstract

    English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. (1992). Listeners’ responses to extraneous signals coincident with English and French speech. In J. Pittam (Ed.), Proceedings of the 4th Australian International Conference on Speech Science and Technology (pp. 666-671). Canberra: Australian Speech Science and Technology Association.

    Abstract

    English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in these tasks.
  • Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.

    Abstract

    Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
  • Cutler, A. (2011). Listening to REAL second language. AATSEEL Newsletter, 54(3), 14.
  • Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.

    Abstract

    Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models of phonemic processing incorporating obligatory influence of lexical information in phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.