Publications

  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Clark, N., & Perlman, M. (2014). Breath, vocal, and supralaryngeal flexibility in a human-reared gorilla. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).

    Abstract

    “Gesture-first” theories dismiss ancestral great apes’ vocalization as a substrate for language evolution based on the claim that extant apes exhibit minimal learning and volitional control of vocalization. Contrary to this claim, we present data of novel learned and voluntarily controlled vocal behaviors produced by a human-fostered gorilla (G. gorilla gorilla). These behaviors demonstrate varying degrees of flexibility in the vocal apparatus (including diaphragm, lungs, larynx, and supralaryngeal articulators), and are predominantly performed in coordination with manual behaviors and gestures. Instead of a gesture-first theory, we suggest that these findings support multimodal theories of language evolution in which vocal and gestural forms are coordinated and supplement one another.
  • Coombs, P. J., Graham, S. A., Drickamer, K., & Taylor, M. E. (2005). Selective binding of the scavenger receptor C-type lectin to Lewisx trisaccharide and related glycan ligands. The Journal of Biological Chemistry, 280, 22993-22999. doi:10.1074/jbc.M504197200.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is an endothelial receptor that is similar in organization to type A scavenger receptors for modified low density lipoproteins but contains a C-type carbohydrate-recognition domain (CRD). Fragments of the receptor consisting of the entire extracellular domain and the CRD have been expressed and characterized. The extracellular domain is a trimer held together by collagen-like and coiled-coil domains adjacent to the CRD. The amino acid sequence of the CRD is very similar to the CRD of the asialoglycoprotein receptor and other galactose-specific receptors, but SRCL binds selectively to asialo-orosomucoid rather than generally to asialoglycoproteins. Screening of a glycan array and further quantitative binding studies indicate that this selectivity results from high affinity binding to glycans bearing the Lewis(x) trisaccharide. Thus, SRCL shares with the dendritic cell receptor DC-SIGN the ability to bind the Lewis(x) epitope. However, it does so in a fundamentally different way, making a primary binding interaction with the galactose moiety of the glycan rather than the fucose residue. SRCL shares with the asialoglycoprotein receptor the ability to mediate endocytosis and degradation of glycoprotein ligands. These studies suggest that SRCL might be involved in selective clearance of specific desialylated glycoproteins from circulation and/or interaction of cells bearing Lewis(x)-type structures with the vascular endothelium.
  • Cooper, R. P., & Guest, O. (2014). Implementations are not specifications: Specification, replication and experimentation in computational cognitive modeling. Cognitive Systems Research, 27, 42-49. doi:10.1016/j.cogsys.2013.05.001.

    Abstract

    Contemporary methods of computational cognitive modeling have recently been criticized by Addyman and French (2012) on the grounds that they have not kept up with developments in computer technology and human–computer interaction. They present a manifesto for change according to which, it is argued, modelers should devote more effort to making their models accessible, both to non-modelers (with an appropriate easy-to-use user interface) and modelers alike. We agree that models, like data, should be freely available according to the normal standards of science, but caution against confusing implementations with specifications. Models may embody theories, but they generally also include implementation assumptions. Cognitive modeling methodology needs to be sensitive to this. We argue that specification, replication and experimentation are methodological approaches that can address this issue.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cousijn, H., Eissing, M., Fernández, G., Fisher, S. E., Franke, B., Zwers, M., Harrison, P. J., & Arias-Vasquez, A. (2014). No effect of schizophrenia risk genes MIR137, TCF4, and ZNF804A on macroscopic brain structure. Schizophrenia Research, 159, 329-332. doi:10.1016/j.schres.2014.08.007.

    Abstract

    Single nucleotide polymorphisms (SNPs) within the MIR137, TCF4, and ZNF804A genes show genome-wide association to schizophrenia. However, the biological basis for the associations is unknown. Here, we tested the effects of these genes on brain structure in 1300 healthy adults. Using volumetry and voxel-based morphometry, neither gene-wide effects—including the combined effect of the genes—nor single SNP effects—including specific psychosis risk SNPs—were found on total brain volume, grey matter, white matter, or hippocampal volume. These results suggest that the associations between these risk genes and schizophrenia are unlikely to be mediated via effects on macroscopic brain structure.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties. Gramma/TTT, 9, 141-156.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O., & Sloetjes, H. (2014). Improving the exploitation of linguistic annotations in ELAN. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3604-3608).

    Abstract

    This paper discusses some improvements in recent and planned versions of the multimodal annotation tool ELAN, which are targeted at improving the usability of annotated files. Increased support for multilingual documents is provided, by allowing for multilingual vocabularies and by specifying a language per document, annotation layer (tier) or annotation. In addition, improvements in the search possibilities and the display of the results have been implemented, which are especially relevant in the interpretation of the results of complex multi-tier searches.
  • Crasborn, O., Hulsbosch, M., Lampen, L., & Sloetjes, H. (2014). New multilayer concordance functions in ELAN and TROVA. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Collocations generated by concordancers are a standard instrument in the exploitation of text corpora for the analysis of language use. Multimodal corpora show similar types of patterns, activities that frequently occur together, but there is no tool that offers facilities for visualising such patterns. Examples include timing of eye contact with respect to speech, and the alignment of activities of the two hands in signed languages. This paper describes recent enhancements to the standard CLARIN tools ELAN and TROVA for multimodal annotation to address these needs: first of all the query and concordancing functions were improved, and secondly the tools now generate visualisations of multilayer collocations that allow for intuitive explorations and analyses of multimodal data. This will provide a boost to the linguistic fields of gesture and sign language studies, as it will improve the exploitation of multimodal corpora.
  • Cristia, A., Minagawa-Kawai, Y., Egorova, N., Gervain, J., Filippin, L., Cabrol, D., & Dupoux, E. (2014). Neural correlates of infant accent discrimination: An fNIRS study. Developmental Science, 17(4), 628-635. doi:10.1111/desc.12160.

    Abstract

    The present study investigated the neural correlates of infant discrimination of very similar linguistic varieties (Quebecois and Parisian French) using functional Near InfraRed Spectroscopy. In line with previous behavioral and electrophysiological data, there was no evidence that 3-month-olds discriminated the two regional accents, whereas 5-month-olds did, with the locus of discrimination in left anterior perisylvian regions. These neuroimaging results suggest that a developing language network relying crucially on left perisylvian cortices sustains infants' discrimination of similar linguistic varieties within this early period of infancy.

  • Cristia, A., Seidl, A., Junge, C., Soderstrom, M., & Hagoort, P. (2014). Predicting individual variation in language from infant speech perception measures. Child Development, 85(4), 1330-1345. doi:10.1111/cdev.12193.

    Abstract

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings are held up under scrutiny, they could both illuminate theoretical models of language development and contribute to the prediction of communicative disorders. A qualitative, systematic review of this emergent literature illustrated the variety of approaches that have been used and highlighted some conceptual problems regarding the measurements. A quantitative analysis of the same data established that the bivariate relation was significant, with correlations of similar strength to those found for well-established nonlinguistic predictors of language. Further exploration of infant speech perception predictors, particularly from a methodological perspective, is recommended.
  • Cristia, A., & Seidl, A. (2014). The hyperarticulation hypothesis of infant-directed speech. Journal of Child Language, 41(4), 913-934. doi:10.1017/S0305000912000669.

    Abstract

    Typically, the point vowels [i,ɑ,u] are acoustically more peripheral in infant-directed speech (IDS) compared to adult-directed speech (ADS). If caregivers seek to highlight lexically relevant contrasts in IDS, then two sounds that are contrastive should become more distinct, whereas two sounds that are surface realizations of the same underlying sound category should not. To test this prediction, vowels that are phonemically contrastive ([i-ɪ] and [eɪ-ε]), vowels that map onto the same underlying category ([æ- ] and [ε- ]), and the point vowels [i,ɑ,u] were elicited in IDS and ADS by American English mothers of two age groups of infants (four- and eleven-month-olds). As in other work, point vowels were produced in more peripheral positions in IDS compared to ADS. However, there was little evidence of hyperarticulation per se (e.g. [i-ɪ] was hypoarticulated). We suggest that across-the-board lexically based hyperarticulation is not a necessary feature of IDS.

    Additional information

    CORRIGENDUM
  • Cronin, K. A., Kurian, A. V., & Snowdon, C. T. (2005). Cooperative problem solving in a cooperatively breeding primate. Animal Behaviour, 69, 133-142. doi:10.1016/j.anbehav.2004.02.024.

    Abstract

    We investigated cooperative problem solving in unrelated pairs of the cooperatively breeding cottontop tamarin, Saguinus oedipus, to assess the cognitive basis of cooperative behaviour in this species and to compare abilities with other apes and monkeys. A transparent apparatus was used that required extension of two handles at opposite ends of the apparatus for access to rewards. Resistance was applied to both handles so that two tamarins had to act simultaneously in order to receive rewards. In contrast to several previous studies of cooperation, both tamarins received rewards as a result of simultaneous pulling. The results from two experiments indicated that the cottontop tamarins (1) had a much higher success rate and efficiency of pulling than many of the other species previously studied, (2) adjusted pulling behaviour to the presence or absence of a partner, and (3) spontaneously developed sustained pulling techniques to solve the task. These findings suggest that cottontop tamarins understand the role of the partner in this cooperative task, a cognitive ability widely ascribed only to great apes. The cooperative social system of tamarins, the intuitive design of the apparatus, and the provision of rewards to both participants may explain the performance of the tamarins.
  • Cronin, K. A., Pieper, B., Van Leeuwen, E. J. C., Mundry, R., & Haun, D. B. M. (2014). Problem solving in the presence of others: How rank and relationship quality impact resource acquisition in chimpanzees (Pan troglodytes). PLoS One, 9(4): e93204. doi:10.1371/journal.pone.0093204.

    Abstract

    In the wild, chimpanzees (Pan troglodytes) are often faced with clumped food resources that they may know how to access but abstain from doing so due to social pressures. To better understand how social settings influence resource acquisition, we tested fifteen semi-wild chimpanzees from two social groups alone and in the presence of others. We investigated how resource acquisition was affected by relative social dominance, whether collaborative problem solving or (active or passive) sharing occurred amongst any of the dyads, and whether these outcomes were related to relationship quality as determined from six months of observational data. Results indicated that chimpanzees, regardless of rank, obtained fewer rewards when tested in the presence of others compared to when they were tested alone. Chimpanzees demonstrated behavioral inhibition; chimpanzees who showed proficient skill when alone often abstained from solving the task when in the presence of others. Finally, individuals with close social relationships spent more time together in the problem solving space, but collaboration and sharing were infrequent and sessions in which collaboration or sharing did occur contained more instances of aggression. Group living provides benefits and imposes costs, and these findings highlight that one cost of group living may be diminishing productive individual behaviors.
  • Cronin, K. A., Van Leeuwen, E. J. C., Vreeman, V., & Haun, D. B. M. (2014). Population-level variability in the social climates of four chimpanzee societies. Evolution and Human Behavior, 35(5), 389-396. doi:10.1016/j.evolhumbehav.2014.05.004.

    Abstract

    Recent debates have questioned the extent to which culturally-transmitted norms drive behavioral variation in resource sharing across human populations. We shed new light on this discussion by examining the group-level variation in the social dynamics and resource sharing of chimpanzees, a species that is highly social and forms long-term community associations but differs from humans in the extent to which cultural norms are adopted and enforced. We rely on theory developed in primate socioecology to guide our investigation in four neighboring chimpanzee groups at a sanctuary in Zambia. We used a combination of experimental and observational approaches to assess the distribution of resource holding potential in each group. In the first assessment, we measured the proportion of the population that gathered in a resource-rich zone, in the second we assessed naturally occurring social spacing via social network analysis, and in the third we assessed the degree to which benefits were equally distributed within the group. We report significant, stable group-level variation across these multiple measures, indicating that group-level variation in resource sharing and social tolerance is not necessarily reliant upon human-like cultural norms.
  • Cutler, A., & Broersma, M. (2005). Phonetic precision in listening. In W. J. Hardcastle, & J. M. Beck (Eds.), A figure of speech: A Festschrift for John Laver (pp. 63-91). Mahwah, NJ: Erlbaum.
  • Cutler, A., & Butterfield, S. (2003). Rhythmic cues to speech segmentation: Evidence from juncture misperception. In J. Field (Ed.), Psycholinguistics: A resource book for students. (pp. 185-189). London: Routledge.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A., Klein, W., & Levinson, S. C. (2005). The cornerstones of twenty-first century psycholinguistics. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 1-20). Mahwah, NJ: Erlbaum.
  • Cutler, A. (2005). The lexical statistics of word recognition problems caused by L2 phonetic confusion. In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 413-416).
  • Cutler, A., McQueen, J. M., & Norris, D. (2005). The lexical utility of phoneme-category plasticity. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 103-107).
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Mahwah, NJ: Erlbaum.
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16-16.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous: there are no reliable signals that tell the listener where one word ends and the next begins. Even for adult listeners, segmenting spoken language into separate words is thus not unproblematic, but for a child who does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life - particularly during its second half - speech perception develops from a general phonetic discrimination ability into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation when these later occur in continuous speech. The daily language input to a child of this age is in some respects unhelpful, for instance because most words do not occur in isolation; yet it also offers the child some footholds, among other things because the range of words used is restricted.
  • Cutler, A., Wales, R., Cooper, N., & Janssen, J. (2007). Dutch listeners' use of suprasegmental cues to English stress. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1913-1916). Dudweiler: Pirrot.

    Abstract

    Dutch listeners outperform native listeners in identifying syllable stress in English. This is because lexical stress is more useful in recognition of spoken words of Dutch than of English, so that Dutch listeners pay greater attention to stress in general. We examined Dutch listeners’ use of the acoustic correlates of English stress. Primary- and secondary-stressed syllables differ significantly on acoustic measures, and some differences, in F0 especially, correlate with data of earlier listening experiments. The correlations found in the Dutch responses were not paralleled in data from native listeners. Thus the acoustic cues which distinguish English primary versus secondary stress are better exploited by Dutch than by native listeners.
  • Cutler, A. (2005). Lexical stress. In D. B. Pisoni, & R. E. Remez (Eds.), The handbook of speech perception (pp. 264-289). Oxford: Blackwell.
  • Cutler, A., & Weber, A. (2007). Listening experience and phonetic-to-lexical mapping in L2. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 43-48). Dudweiler: Pirrot.

    Abstract

    In contrast to initial L1 vocabularies, which of necessity depend largely on heard exemplars, L2 vocabulary construction can draw on a variety of knowledge sources. This can lead to richer stored knowledge about the phonology of the L2 than the listener's prelexical phonetic processing capacity can support, and thus to mismatch between the level of detail required for accurate lexical mapping and the level of detail delivered by the prelexical processor. Experiments on spoken word recognition in L2 have shown that phonetic contrasts which are not reliably perceived are represented in the lexicon nonetheless. This lexical representation of contrast must be based on abstract knowledge, not on veridical representation of heard exemplars. New experiments confirm that provision of abstract knowledge (in the form of spelling) can induce lexical representation of a contrast which is not reliably perceived; but also that experience (in the form of frequency of occurrence) modulates the mismatch of phonetic and lexical processing. We conclude that a correct account of word recognition in L2 (as indeed in L1) requires consideration of both abstract and episodic information.
  • Cutler, A., Cooke, M., Garcia-Lecumberri, M. L., & Pasveer, D. (2007). L2 consonant identification in noise: Cross-language comparisons. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1585-1588). Adelaide: Causal Productions.

    Abstract

    The difficulty of listening to speech in noise is exacerbated when the speech is in the listener’s L2 rather than L1. In this study, Spanish and Dutch users of English as an L2 identified American English consonants in a constant intervocalic context. Their performance was compared with that of L1 (British English) listeners, under quiet conditions and when the speech was masked by speech from another talker or by noise. Masking affected performance more for the Spanish listeners than for the L1 listeners, but not for the Dutch listeners, whose performance was worse than the L1 case to about the same degree in all conditions. There were, however, large differences in the pattern of results across individual consonants, which were consistent with differences in how consonants are identified in the respective L1s.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another, a coup stick snot with standing. The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., & McQueen, J. M. (2014). How prosody is both mandatory and optional. In J. Caspers, Y. Chen, W. Heeren, J. Pacilly, N. O. Schiller, & E. Van Zanten (Eds.), Above and Beyond the Segments: Experimental linguistics and phonetics (pp. 71-82). Amsterdam: Benjamins.

    Abstract

    Speech signals originate as a sequence of linguistic units selected by speakers, but these units are necessarily realised in the suprasegmental dimensions of time, frequency and amplitude. For this reason prosodic structure has been viewed as a mandatory target of language processing by both speakers and listeners. In apparent contradiction, however, prosody has also been argued to be ancillary rather than core linguistic structure, making processing of prosodic structure essentially optional. In the present tribute to one of the luminaries of prosodic research for the past quarter century, we review evidence from studies of the processing of lexical stress and focal accent which reconciles these views and shows that both claims are, each in their own way, fully true.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (2014). In thrall to the vocabulary. Acoustics Australia, 42, 84-89.

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes; longer words inevitably tend to contain shorter ones. Recognising speech thus requires distinguishing intended words from accidentally present ones. Acoustic information in speech is used wherever it contributes significantly to this process; but as this review shows, its contribution differs across languages, with the consequences of this including: identical and equivalently present information distinguishing the same phonemes being used in Polish but not in German, or in English but not in Italian; identical stress cues being used in Dutch but not in English; and expectations about likely embedding patterns differing across English, French, and Japanese.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Hillsdale, NJ: Erlbaum.
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects.
  • Dautriche, I., Cristia, A., Brusini, P., Yuan, S., Fisher, C., & Christophe, A. (2014). Toddlers default to canonical surface-to-meaning mapping when learning verbs. Child Development, 85(3), 1168-1180. doi:10.1111/cdev.12183.

    Acknowledgments

    This work was supported by grants from the French Agence Nationale de la Recherche (ANR-2010-BLAN-1901) and from French Fondation de France to Anne Christophe, from the National Institute of Child Health and Human Development (HD054448) to Cynthia Fisher, Fondation Fyssen and Ecole de Neurosciences de Paris to Alex Cristia, and a PhD fellowship from the Direction Générale de l'Armement (DGA, France) supported by the PhD program FdV (Frontières du Vivant) to Isabelle Dautriche. We thank Isabelle Brunet for the recruitment, Michel Dutat for the technical support, and Hernan Anllo for his puppet mastery skill. We are grateful to the families that participated in this study. We also thank two anonymous reviewers for their comments on an earlier draft of this manuscript.
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Davis, M. H., Johnsrude, I. S., Hervais-Adelman, A., Taylor, K., & McGettigan, C. (2005). Lexical information drives perceptual learning of distorted speech: Evidence from the comprehension of noise-vocoded sentences. Journal of Experimental Psychology-General, 134(2), 222-241. doi:10.1037/0096-3445.134.2.222.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Declerck, T., Cunningham, H., Saggion, H., Kuper, J., Reidsma, D., & Wittenburg, P. (2003). MUMIS - Advanced information extraction for multimedia indexing and searching digital media - Processing for multimedia interactive services. 4th European Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 553-556.
  • Dediu, D., & Graham, S. A. (2014). Genetics and Language. In M. Aronoff (Ed.), Oxford Bibliographies in Linguistics. New York: Oxford University Press. Retrieved from http://www.oxfordbibliographies.com/view/document/obo-9780199772810/obo-9780199772810-0184.xml.

    Abstract

    This article surveys what is currently known about the complex interplay between genetics and the language sciences. It focuses not only on the genetic architecture of language and speech, but also on their interactions on the cultural and evolutionary timescales. Given the complexity of these issues and their current state of flux and high dynamism, this article surveys the main findings and topics of interest while also briefly introducing the main relevant methods, thus allowing the interested reader to fully appreciate and understand them in their proper context. Of course, not all the relevant publications and resources are mentioned, but this article aims to select the most relevant, promising, or accessible for nonspecialists.

  • Dediu, D. (2014). Language and biology: The multiple interactions between genetics and language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 686-707). Cambridge: Cambridge University Press.
  • Dediu, D., & Levinson, S. C. (2014). Language and speech are old: A review of the evidence and consequences for modern linguistic diversity. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 421-422). Singapore: World Scientific.
  • Dediu, D. (2007). Non-spurious correlations between genetic and linguistic diversities in the context of human evolution. PhD Thesis, University of Edinburgh, Edinburgh, UK.
  • Dediu, D., & Levinson, S. C. (2014). The time frame of the emergence of modern language and its implications. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 184-195). Oxford: Oxford University Press.
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. Proceedings of the National Academy of Sciences of the United States of America, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.

  • Defina, R. (2014). Arbil: Free tool for creating, editing and searching metadata. Language Documentation and Conservation, 8, 307-314.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [n] is mispronounced as [ŋ], the [ŋ] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Deriziotis, P., O'Roak, B. J., Graham, S. A., Estruch, S. B., Dimitropoulou, D., Bernier, R. A., Gerdts, J., Shendure, J., Eichler, E. E., & Fisher, S. E. (2014). De novo TBR1 mutations in sporadic autism disrupt protein functions. Nature Communications, 5: 4954. doi:10.1038/ncomms5954.

    Abstract

    Next-generation sequencing recently revealed that recurrent disruptive mutations in a few genes may account for 1% of sporadic autism cases. Coupling these novel genetic data to empirical assays of protein function can illuminate crucial molecular networks. Here we demonstrate the power of the approach, performing the first functional analyses of TBR1 variants identified in sporadic autism. De novo truncating and missense mutations disrupt multiple aspects of TBR1 function, including subcellular localization, interactions with co-regulators and transcriptional repression. Missense mutations inherited from unaffected parents did not disturb function in our assays. We show that TBR1 homodimerizes, that it interacts with FOXP2, a transcription factor implicated in speech/language disorders, and that this interaction is disrupted by pathogenic mutations affecting either protein. These findings support the hypothesis that de novo mutations in sporadic autism have severe functional consequences. Moreover, they uncover neurogenetic mechanisms that bridge different neurodevelopmental disorders involving language deficits.
  • Deriziotis, P., Graham, S. A., Estruch, S. B., & Fisher, S. E. (2014). Investigating protein-protein interactions in live cells using Bioluminescence Resonance Energy Transfer. Journal of visualized experiments, 87: e51438. doi:10.3791/51438.

    Abstract

    Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a ‘donor’ luciferase enzyme to an ‘acceptor’ fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.

    Additional information

    video
  • Devanna, P., & Vernes, S. C. (2014). A direct molecular link between the autism candidate gene RORa and the schizophrenia candidate MIR137. Scientific Reports, 4: 3994. doi:10.1038/srep03994.

    Abstract

    Retinoic acid-related orphan receptor alpha gene (RORa) and the microRNA MIR137 have both recently been identified as novel candidate genes for neuropsychiatric disorders. RORa encodes a ligand-dependent orphan nuclear receptor that acts as a transcriptional regulator and miR-137 is a brain enriched small non-coding RNA that interacts with gene transcripts to control protein levels. Given the mounting evidence for RORa in autism spectrum disorders (ASD) and MIR137 in schizophrenia and ASD, we investigated if there was a functional biological relationship between these two genes. Herein, we demonstrate that miR-137 targets the 3'UTR of RORa in a site-specific manner. We also provide further support for MIR137 as an autism candidate by showing that a large number of previously implicated autism genes are also putatively targeted by miR-137. This work supports the role of MIR137 as an ASD candidate and demonstrates a direct biological link between these previously unrelated autism candidate genes.
  • Devanna, P., Middelbeek, J., & Vernes, S. C. (2014). FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways. Frontiers in Cellular Neuroscience, 8: 305. doi:10.3389/fncel.2014.00305.

    Abstract

    FOXP2 was the first gene shown to cause a Mendelian form of speech and language disorder. Although developmentally expressed in many organs, loss of a single copy of FOXP2 leads to a phenotype that is largely restricted to orofacial impairment during articulation and linguistic processing deficits. Why perturbed FOXP2 function affects specific aspects of the developing brain remains elusive. We investigated the role of FOXP2 in neuronal differentiation and found that FOXP2 drives molecular changes consistent with neuronal differentiation in a human model system. We identified a network of FOXP2 regulated genes related to retinoic acid signaling and neuronal differentiation. FOXP2 also produced phenotypic changes associated with neuronal differentiation including increased neurite outgrowth and reduced migration. Crucially, cells expressing FOXP2 displayed increased sensitivity to retinoic acid exposure. This suggests a mechanism by which FOXP2 may be able to increase the cellular differentiation response to environmental retinoic acid cues for specific subsets of neurons in the brain. These data demonstrate that FOXP2 promotes neuronal differentiation by interacting with the retinoic acid signaling pathway and regulates key processes required for normal circuit formation such as neuronal migration and neurite outgrowth. In this way, FOXP2, which is found only in specific subpopulations of neurons in the brain, may drive precise neuronal differentiation patterns and/or control localization and connectivity of these FOXP2 positive cells.
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C. (2007). Zweitspracherwerb bei Kindern und Jugendlichen: Gemeinsamkeiten und Unterschiede. In T. Anstatt (Ed.), Mehrsprachigkeit bei Kindern und Erwachsenen: Erwerb, Formen, Förderung (pp. 115-137). Tübingen: Attempto.

    Abstract

    This paper discusses the influence of age-related factors like stage of cognitive development, prior linguistic knowledge, and motivation and addresses the specific effects of these ‘age factors’ on second language acquisition as opposed to other learning tasks. Based on longitudinal corpus data from child and adolescent learners of L2 German (L1 = Russian), the paper studies the acquisition of word order (verb raising over negation, verb second) and inflectional morphology (subject-verb-agreement, tense, noun plural, and adjective-noun agreement). Whereas the child learner shows target-like production in all of these areas within the observation period (1½ years), the adolescent learner masters only some of them. The discussion addresses the question of what it is about clusters of grammatical features that make them particularly affected by age.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger - the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age related differences like the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dimroth, C., & Watorek, M. (2005). Additive scope particles in advanced learner and native speaker discourse. In H. Hendriks (Ed.), The structure of learner varieties (pp. 461-488). Berlin: Mouton de Gruyter.
  • Dimroth, C., & Starren, M. (Eds.). (2003). Information structure and the dynamics of language acquisition. Amsterdam: John Benjamins.

    Abstract

    The papers in this volume focus on the impact of information structure on language acquisition, thereby taking different linguistic approaches into account. They start from an empirical point of view, and examine data from natural first and second language acquisition, which cover a wide range of varieties, from early learner language to native speaker production and from gesture to Creole prototypes. The central theme is the interplay between principles of information structure and linguistic structure and its impact on the functioning and development of the learner's system. The papers examine language-internal explanatory factors and in particular the communicative and structural forces that push and shape the acquisition process, and its outcome. On the theoretical level, the approach adopted appeals both to formal and communicative constraints on a learner’s language in use. Two empirical domains provide a 'testing ground' for the respective weight of grammatical versus functional determinants in the acquisition process: (1) the expression of finiteness and scope relations at the utterance level and (2) the expression of anaphoric relations at the discourse level.
  • Dimroth, C., Gretsch, P., Jordens, P., Perdue, C., & Starren, M. (2003). Finiteness in Germanic languages: A stage-model for first and second language development. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 65-94). Amsterdam: Benjamins.
  • Dimroth, C., & Starren, M. (2003). Introduction. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 1-14). Amsterdam: John Benjamins.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dingemanse, M., & Enfield, N. J. (2014). Ongeschreven regels van de taal [Unwritten rules of language]. Psyche en Brein, 6, 6-11.

    Abstract

    If you listen to conversations around the world, you notice that human dialogue follows universal rules. These rules steer and enrich our social interaction.
  • Dingemanse, M., & Floyd, S. (2014). Conversation across cultures. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 447-480). Cambridge: Cambridge University Press.
  • Dingemanse, M., Torreira, F., & Enfield, N. J. (2014). Conversational infrastructure and the convergent evolution of linguistic items. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 425-426). Singapore: World Scientific.
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language, 38, 5-43. doi:10.1075/sl.38.1.01din.

    Abstract

    In conversation, people have to deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Dingemanse, M. (2014). Making new ideophones in Siwu: Creative depiction in conversation. Pragmatics and Society, 5(3), 384-405. doi:10.1075/ps.5.3.04din.

    Abstract

    Ideophones are found in many of the world’s languages. Though they are a major word class on a par with nouns and verbs, their origins are ill-understood, and the question of ideophone creation has been a source of controversy. This paper studies ideophone creation in naturally occurring speech. New, unconventionalised ideophones are identified using native speaker judgements, and are studied in context to understand the rules and regularities underlying their production and interpretation. People produce and interpret new ideophones with the help of the semiotic infrastructure that underlies the use of existing ideophones: foregrounding frames certain stretches of speech as depictive enactments of sensory imagery, and various types of iconicity link forms and meanings. As with any creative use of linguistic resources, context and common ground also play an important role in supporting rapid ‘good enough’ interpretations of new material. The making of new ideophones is a special case of a more general phenomenon of creative depiction: the art of presenting verbal material in such a way that the interlocutor recognises and interprets it as a depiction.
  • Dingemanse, M., & Enfield, N. J. (2014). Let's talk: Universal social rules underlie languages. Scientific American Mind, 25, 64-69. doi:10.1038/scientificamericanmind0914-64.

    Abstract

    Recent developments in the science of language signal the emergence of a new paradigm for language study: a social approach to the fundamental questions of what language is like, how much languages really have in common, and why only our species has it. The key to these developments is a new appreciation of the need to study everyday spoken language, with all its complications and ‘imperfections’, in a systematic way. The work reviewed in this article (on turn-taking, timing, and other-initiated repair in languages around the world) has important implications for our understanding of human sociality and sheds new light on the social shape of language. For the first time in the history of linguistics, we are no longer tied to what can be written down or thought up. Rather, we look at language as a biologist would: as it occurs in nature.
  • Dingemanse, M., Verhoef, T., & Roberts, S. G. (2014). The role of iconicity in the cultural evolution of communicative signals. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).
  • Dirksmeyer, T. (2005). Why do languages die? Approaching taxonomies, (re-)ordering causes. In J. Wohlgemuth, & T. Dirksmeyer (Eds.), Bedrohte Vielfalt. Aspekte des Sprach(en)tods – Aspects of language death (pp. 53-68). Berlin: Weißensee.

    Abstract

    Under what circumstances do languages die? Why has their “mortality rate” increased dramatically in the recent past? What “causes of death” can be identified for historical cases, to what extent are these generalizable, and how can they be captured in an explanatory theory? In pursuing these questions, it becomes apparent that in typical cases of language death various causes tend to interact in multiple ways. Speakers’ attitudes towards their language play a critical role in all of this. Existing categorial taxonomies do not succeed in modeling the complex relationships between these factors. Therefore, an alternative, dimensional approach is called for to more adequately address (and counter) the causes of language death in a given scenario.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Dolscheid, S., Willems, R. M., Hagoort, P., & Casasanto, D. (2014). The relation of space and musical pitch in the brain. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 421-426). Austin, Tx: Cognitive Science Society.

    Abstract

    Numerous experiments show that space and musical pitch are closely linked in people's minds. However, the exact nature of space-pitch associations and their neuronal underpinnings are not well understood. In an fMRI experiment we investigated different types of spatial representations that may underlie musical pitch. Participants judged stimuli that varied in spatial height in both the visual and tactile modalities, as well as auditory stimuli that varied in pitch height. In order to distinguish between unimodal and multimodal spatial bases of musical pitch, we examined whether pitch activations were present in modality-specific (visual or tactile) versus multimodal (visual and tactile) regions active during spatial height processing. Judgments of musical pitch were found to activate unimodal visual areas, suggesting that space-pitch associations may involve modality-specific spatial representations, supporting a key assumption of embodied theories of metaphorical mental representation.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2014). Phoneme category retuning in a non-native language. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 553-557).

    Abstract

    Previous studies have demonstrated that native listeners modify their interpretation of a speech sound when a talker produces an ambiguous sound in order to quickly tune into a speaker, but there is hardly any evidence that non-native listeners employ a similar mechanism when encountering ambiguous pronunciations. So far, one study demonstrated this lexically-guided perceptual learning effect for non-natives, using phoneme categories similar in the native language of the listeners and the non-native language of the stimulus materials. The present study investigates the question whether phoneme category retuning is possible in a non-native language for a contrast, /l/-/r/, which is phonetically differently embedded in the native (Dutch) and non-native (English) languages involved. Listening experiments indeed showed a lexically-guided perceptual learning effect. Assuming that Dutch listeners have different phoneme categories for the native Dutch and non-native English /r/, as marked differences between the languages exist for /r/, these results, for the first time, seem to suggest that listeners are not only able to retune their native phoneme categories but also their non-native phoneme categories to include ambiguous pronunciations.
  • Drude, S., Trilsbeek, P., Sloetjes, H., & Broeder, D. (2014). Best practices in the creation, archiving and dissemination of speech corpora at the Language Archive. In S. Ruhi, M. Haugh, T. Schmidt, & K. Wörner (Eds.), Best Practices for Spoken Corpora in Linguistic Research (pp. 183-207). Newcastle upon Tyne: Cambridge Scholars Publishing.
  • Drude, S. (2005). A contribuição alemã à Lingüística e Antropologia dos índios do Brasil, especialmente da Amazônia. In J. J. A. Alves (Ed.), Múltiplas Faces da Históriadas Ciência na Amazônia (pp. 175-196). Belém: EDUFPA.
  • Drude, S. (2003). Advanced glossing: A language documentation format and its implementation with Shoebox. In Proceedings of the 2002 International Conference on Language Resources and Evaluation (LREC 2002). Paris: ELRA.

    Abstract

    This paper presents Advanced Glossing, a proposal for a general glossing format designed for language documentation, and a specific setup for the Shoebox-program that implements Advanced Glossing to a large extent. Advanced Glossing (AG) goes beyond the traditional Interlinear Morphemic Translation, keeping syntactic and morphological information apart from each other in separate glossing tables. AG provides specific lines for different kinds of annotation – phonetic, phonological, orthographical, prosodic, categorial, structural, relational, and semantic, and it allows for gradual and successive, incomplete, and partial filling in case that some information may be irrelevant, unknown or uncertain. The implementation of AG in Shoebox sets up several databases. Each documented text is represented as a file of syntactic glossings. The morphological glossings are kept in a separate database. As an additional feature interaction with lexical databases is possible. The implementation makes use of the interlinearizing automatism provided by Shoebox, thus obtaining the table format for the alignment of lines in cells, and for semi-automatic filling-in of information in glossing tables which has been extracted from databases
  • Drude, S. (2003). Digitizing and annotating texts and field recordings in the Awetí project. In Proceedings of the EMELD Language Digitization Project Conference 2003. Workshop on Digitizing and Annotating Text and Field Recordings, LSA Institute, Michigan State University, July 11th-13th.

    Abstract

    Digitizing and annotating texts and field recordings. Given that several initiatives worldwide currently explore the new field of documentation of endangered languages, the E-MELD project proposes to survey and unite procedures, techniques and results in order to achieve its main goal, “the formulation and promulgation of best practice in linguistic markup of texts and lexicons”. In this context, this year's workshop deals with the processing of recorded texts. I assume the most valuable contribution I could make to the workshop is to show the procedures and methods used in the Awetí Language Documentation Project. The procedures applied in the Awetí Project are not necessarily representative of all the projects in the DOBES program, and they may very well fall short in several respects of being best practice, but I hope they might provide a good and concrete starting point for comparison, criticism and further discussion. The procedures to be exposed include: taping with digital devices; digitizing (preliminarily in the field, later definitively by the TIDEL team at the Max Planck Institute in Nijmegen); segmenting and transcribing, using the Transcriber computer program; translating (on paper, or while transcribing); adding more specific annotation, using the Shoebox program; and converting the annotation to the ELAN format developed by the TIDEL team, and doing annotation with ELAN. Focus will be on the different types of annotation. Especially, I will present, justify and discuss Advanced Glossing, a text annotation format developed by H.-H. Lieb and myself designed for language documentation. It will be shown how Advanced Glossing can be applied using the Shoebox program. The Shoebox setup used in the Awetí Project will be shown in greater detail, including lexical databases and semi-automatic interaction between different database types (jumping, interlinearization). (Freie Universität Berlin and Museu Paraense Emílio Goeldi, with funding from the Volkswagen Foundation.)
  • Drude, S. (2014). Reduplication as a tool for morphological and phonological analysis in Awetí. In G. G. Gómez, & H. Van der Voort (Eds.), Reduplication in Indigenous languages of South America (pp. 185-216). Leiden: Brill.
