Publications

Displaying 301 - 400 of 1648
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.

    Abstract

Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191-243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake, possibly arising from nyakusake, than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Hillsdale, NJ: Erlbaum.
  • Cutler, A., & Shanley, J. (2010). Validation of a training method for L2 continuous-speech segmentation. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1844-1847).

    Abstract

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in development of a novel training method for second-language listening, focusing on speech segmentation and employing a task designed for studying this: word-spotting. Listeners detect real words in sequences consisting of a word plus a minimal context. The present validation study shows that learners from varying non-English backgrounds successfully perform a version of this task in English, and display appropriate sensitivity to structural factors that also affect segmentation by native English listeners.
  • Dabrowska, E., Rowland, C. F., & Theakston, A. (2009). The acquisition of questions with long-distance dependencies. Cognitive Linguistics, 20(3), 571-597. doi:10.1515/COGL.2009.025.

    Abstract

A number of researchers have claimed that questions and other constructions with long distance dependencies (LDDs) are acquired relatively early, by age 4 or even earlier, in spite of their complexity. Analysis of LDD questions in the input available to children suggests that they are extremely stereotypical, raising the possibility that children learn lexically specific templates such as WH do you think S-GAP? rather than general rules of the kind postulated in traditional linguistic accounts of this construction. We describe three elicited imitation experiments with children aged from 4;6 to 6;9 and adult controls. Participants were asked to repeat prototypical questions (i.e., questions which match the hypothesised template), unprototypical questions (which depart from it in several respects) and declarative counterparts of both types of interrogative sentences. The children performed significantly better on the prototypical variants of both constructions, even when both variants contained exactly the same lexical material, while adults showed prototypicality effects for LDD questions only. These results suggest that a general declarative complementation construction emerges quite late in development (after age 6), and that even adults rely on lexically specific templates for LDD questions.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
  • D'Alessandra, Y., Devanna, P., Limana, F., Straino, S., Di Carlo, A., Brambilla, P. G., Rubino, M., Carena, M. C., Spazzafumo, L., De Simone, M., Micheli, B., Biglioli, P., Achilli, F., Martelli, F., Maggiolini, S., Marenzi, G., Pompilio, G., & Capogrossi, M. C. (2010). Circulating microRNAs are new and sensitive biomarkers of myocardial infarction. European Heart Journal, 31(22), 2765-2773. doi:10.1093/eurheartj/ehq167.

    Abstract

    Aims Circulating microRNAs (miRNAs) may represent a novel class of biomarkers; therefore, we examined whether acute myocardial infarction (MI) modulates miRNAs plasma levels in humans and mice. Methods and results Healthy donors (n = 17) and patients (n = 33) with acute ST-segment elevation MI (STEMI) were evaluated. In one cohort (n = 25), the first plasma sample was obtained 517 ± 309 min after the onset of MI symptoms and after coronary reperfusion with percutaneous coronary intervention (PCI); miR-1, -133a, -133b, and -499-5p were ∼15- to 140-fold control, whereas miR-122 and -375 were ∼87–90% lower than control; 5 days later, miR-1, -133a, -133b, -499-5p, and -375 were back to baseline, whereas miR-122 remained lower than control through Day 30. In additional patients (n = 8; four treated with thrombolysis and four with PCI), miRNAs and troponin I (TnI) were quantified simultaneously starting 156 ± 72 min after the onset of symptoms and at different times thereafter. Peak miR-1, -133a, and -133b expression and TnI level occurred at a similar time, whereas miR-499-5p exhibited a slower time course. In mice, miRNAs plasma levels and TnI were measured 15 min after coronary ligation and at different times thereafter. The behaviour of miR-1, -133a, -133b, and -499-5p was similar to STEMI patients; further, reciprocal changes in the expression levels of these miRNAs were found in cardiac tissue 3–6 h after coronary ligation. In contrast, miR-122 and -375 exhibited minor changes and no significant modulation. In mice with acute hind-limb ischaemia, there was no increase in the plasma level of the above miRNAs. Conclusion Acute MI up-regulated miR-1, -133a, -133b, and -499-5p plasma levels, both in humans and mice, whereas miR-122 and -375 were lower than control only in STEMI patients. These miRNAs represent novel biomarkers of cardiac damage.
  • Dalla Bella, S., Farrugia, F., Benoit, C.-E., Begel, V., Verga, L., Harding, E., & Kotz, S. A. (2017). BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities. Behavior Research Methods, 49(3), 1128-1145. doi:10.3758/s13428-016-0773-6.

    Abstract

The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) is a new tool for the systematic assessment of perceptual and sensorimotor timing skills. It spans a broad range of timing skills aimed at differentiating individual timing profiles. BAASTA consists of sensitive time perception and production tasks. Perceptual tasks include duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Task. Perceptual thresholds for duration discrimination and anisochrony detection are estimated with a maximum likelihood procedure (MLP) algorithm. Production tasks use finger tapping and include unpaced and paced tapping (with tones and music), synchronization-continuation, and adaptive tapping to a sequence with a tempo change. BAASTA was tested in a proof-of-concept study with 20 non-musicians (Experiment 1). To validate the results of the MLP procedure, less widespread than standard staircase methods, three perceptual tasks of the battery (duration discrimination, anisochrony detection with tones, and with music) were further tested in a second group of non-musicians using 2 down / 1 up and 3 down / 1 up staircase paradigms (n = 24) (Experiment 2). The results show that the timing profiles provided by BAASTA make it possible to detect cases of timing/rhythm disorders. In addition, perceptual thresholds yielded by the MLP algorithm, although generally comparable to those provided by standard staircase methods, tend to be slightly lower. In sum, BAASTA provides a comprehensive battery to test perceptual and sensorimotor timing skills, and to detect timing/rhythm deficits.
  • Davids, N. (2009). Neurocognitive markers of phonological processing: A clinical perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Davids, N., Van den Brink, D., Van Turennout, M., Mitterer, H., & Verhoeven, L. (2009). Towards neurophysiological assessment of phonemic discrimination: Context effects of the mismatch negativity. Clinical Neurophysiology, 120, 1078-1086. doi:10.1016/j.clinph.2009.01.018.

    Abstract

    This study focusses on the optimal paradigm for simultaneous assessment of auditory and phonemic discrimination in clinical populations. We investigated (a) whether pitch and phonemic deviants presented together in one sequence are able to elicit mismatch negativities (MMNs) in healthy adults and (b) whether MMN elicited by a change in pitch is modulated by the presence of the phonemic deviants.
  • Davidson, D. J., & Indefrey, P. (2009). An event-related potential study on changes of violation and error responses during morphosyntactic learning. Journal of Cognitive Neuroscience, 21(3), 433-446. Retrieved from http://www.mitpressjournals.org/doi/pdf/10.1162/jocn.2008.21031.

    Abstract

    Based on recent findings showing electrophysiological changes in adult language learners after relatively short periods of training, we hypothesized that adult Dutch learners of German would show responses to German gender and adjective declension violations after brief instruction. Adjective declension in German differs from previously studied morphosyntactic regularities in that the required suffixes depend not only on the syntactic case, gender, and number features to be expressed, but also on whether or not these features are already expressed on linearly preceding elements in the noun phrase. Violation phrases and matched controls were presented over three test phases (pretest and training on the first day, and a posttest one week later). During the pretest, no electrophysiological differences were observed between violation and control conditions, and participants’ classification performance was near chance. During the training and posttest phases, classification improved, and there was a P600-like violation response to declension but not gender violations. An error-related response during training was associated with improvement in grammatical discrimination from pretest to posttest. The results show that rapid changes in neuronal responses can be observed in adult learners of a complex morphosyntactic rule, and also that error-related electrophysiological responses may relate to grammar acquisition.
  • Davidson, D. J., & Indefrey, P. (2009). Plasticity of grammatical recursion in German learners of Dutch. Language and Cognitive Processes, 24, 1335-1369. doi:10.1080/01690960902981883.

    Abstract

    Previous studies have examined cross-serial and embedded complement clauses in West Germanic in order to distinguish between different types of working memory models of human sentence processing, as well as different formal language models. Here, adult plasticity in the use of these constructions is investigated by examining the response of German-speaking learners of Dutch using magnetoencephalography (MEG). In three experimental sessions spanning their initial acquisition of Dutch, participants performed a sentence-scene matching task with Dutch sentences including two different verb constituent orders (Dutch verb order, German verb order), and in addition rated similar constructions in a separate rating task. The average planar gradient of the evoked field to the initial verb within the cluster revealed a larger evoked response for the German order relative to the Dutch order between 0.2 to 0.4 s over frontal sensors after 2 weeks, but not initially. The rating data showed that constructions consistent with Dutch grammar, but inconsistent with the German grammar were initially rated as unacceptable, but this preference reversed after 3 months. The behavioural and electrophysiological results suggest that cortical responses to verb order preferences in complement clauses can change within 3 months after the onset of adult language learning, implying that this aspect of grammatical processing remains plastic into adulthood.
  • Davies, R., Kidd, E., & Lander, K. (2009). Investigating the psycholinguistic correlates of speechreading in preschool age children. International Journal of Language & Communication Disorders, 44(2), 164-174. doi:10.1080/13682820801997189.

    Abstract

Background: Previous research has found that newborn infants can match phonetic information in the lips and voice from as young as ten weeks old. There is evidence that access to visual speech is necessary for normal speech development. Although we have an understanding of this early sensitivity, very little research has investigated older children's ability to speechread whole words. Aims: The aim of this study was to identify aspects of preschool children's linguistic knowledge and processing ability that may contribute to speechreading ability. We predicted a significant correlation between receptive vocabulary and speechreading, and that phonological working memory would be a predictor of speechreading performance. Methods & Procedures: Seventy-six children (n = 76) aged between 2;10 and 4;11 years participated. Children were given three pictures and were asked to point to the picture that they thought that the experimenter had silently mouthed (ten trials). Receptive vocabulary and phonological working memory were also assessed. The results were analysed using Pearson correlations and multiple regressions. Outcomes & Results: The results demonstrated that the children could speechread at a rate greater than chance. Pearson correlations revealed significant, positive correlations between receptive vocabulary and speechreading score, phonological error rate and age. Further correlations revealed significant, positive relationships between The Children's Test of Non-Word Repetition (CNRep) and speechreading score, phonological error rate and age. Multiple regression analyses showed that receptive vocabulary best predicts speechreading ability over and above phonological working memory. Conclusions & Implications: The results suggest that preschool children are capable of speechreading, and that this ability is related to vocabulary size. This suggests that children aged between 2;10 and 4;11 are sensitive to visual information in the form of audio-visual mappings. We suggest that current and future therapies are correct to include visual feedback as a therapeutic tool; however, future research needs to be conducted in order to elucidate further the role of speechreading in development.
  • Davis, M. H., Johnsrude, I. S., Hervais-Adelman, A., Taylor, K., & McGettigan, C. (2005). Lexical information drives perceptual learning of distorted speech: Evidence from the comprehension of noise-vocoded sentences. Journal of Experimental Psychology-General, 134(2), 222-241. doi:10.1037/0096-3445.134.2.222.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Dediu, D. (2017). From biology to language change and diversity. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistics systems (pp. 39-52). Berlin: Language Science Press.
  • Dediu, D. (2009). Genetic biasing through cultural transmission: Do simple Bayesian models of language evolution generalize? Journal of Theoretical Biology, 259, 552-561. doi:10.1016/j.jtbi.2009.04.004.

    Abstract

    The recent Bayesian approaches to language evolution and change seem to suggest that genetic biases can impact on the characteristics of language, but, at the same time, that its cultural transmission can partially free it from these same genetic constraints. One of the current debates centres on the striking differences between sampling and a posteriori maximising Bayesian learners, with the first converging on the prior bias while the latter allows a certain freedom to language evolution. The present paper shows that this difference disappears if populations more complex than a single teacher and a single learner are considered, with the resulting behaviours more similar to the sampler. This suggests that generalisations based on the language produced by Bayesian agents in such homogeneous single agent chains are not warranted. It is not clear which of the assumptions in such models are responsible, but these findings seem to support the rising concerns on the validity of the “acquisitionist” assumption, whereby the locus of language change and evolution is taken to be the first language acquirers (children) as opposed to the competent language users (the adults).
  • Dediu, D., Janssen, R., & Moisik, S. R. (2017). Language is not isolated from its wider environment: Vocal tract influences on the evolution of speech and language. Language and Communication, 54, 9-20. doi:10.1016/j.langcom.2016.10.002.

    Abstract

    Language is not a purely cultural phenomenon somehow isolated from its wider environment, and we may only understand its origins and evolution by seriously considering its embedding in this environment as well as its multimodal nature. By environment here we understand other aspects of culture (such as communication technology, attitudes towards language contact, etc.), of the physical environment (ultraviolet light incidence, air humidity, etc.), and of the biological infrastructure for language and speech. We are specifically concerned in this paper with the latter, in the form of the biases, constraints and affordances that the anatomy and physiology of the vocal tract create on speech and language. In a nutshell, our argument is that (a) there is an under-appreciated amount of inter-individual variation in vocal tract (VT) anatomy and physiology, (b) variation that is non-randomly distributed across populations, and that (c) results in systematic differences in phonetics and phonology between languages. Relevant differences in VT anatomy include the overall shape of the hard palate, the shape of the alveolar ridge, the relationship between the lower and upper jaw, to mention just a few, and our data offer a new way to systematically explore such differences and their potential impact on speech. These differences generate very small biases that nevertheless can be amplified by the repeated use and transmission of language, affecting language diachrony and resulting in cross-linguistic synchronic differences. Moreover, the same type of biases and processes might have played an essential role in the emergence and evolution of language, and might allow us a glimpse into the speech and language of extinct humans by, for example, reconstructing the anatomy of parts of their vocal tract from the fossil record and extrapolating the biases we find in present-day humans.
  • Dediu, D. (2010). Linguistic and genetic diversity - how and why are they related? In M. Brüne, F. Salter, & W. McGrew (Eds.), Building bridges between anthropology, medicine and human ethology: Tributes to Wulf Schiefenhövel (pp. 169-178). Bochum: Europäischer Universitätsverlag.

    Abstract

There are some 6000 languages spoken today, classified into approximately 90 linguistic families and many isolates, and also differing along structural (typological) dimensions. Genetically, the human species is remarkably homogeneous, with the existing genetic diversity mostly explained by intra-population differences between individuals, but the remaining inter-population differences have a non-trivial structure. Population splits and contacts influence both languages and genes, in principle allowing them to evolve in parallel ways. The farming/language co-dispersal hypothesis is a well-known such theory, whereby farmers spreading agriculture from its places of origin also spread their genes and languages. A different type of relationship was recently proposed, involving a genetic bias which influences the structural properties of language as it is transmitted across generations. Such a bias was proposed to explain the correlations between the distribution of tone languages and two brain development-related human genes and, if confirmed by experimental studies, it could represent a new factor explaining the distribution of diversity. The present chapter overviews these related topics in the hope that a truly interdisciplinary approach could allow a better understanding of our complex (recent as well as evolutionary) history.
  • Deegan, B., Sturt, B., Ryder, D., Butcher, M., Brumby, S., Long, G., Badngarri, N., Lannigan, J., Blythe, J., & Wightman, G. (2010). Jaru animals and plants: Aboriginal flora and fauna knowledge from the south-east Kimberley and western Top End, north Australia. Halls Creek: Kimberley Language Resource Centre; Palmerston: Department of Natural Resources, Environment, the Arts and Sport.
  • Defina, R. (2010). Aspect and modality in Avatime. Master Thesis, Leiden University.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced as [n], the [n] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Deriziotis, P., & Fisher, S. E. (2017). Speech and Language: Translating the Genome. Trends in Genetics, 33(9), 642-656. doi:10.1016/j.tig.2017.07.002.

    Abstract

    Investigation of the biological basis of human speech and language is being transformed by developments in molecular technologies, including high-throughput genotyping and next-generation sequencing of whole genomes. These advances are shedding new light on the genetic architecture underlying language-related disorders (speech apraxia, specific language impairment, developmental dyslexia) as well as that contributing to variation in relevant skills in the general population. We discuss how state-of-the-art methods are uncovering a range of genetic mechanisms, from rare mutations of large effect to common polymorphisms that increase risk in a subtle way, while converging on neurogenetic pathways that are shared between distinct disorders. We consider the future of the field, highlighting the unusual challenges and opportunities associated with studying genomics of language-related traits.
  • Devaraju, K., Miskinyte, G., Hansen, M. G., Monni, E., Tornero, D., Woods, N. B., Bengzon, J., Ahlenius, H., Lindvall, O., & Kokaia, Z. (2017). Direct conversion of human fibroblasts to functional excitatory cortical neurons integrating into human neural networks. Stem Cell Research & Therapy, 8: 207. doi:10.1186/s13287-017-0658-3.

    Abstract

Background: Human fibroblasts can be directly converted to several subtypes of neurons, but cortical projection neurons have not been generated. Methods: Here we screened for transcription factor combinations that could potentially convert human fibroblasts to functional excitatory cortical neurons. The induced cortical (iCtx) cells were analyzed for cortical neuronal identity using immunocytochemistry, single-cell quantitative polymerase chain reaction (qPCR), electrophysiology, and their ability to integrate into human neural networks in vitro and ex vivo using electrophysiology and rabies virus tracing. Results: We show that a combination of three transcription factors, BRN2, MYT1L, and FEZF2, has the ability to directly convert human fibroblasts to functional excitatory cortical neurons. The conversion efficiency was increased to about 16% by treatment with small molecules and microRNAs. The iCtx cells exhibited electrophysiological properties of functional neurons, had pyramidal-like cell morphology, and expressed key cortical projection neuronal markers. Single-cell analysis of iCtx cells revealed a complex gene expression profile, a subpopulation of them displaying a molecular signature closely resembling that of human fetal primary cortical neurons. The iCtx cells received synaptic inputs from co-cultured human fetal primary cortical neurons, contained spines, and expressed the postsynaptic excitatory scaffold protein PSD95. When transplanted ex vivo to organotypic cultures of adult human cerebral cortex, the iCtx cells exhibited morphological and electrophysiological properties of mature neurons, integrated structurally into the cortical tissue, and received synaptic inputs from adult human neurons. Conclusions: Our findings indicate that functional excitatory cortical neurons, generated here for the first time by direct conversion of human somatic cells, have the capacity for synaptic integration into adult human cortex.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dimitrova, D. V., Redeker, G., & Hoeks, J. C. J. (2009). Did you say a BLUE banana? The prosody of contrast and abnormality in Bulgarian and Dutch. In 10th Annual Conference of the International Speech Communication Association [Interspeech 2009] (pp. 999-1002). ISCA Archive.

    Abstract

    In a production experiment on Bulgarian that was based on a previous study on Dutch [1], we investigated the role of prosody when linguistic and extra-linguistic information coincide or contradict. Speakers described abnormally colored fruits in conditions where contrastive focus and discourse relations were varied. We found that the coincidence of contrast and abnormality enhances accentuation in Bulgarian as it did in Dutch. Surprisingly, when both factors are in conflict, the prosodic prominence of abnormality often overruled focus accentuation in both Bulgarian and Dutch, though the languages also show marked differences.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C., & Narasimhan, B. (2009). Accessibility and topicality in children's use of word order. In J. Chandlee, M. Franchini, S. Lord, & G. M. Rheiner (Eds.), Proceedings of the 33rd annual Boston University Conference on Language Development (BUCLD) (pp. 133-138).
  • Dimroth, C., & Klein, W. (2009). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 153, 5-9.
  • Dimroth, C., & Watorek, M. (2005). Additive scope particles in advanced learner and native speaker discourse. In H. Hendriks (Ed.), The structure of learner varieties (pp. 461-488). Berlin: Mouton de Gruyter.
  • Dimroth, C., & Jordens, P. (Eds.). (2009). Functional categories in learner language. Berlin: Mouton de Gruyter.
  • Dimroth, C., Andorno, C., Benazzo, S., & Verhagen, J. (2010). Given claims about new topics: How Romance and Germanic speakers link changed and maintained information in narrative discourse. Journal of Pragmatics, 42(12), 3328-3344. doi:10.1016/j.pragma.2010.05.009.

    Abstract

    This paper deals with the anaphoric linking of information units in spoken discourse in French, Italian, Dutch and German. We distinguish the information units ‘time’, ‘entity’, and ‘predicate’ and specifically investigate how speakers mark the information structure of their utterances and enhance discourse cohesion in contexts where the predicate contains given information but there is a change in one or more of the other information units. Germanic languages differ from Romance languages in the availability of a set of assertion-related particles (e.g. doch/toch, wel; roughly meaning ‘indeed’) and the option of highlighting the assertion component of a finite verb independently of its lexical content (verum focus). Based on elicited production data from 20 native speakers per language, we show that speakers of Dutch and German relate utterances to one another by focussing on this assertion component, and propose an analysis of the additive scope particles ook/auch (also) along similar lines. Speakers of Romance languages tend to highlight change or maintenance in the other information units. Such differences in the repertoire have consequences for the selection of units that are used for anaphoric linking. We conclude that there is a Germanic and a Romance way of signalling the information flow and enhancing discourse cohesion.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C. (2009). L'acquisition de la finitude en allemand L2 à différents âges. AILE (Acquisition et Interaction en Langue étrangère)/LIA (Languages, Interaction, Acquisition), 1(1), 113-135.

    Abstract

    Ultimate attainment in adult second language learners often differs tremendously from the end state typically achieved by young children learning their first language (L1) or a second language (L2). The research summarized in this article concentrates on developmental steps and orders of acquisition attested in learners of different ages. Findings from a longitudinal study concerned with the acquisition of verbal morpho-syntax in German as an L2 by two young Russian learners (8 and 14 years old) are compared to findings from the acquisition of the same target language by younger children and by untutored adult learners. The study focuses on the acquisition of verbal morphology, the role of auxiliary verbs and the position of finite and non-finite verbs in relation to negation and additive scope particles.
  • Dimroth, C. (2009). Lernervarietäten im Sprachunterricht. Zeitschrift für Literaturwissenschaft und Linguistik, 39(153), 60-80.
  • Dimroth, C. (2010). The acquisition of negation. In L. R. Horn (Ed.), The expression of negation (pp. 39-73). Berlin/New York: Mouton de Gruyter.
  • Dimroth, C. (2009). Stepping stones and stumbling blocks: Why negation accelerates and additive particles delay the acquisition of finiteness in German. In C. Dimroth, & P. Jordens (Eds.), Functional Categories in Learner Language (pp. 137-170). Berlin: Mouton de Gruyter.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.
  • Dingemanse, M. (2017). Brain-to-brain interfaces and the role of language in distributing agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 59-66). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190457204.003.0007.

    Abstract

    Brain-to-brain interfaces, in which brains are physically connected without the intervention of language, promise new ways of collaboration and communication between humans. I examine the narrow view of language implicit in current conceptions of brain-to-brain interfaces and put forward a constructive alternative, stressing the role of language in organising joint agency. Two features of language stand out as crucial: its selectivity, which provides people with much-needed filters between public words and private worlds; and its negotiability, which provides people with systematic opportunities for calibrating understanding and expressing consent and dissent. Without these checks and balances, brain-to-brain interfaces run the risk of reducing people to the level of amoeba in a slime mold; with them, they may mature to become useful extensions of human agency.
  • Dingemanse, M., & Akita, K. (2017). An inverse relation between expressiveness and grammatical integration: on the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501-532. doi:10.1017/S002222671600030X.

    Abstract

    Words and phrases may differ in the extent to which they are susceptible to prosodic foregrounding and expressive morphology: their expressiveness. They may also differ in the degree to which they are integrated in the morphosyntactic structure of the utterance: their grammatical integration. We describe an inverse relation that holds across widely varied languages, such that more expressiveness goes together with less grammatical integration, and vice versa. We review typological evidence for this inverse relation in 10 languages, then quantify and explain it using Japanese corpus data. We do this by tracking ideophones —vivid sensory words also known as mimetics or expressives— across different morphosyntactic contexts and measuring their expressiveness in terms of intonation, phonation and expressive morphology. We find that as expressiveness increases, grammatical integration decreases. Using gesture as a measure independent of the speech signal, we find that the most expressive ideophones are most likely to come together with iconic gestures. We argue that the ultimate cause is the encounter of two distinct and partly incommensurable modes of representation: the gradient, iconic, depictive system represented by ideophones and iconic gestures and the discrete, arbitrary, descriptive system represented by ordinary words. The study shows how people combine modes of representation in speech and demonstrates the value of integrating description and depiction into the scientific vision of language.
  • Dingemanse, M. (2010). [Review of Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition. By Deborah Tannen]. Language in Society, 39(1), 139-140. doi:10.1017/S0047404509990765.

    Abstract

    Reviews the book, Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition by Deborah Tannen. This book is the same as the 1989 original except for an added introduction. This introduction situates TV in the context of intertextuality and gives a survey of relevant research since the book first appeared. The strength of the book lies in its insightful analysis of the auditory side of conversation. Yet talking voices have always been embedded in richly contextualized multimodal speech events. As spontaneous and pervasive involvement strategies, both iconic gestures and ideophones should be of central importance to the analysis of conversational discourse. Unfortunately, someone who picks up this book is pretty much left in the dark about the prevalence of these phenomena in everyday face-to-face interaction all over the world.
  • Dingemanse, M. (2010). Folk definitions of ideophones. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 24-29). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529151.

    Abstract

    Ideophones are marked words that depict sensory events, for example English hippety-hoppety ‘in a limping and hobbling manner’ or Siwu mukumuku ‘mouth movements of a toothless person eating’. They typically have special sound patterns and distinct grammatical properties. Ideophones are found in many languages of the world, suggesting a common fascination with detailed sensory depiction, but reliable data on their meaning and use is still very scarce. This task involves video-recording spontaneous, informal explanations (“folk definitions”) of individual ideophones by native speakers, in their own language. The approach facilitates collection of rich primary data in a planned context while ensuring a large amount of spontaneity and freedom.
  • Dingemanse, M. (2009). Kããã [finalist photo in the 2008 AAA Photo Contest]. Anthropology News, 50(3), 23.

    Abstract

    Kyeei Yao, an age group leader, oversees a festival in Akpafu-Mempeasem, Volta Region, Ghana. The expensive draped cloth, Ashanti-inspired wreath, strings of beads that are handed down through the generations, and digital wristwatch work together to remind us that culture is a moving target, always renewing and reshaping itself. Kããã is a Siwu ideophone for "looking attentively".
  • Dingemanse, M. (2009). Ideophones in unexpected places. In P. K. Austin, O. Bond, M. Charette, D. Nathan, & P. Sells (Eds.), Proceedings of the 2nd Conference on Language Documentation and Linguistic Theory (pp. 83-97). London: School of Oriental and African Studies (SOAS).
  • Dingemanse, M. (2017). Expressiveness and system integration: On the typology of ideophones, with special reference to Siwu. STUF - Language Typology and Universals, 70(2), 363-384. doi:10.1515/stuf-2017-0018.

    Abstract

    Ideophones are often described as words that are highly expressive and morphosyntactically marginal. A study of ideophones in everyday conversations in Siwu (Kwa, eastern Ghana) reveals a landscape of variation and change that sheds light on some larger questions in the morphosyntactic typology of ideophones. The article documents a trade-off between expressiveness and morphosyntactic integration, with high expressiveness linked to low integration and vice versa. It also describes a pathway for deideophonisation and finds that frequency of use is a factor that influences the degree to which ideophones can come to be more like ordinary words. The findings have implications for processes of (de)ideophonisation, ideophone borrowing, and ideophone typology. A key point is that the internal diversity we find in naturally occurring data, far from being mere noise, is patterned variation that can help us to get a handle on the factors shaping ideophone systems within and across languages.
  • Dingemanse, M. (2017). On the margins of language: Ideophones, interjections and dependencies in linguistic theory. In N. J. Enfield (Ed.), Dependencies in language (pp. 195-202). Berlin: Language Science Press. doi:10.5281/zenodo.573781.

    Abstract

    Linguistic discovery is viewpoint-dependent, just like our ideas about what is marginal and what is central in language. In this essay I consider two supposed marginalia —ideophones and interjections— which provide some useful pointers for widening our field of view. Ideophones challenge us to take a fresh look at language and consider how it is that our communication system combines multiple modes of representation. Interjections challenge us to extend linguistic inquiry beyond sentence level, and remind us that language is social-interactive at core. Marginalia, then, are not the obscure, exotic phenomena that can be safely ignored: they represent opportunities for innovation and invite us to keep pushing the edges of linguistic inquiry.
  • Dingemanse, M. (2009). The enduring spoken word [Comment on Oard 2008]. Science, 323(5917), 1010-1011. doi:10.1126/science.323.5917.1010b.
  • Dingemanse, M., Rossi, G., & Floyd, S. (2017). Place reference in story beginnings: a cross-linguistic study of narrative and interactional affordances. Language in Society, 46(2), 129-158. doi:10.1017/S0047404516001019.

    Abstract

    People often begin stories in conversation by referring to person, time, and place. We study story beginnings in three societies and find place reference is recurrently used to (i) set the stage, foreshadowing the type of story and the kind of response due, and to (ii) make the story cohere, anchoring elements of the developing story. Recipients orient to these interactional affordances of place reference by responding in ways that attend to the relevance of place for the story and by requesting clarification when references are incongruent or noticeably absent. The findings are based on 108 story beginnings in three unrelated languages: Cha’palaa, a Barbacoan language of Ecuador; Northern Italian, a Romance language of Italy; and Siwu, a Kwa language of Ghana. The commonalities suggest we have identified generic affordances of place reference, and that storytelling in conversation offers a robust sequential environment for systematic comparative research on conversational structures.
  • Dingemanse, M. (2009). The selective advantage of body-part terms. Journal of Pragmatics, 41(10), 2130-2136. doi:10.1016/j.pragma.2008.11.008.

    Abstract

    This paper addresses the question why body-part terms are so often used to talk about other things than body parts. It is argued that the strategy of falling back on stable common ground to maximize the chances of successful communication is the driving force behind the selective advantage of body-part terms. The many different ways in which languages may implement this universal strategy suggest that, in order to properly understand the privileged role of the body in the evolution of linguistic signs, we have to look beyond the body to language in its socio-cultural context. A theory which acknowledges the interacting influences of stable common ground and diversified cultural practices on the evolution of linguistic signs will offer the most explanatory power for both universal patterns and language-specific variation.
  • Dirksmeyer, T. (2005). Why do languages die? Approaching taxonomies, (re-)ordering causes. In J. Wohlgemuth, & T. Dirksmeyer (Eds.), Bedrohte Vielfalt. Aspekte des Sprach(en)tods – Aspects of language death (pp. 53-68). Berlin: Weißensee.

    Abstract

    Under what circumstances do languages die? Why has their “mortality rate” increased dramatically in the recent past? What “causes of death” can be identified for historical cases, to what extent are these generalizable, and how can they be captured in an explanatory theory? In pursuing these questions, it becomes apparent that in typical cases of language death various causes tend to interact in multiple ways. Speakers’ attitudes towards their language play a critical role in all of this. Existing categorial taxonomies do not succeed in modeling the complex relationships between these factors. Therefore, an alternative, dimensional approach is called for to more adequately address (and counter) the causes of language death in a given scenario.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Dolscheid, S., Shayan, S., Ozturk, O., Majid, A., & Casasanto, D. (2010). Language shapes mental representations of musical pitch: Implications for metaphorical language processing [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 137). York: University of York.

    Abstract

    Speakers often use spatial metaphors to talk about musical pitch (e.g., a low note, a high soprano). Previous experiments suggest that English speakers also think about pitches as high or low in space, even when they're not using language or musical notation (Casasanto, 2010). Do metaphors in language merely reflect pre-existing associations between space and pitch, or might language also shape these non-linguistic metaphorical mappings? To investigate the role of language in pitch representation, we conducted a pair of non-linguistic space-pitch interference experiments in speakers of two languages that use different spatial metaphors. Dutch speakers usually describe pitches as 'high' (hoog) and 'low' (laag). Farsi speakers, however, often describe high-frequency pitches as 'thin' (naazok) and low-frequency pitches as 'thick' (koloft). Do Dutch and Farsi speakers mentally represent pitch differently? To find out, we asked participants to reproduce musical pitches that they heard in the presence of irrelevant spatial information (i.e., lines that varied either in height or in thickness). For the Height Interference experiment, horizontal lines bisected a vertical reference line at one of nine different locations. For the Thickness Interference experiment, a vertical line appeared in the middle of the screen in one of nine thicknesses. In each experiment, the nine different lines were crossed with nine different pitches ranging from C4 to G#4 in semitone increments, to produce 81 distinct trials. If Dutch and Farsi speakers mentally represent pitch the way they talk about it, using different kinds of spatial representations, they should show contrasting patterns of cross-dimensional interference: Dutch speakers' pitch estimates should be more strongly affected by irrelevant height information, and Farsi speakers' by irrelevant thickness information. As predicted, Dutch speakers' pitch estimates were significantly modulated by spatial height but not by thickness. Conversely, Farsi speakers' pitch estimates were modulated by spatial thickness but not by height (2x2 ANOVA on normalized slopes of the effect of space on pitch: F(1,71)=17.15, p<.001). To determine whether language plays a causal role in shaping pitch representations, we conducted a training experiment. Native Dutch speakers learned to use Farsi-like metaphors, describing pitch relationships in terms of thickness (e.g., a cello sounds 'thicker' than a flute). After training, Dutch speakers showed a significant effect of Thickness interference in the non-linguistic pitch reproduction task, similar to native Farsi speakers: on average, pitches accompanied by thicker lines were reproduced as lower in pitch (effect of thickness on pitch: r=-.22, p=.002). By conducting psychophysical tasks, we tested the 'Whorfian' question without using words. Yet, results also inform theories of metaphorical language processing. According to psycholinguistic theories (e.g., Bowdle & Gentner, 2005), highly conventional metaphors are processed without any active mapping from the source to the target domain (e.g., from space to pitch). Our data, however, suggest that when people use verbal metaphors they activate a corresponding non-linguistic mapping from either height or thickness to pitch, strengthening this association at the expense of competing associations. As a result, people who use different metaphors in their native languages form correspondingly different representations of musical pitch. Casasanto, D. (2010). Space for thinking. In V. Evans & P. Chilton (Eds.), Language, cognition and space: State of the art and new directions (pp. 453-478). London: Equinox Publishing. Bowdle, B., & Gentner, D. (2005). The career of metaphor. Psychological Review, 112, 193-216.
  • Doumas, L. A. A., Hamer, A., Puebla, G., & Martin, A. E. (2017). A theory of the detection and learning of structured representations of similarity and relative magnitude. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1955-1960). Austin, TX: Cognitive Science Society.

    Abstract

    Responding to similarity, difference, and relative magnitude (SDM) is ubiquitous in the animal kingdom. However, humans seem unique in the ability to represent relative magnitude (‘more’/‘less’) and similarity (‘same’/‘different’) as abstract relations that take arguments (e.g., greater-than (x,y)). While many models use structured relational representations of magnitude and similarity, little progress has been made on how these representations arise. Models that use these representations assume access to computations of similarity and magnitude a priori, either encoded as features or as output of evaluation operators. We detail a mechanism for producing invariant responses to “same”, “different”, “more”, and “less” which can be exploited to compute similarity and magnitude as an evaluation operator. Using DORA (Doumas, Hummel, & Sandhofer, 2008), these invariant responses can be used to learn structured relational representations of relative magnitude and similarity from pixel images of simple shapes.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voices of four native English speakers, speaking in English, during four days of training. Training was successful, and listeners' accuracy was shown to be influenced by the acoustic characteristics of the speakers and the sound composition of the words used in the training, but not by the lexical frequency of the words, nor by the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Drude, S. (2005). A contribuição alemã à Lingüística e Antropologia dos índios do Brasil, especialmente da Amazônia. In J. J. A. Alves (Ed.), Múltiplas Faces da Históriadas Ciência na Amazônia (pp. 175-196). Belém: EDUFPA.
  • Drude, S. (2009). Nasal harmony in Awetí ‐ A declarative account. ReVEL - Revista Virtual de Estudos da Linguagem, (3). Retrieved from http://www.revel.inf.br/en/edicoes/?mode=especial&id=16.

    Abstract

    This article describes and analyses nasal harmony (or spreading of nasality) in Awetí. It first shows generally how sounds in prefixes adapt to nasality or orality of stems, and how nasality in stems also ‘extends’ to the left. With abstract templates we show which phonetically nasal or oral sequences are possible in Awetí (focusing on stops, pre-nasalized stops and nasals) and which phonological analysis is appropriate to account for these regularities. In Awetí, there are intrinsically nasal and oral vowels and ‘neutral’ vowels which adapt phonetically to a following vowel or consonant, as is the case of sonorant consonants. Pre-nasalized stops such as “nt” are nasalized variants of stops, not post-oralized variants of nasals as in Tupí-Guaraní languages. For nasals and stops in syllable coda (end of morphemes), we postulate archiphonemes which adapt to the preceding vowel or a following consonant. Finally, using a declarative approach, the analysis formulates ‘rules’ (statements) which account for the ‘behavior’ of nasality in Awetí words, making use of “structured sequences” on both the phonetic and phonological levels. So each unit (syllable, morpheme, word, etc.) on any level has three components: a sequence of segments, a constituent structure (where pre-nasalized stops, like diphthongs, correspond to two segments), and an intonation structure. The statements describe which phonetic variants can be combined (concatenated) with which other variants, depending on their nasality or orality.
  • Duffield, N., Matsuo, A., & Roberts, L. (2009). Factoring out the parallelism effect in VP-ellipsis: English vs. Dutch contrasts. Second Language Research, 25, 427-467. doi:10.1177/0267658309349425.

    Abstract

    Previous studies, including Duffield and Matsuo (2001; 2002; 2009), have demonstrated second language learners’ overall sensitivity to a parallelism constraint governing English VP-ellipsis constructions: like native speakers (NS), advanced Dutch, Spanish and Japanese learners of English reliably prefer ellipsis clauses with structurally parallel antecedents over those with non-parallel antecedents. However, these studies also suggest that, in contrast to English native speakers, L2 learners’ sensitivity to parallelism is strongly influenced by other non-syntactic formal factors, such that the constraint applies in a comparatively restricted range of construction-specific contexts. This article reports a set of follow-up experiments — from both computer-based as well as more traditional acceptability judgement tasks — that systematically manipulates these other factors. Convergent results from these tasks confirm a qualitative difference in the judgement patterns of the two groups, as well as important differences between theoreticians’ judgements and those of typical native speakers. We consider the implications of these findings for theories of ultimate attainment in second language acquisition (SLA), as well as for current theoretical accounts of ellipsis.
  • Dugoujon, J.-M., Larrouy, G., Mazières, S., Brucato, N., Sevin, A., Cassar, O., & Gessain, A. (2010). Histoire et dynamique du peuplement humain en Amazonie: L’exemple de la Guyane. In A. Pavé, & G. Fornet (Eds.), Amazonie: Une aventure scientifique et humaine du CNRS (pp. 128-132). Paris: Galaade Éditions.
  • Dunn, M., Terrill, A., Reesink, G., Foley, R. A., & Levinson, S. C. (2005). Structural phylogenetics and the reconstruction of ancient language history. Science, 309(5743), 2072-2075. doi:10.1126/science.1114615.
  • Dunn, M. (2009). Contact and phylogeny in Island Melanesia. Lingua, 119(11), 1664-1678. doi:10.1016/j.lingua.2007.10.026.

    Abstract

    This paper shows that despite evidence of structural convergence between some of the Austronesian and non-Austronesian (Papuan) languages of Island Melanesia, statistical methods can detect two independent genealogical signals derived from linguistic structural features. Earlier work by the author and others has presented a maximum parsimony analysis which gave evidence for a genealogical connection between the non-Austronesian languages of Island Melanesia. Using the same data set, this paper demonstrates for the non-statistician the application of more sophisticated statistical techniques, including Bayesian methods of phylogenetic inference, and shows that the evidence for common ancestry is, if anything, stronger than originally supposed.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardization and orthography design on the Chukchi linguistic ecology. The process of standardization has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result, standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic, as it is based on features of Russian phonology rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Edmiston, P., Perlman, M., & Lupyan, G. (2017). Creating words from iterated vocal imitation. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 331-336). Austin, TX: Cognitive Science Society.

    Abstract

    We report the results of a large-scale (N=1571) experiment to investigate whether spoken words can emerge from the process of repeated imitation. Participants played a version of the children’s game “Telephone”. The first generation was asked to imitate recognizable environmental sounds (e.g., glass breaking, water splashing); subsequent generations imitated the imitators for a total of 8 generations. We then examined whether the vocal imitations became more stable and word-like, retained a resemblance to the original sound, and became more suitable as learned category labels. The results showed (1) the imitations became progressively more word-like, (2) even after 8 generations, they could be matched above chance to the environmental sound that motivated them, and (3) imitations from later generations were more effective as learned category labels. These results show how repeated imitation can create progressively more word-like forms while retaining a semblance of iconicity.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neu-guinea, Trobriand -Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eimer, M., Kiss, M., Press, C., & Sauter, D. (2009). The roles of feature-specific task set and bottom-up salience in attentional capture: An ERP study. Journal of Experimental Psychology: Human Perception and Performance, 35, 1316-1328. doi:10.1037/a0015872.

    Abstract

    We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. ERPs and behavioural performance were measured in two experiments where spatially nonpredictive cues preceded visual search arrays that included a colour-defined target. When cue arrays contained a target-colour singleton, behavioural spatial cueing effects were accompanied by a cue-induced N2pc component, indicative of attentional capture. Behavioural cueing effects and N2pc components were only minimally attenuated for non-singleton relative to singleton target-colour cues, demonstrating that top-down task set has a much greater impact on attentional capture than bottom-up salience. For nontarget-colour singleton cues, no N2pc was triggered, but an anterior N2 component indicative of top-down inhibition was observed. In Experiment 2, these cues produced an inverted behavioural cueing effect, which was accompanied by a delayed N2pc to targets presented at cued locations. These results suggest that perceptually salient visual stimuli without task-relevant features trigger a transient location-specific inhibition process that prevents attentional capture, but delays the selection of subsequent target events.
  • Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd.
  • Eising, E., Shyti, R., 'T Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A, which encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice are specific genes up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional, unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
  • Eising, E., Pelzer, N., Vijfhuizen, L. S., De Vries, B., Ferrari, M. D., 'T Hoen, P. A. C., Terwindt, G. M., & Van den Maagdenberg, A. M. J. M. (2017). Identifying a gene expression signature of cluster headache in blood. Scientific Reports, 7: 40218. doi:10.1038/srep40218.

    Abstract

    Cluster headache is a relatively rare headache disorder, typically characterized by multiple daily, short-lasting attacks of excruciating, unilateral (peri-)orbital or temporal pain associated with autonomic symptoms and restlessness. To better understand the pathophysiology of cluster headache, we used RNA sequencing to identify differentially expressed genes and pathways in whole blood of patients with episodic (n = 19) or chronic (n = 20) cluster headache in comparison with headache-free controls (n = 20). Gene expression data were analysed by gene and by module of co-expressed genes with particular attention to previously implicated disease pathways including hypocretin dysregulation. Only moderate gene expression differences were identified and no associations were found with previously reported pathogenic mechanisms. At the level of functional gene sets, associations were observed for genes involved in several brain-related mechanisms such as GABA receptor function and voltage-gated channels. In addition, genes and modules of co-expressed genes showed a role for intracellular signalling cascades, mitochondria and inflammation. Although larger study samples may be required to identify the full range of involved pathways, these results indicate a role for mitochondria, intracellular signalling and inflammation in cluster headache.

    Additional information

    Eising_etal_2017sup.pdf
  • Eisner, F., & McQueen, J. M. (2005). The specificity of perceptual learning in speech processing. Perception & Psychophysics, 67(2), 224-238.

    Abstract

    We conducted four experiments to investigate the specificity of perceptual adjustments made to unusual speech sounds. Dutch listeners heard a female talker produce an ambiguous fricative [?] (between [f] and [s]) in [f]- or [s]-biased lexical contexts. Listeners with [f]-biased exposure (e.g., [witlo?]; from witlof, “chicory”; witlos is meaningless) subsequently categorized more sounds on an [εf]–[εs] continuum as [f] than did listeners with [s]-biased exposure. This occurred when the continuum was based on the exposure talker's speech (Experiment 1), and when the same test fricatives appeared after vowels spoken by novel female and male talkers (Experiments 1 and 2). When the continuum was made entirely from a novel talker's speech, there was no exposure effect (Experiment 3) unless fricatives from that talker had been spliced into the exposure talker's speech during exposure (Experiment 4). We conclude that perceptual learning about idiosyncratic speech is applied at a segmental level and is, under these exposure conditions, talker specific.
  • Eisner, F., Weber, A., & Melinger, A. (2010). Generalization of learning in pre-lexical adjustments to word-final devoicing [Abstract]. Journal of the Acoustical Society of America, 128, 2323.

    Abstract

    Pre-lexical representations of speech sounds have been shown to change dynamically through a mechanism of lexically driven learning [Norris et al. (2003)]. Here we investigated whether this type of learning occurs in native British English (BE) listeners for a word-final stop contrast which is commonly de-voiced in Dutch-accented English. Specifically, this study asked whether the change in pre-lexical representation also encodes information about the position of the critical sound within a word. After exposure to a native Dutch speaker's productions of de-voiced stops in word-final position (but not in any other positions), BE listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with voiceless final stops (e.g., [si:t], “seat”) facilitated recognition of visual targets with voiced final stops (e.g., “seed”). This learning generalized to test pairs where the critical contrast was in word-initial position: e.g., auditory primes such as [taun] (“town”) facilitated recognition of visual targets like “down”. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The results suggest that under these exposure conditions, word position is not encoded in the pre-lexical adjustment to the accented phoneme contrast.
  • Eisner, F., McGettigan, C., Faulkner, A., Rosen, S., & Scott, S. K. (2010). Inferior frontal gyrus activation predicts individual differences in perceptual learning of cochlear-implant simulations. Journal of Neuroscience, 30(21), 7179-7186. doi:10.1523/JNEUROSCI.4040-09.2010.
  • Enard, W., Gehre, S., Hammerschmidt, K., Hölter, S. M., Blass, T., Somel, M., Brückner, M. K., Schreiweis, C., Winter, C., Sohr, R., Becker, L., Wiebe, V., Nickel, B., Giger, T., Müller, U., Groszer, M., Adler, T., Aguilar, A., Bolle, I., Calzada-Wack, J., Dalke, C., Ehrhardt, N., Favor, J., Fuchs, H., Gailus-Durner, V., Hans, W., Hölzlwimmer, G., Javaheri, A., Kalaydjiev, S., Kallnik, M., Kling, E., Kunder, S., Moßbrugger, I., Naton, B., Racz, I., Rathkolb, B., Rozman, J., Schrewe, A., Busch, D. H., Graw, J., Ivandic, B., Klingenspor, M., Klopstock, T., Ollert, M., Quintanilla-Martinez, L., Schulz, H., Wolf, E., Wurst, W., Zimmer, A., Fisher, S. E., Morgenstern, R., Arendt, T., Hrabé de Angelis, M., Fischer, J., Schwarz, J., & Pääbo, S. (2009). A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell, 137(5), 961-971. doi:10.1016/j.cell.2009.03.041.

    Abstract

    It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.
  • Enfield, N. J. (2005). The body as a cognitive artifact in kinship representations: Hand gesture diagrams by speakers of Lao. Current Anthropology, 46(1), 51-81.

    Abstract

    Central to cultural, social, and conceptual life are cognitive artifacts, the perceptible structures which populate our world and mediate our navigation of it, complementing, enhancing, and altering available affordances for the problem-solving challenges of everyday life. Much work in this domain has concentrated on technological artifacts, especially manual tools and devices and the conceptual and communicative tools of literacy and diagrams. Recent research on hand gestures and other bodily movements which occur during speech shows that the human body serves a number of the functions of "cognitive technologies," affording the special cognitive advantages claimed to be associated exclusively with enduring (e.g., printed or drawn) diagrammatic representations. The issue is explored with reference to extensive data from video-recorded interviews with speakers of Lao in Vientiane, Laos, which show integration of verbal descriptions with complex spatial representations akin to diagrams. The study has implications both for research on cognitive artifacts (namely, that the body is a visuospatial representational resource not to be overlooked) and for research on ethnogenealogical knowledge (namely, that hand gestures reveal speakers' conceptualizations of kinship structure which are of a different nature to and not necessarily retrievable from the accompanying linguistic code).
  • Enfield, N. J., Levinson, S. C., De Ruiter, J. P., & Stivers, T. (2010). Building a corpus of multimodal interaction in your field site. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 30-33). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Enfield, N. J. (2010). Burnt banknotes [Review of the books Making the social world by John R. Searle and The theory of social and cultural selection by W.G. Runciman]. The Times Literary Supplement, September 3, 2010, 3-4.
  • Enfield, N. J. (2009). Common tragedy [Review of the book The native mind and the cultural construction of nature by Scott Atran and Douglas Medin]. The Times Literary Supplement, September 18, 2009, 10-11.
  • Enfield, N. J. (2009). 'Case relations' in Lao, a radically isolating language. In A. L. Malčukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 808-819). Oxford: Oxford University Press.
  • Enfield, N. J. (2010). [Review of the book Gesturecraft: The manu-facture of meaning by Jürgen Streeck]. Pragmatics & Cognition, 18(2), 465-467. doi:10.1075/pc.18.2.11enf.

    Abstract

    Reviews the book Gesturecraft: The Manu-Facture of Meaning by Jürgen Streeck. This book on gesture goes back to well before the recent emergence of a mainstream of interest in the topic. The author presents his vision of the hands' involvement in the making of meaning. His stance falls within a second broad category of work, a much more interdisciplinary approach, which focuses on context more richly construed. The approach not only addresses socially and otherwise distributed cognition, but also tackles the less psychologically framed concerns of meaning as a collaborative achievement and its role in the practicalities of human social life. The author's insistence that the right point of departure for gesture work is "human beings in their daily activities" leads to a view of gesture that begins not with language, and not with mind, but with the types of social and contextual settings that constitute ecologies for the deployment of the hands in making meaning. The author's categories go beyond a reliance on semiotic properties of hand movements or their relation to accompanying speech, being grounded also in contextual aspects of the local setting, social activity type and communicative goals. Thus, this book is a unique contribution to gesture research.
  • Enfield, N. J. (2009). [Review of the book Serial verb constructions: A cross-linguistic typology ed. by Alexandra Y. Aikhenvald and R. M. W. Dixon]. Language, 85, 445-451. doi:10.1353/lan.0.0124.
  • Enfield, N. J. (2005). Depictive and other secondary predication in Lao. In N. P. Himmelmann, & E. Schultze-Berndt (Eds.), Secondary predication and adverbial modification (pp. 379-392). Oxford: Oxford University Press.
  • Enfield, N. J. (2005). Areal linguistics and mainland Southeast Asia. Annual Review of Anthropology, 34, 181-206. doi:10.1146/annurev.anthro.34.081804.120406.
  • Enfield, N. J. (2005). [Comment on the book Explorations in the deictic field]. Current Anthropology, 46(2), 212-212.
  • Enfield, N. J. (2005). [Review of the book Laughter in interaction by Philip Glenn]. Linguistics, 43(6), 1195-1197. doi:10.1515/ling.2005.43.6.1191.
  • Enfield, N. J. (2005). Micro and macro dimensions in linguistic systems. In S. Marmaridou, K. Nikiforidou, & E. Antonopoulou (Eds.), Reviewing linguistic thought: Converging trends for the 21st Century (pp. 313-326). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2010). Human sociality at the heart of language [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.

    Abstract

    Lecture delivered on acceptance of the professorship of Ethnolinguistics, in particular that of Southeast Asia, at the Faculty of Arts of Radboud University Nijmegen, on Wednesday 4 November 2009, by Prof. Dr. N. J. Enfield.
  • Enfield, N. J., & Levinson, S. C. (2010). Metalanguage for speech acts. In Field manual volume 13 (pp. 34-36). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., & Levinson, S. C. (2009). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 12 (pp. 51-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883559.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J. (2009). Language and culture. In L. Wei, & V. Cook (Eds.), Contemporary Applied Linguistics Volume 2 (pp. 83-97). London: Continuum.
  • Enfield, N. J. (2010). Language and culture in Laos: An agenda for research. Journal of Lao Studies, 1(1), 48-54.
  • Enfield, N. J. (2017). Language in the Mainland Southeast Asia Area. In R. Hickey (Ed.), The Cambridge Handbook of Areal Linguistics (pp. 677-702). Cambridge: Cambridge University Press. doi:10.1017/9781107279872.026.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J. (2010). Lost in translation [Letter to the editor]. New Scientist, 207 (2773), 31. doi:10.1016/S0262-4079(10)61971-9.
  • Enfield, N. J. (2009). Everyday ritual in the residential world. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 51-80). Oxford: Berg.
  • Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins.