Publications

  • Dingemanse, M., & Thompson, B. (2020). Playful iconicity: Structural markedness underlies the relation between funniness and iconicity. Language and Cognition, 12(1), 203-224. doi:10.1017/langcog.2019.49.

    Abstract

    Words like ‘waddle’, ‘flop’ and ‘zigzag’ combine playful connotations with iconic form-meaning resemblances. Here we propose that structural markedness may be a common factor underlying perceptions of playfulness and iconicity. Using collected and estimated lexical ratings covering a total of over 70,000 English words, we assess the robustness of this association. We identify cues of phonotactic complexity that covary with funniness and iconicity ratings and that, we propose, serve as metacommunicative signals to draw attention to words as playful and performative. To assess the generalisability of the findings we develop a method to estimate lexical ratings from distributional semantics and apply it to a dataset 20 times the size of the original set of human ratings. The method can be used more generally to extend coverage of lexical ratings. We find that it reliably reproduces correlations between funniness and iconicity as well as cues of structural markedness, though it also amplifies biases present in the human ratings. Our study shows that the playful and the poetic are part of the very texture of the lexicon.
  • Dingemanse, M., Verhoef, T., & Roberts, S. G. (2014). The role of iconicity in the cultural evolution of communicative signals. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).
  • Dingemanse, M., Schuerman, W. L., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117-e133. doi:10.1353/lan.2016.0034.

    Abstract

    Sound symbolism is a phenomenon with broad relevance to the study of language and mind, but there has been a disconnect between its investigations in linguistics and psychology. This study tests the sound-symbolic potential of ideophones—words described as iconic—in an experimental task that improves over prior work in terms of ecological validity and experimental control. We presented 203 ideophones from five languages to eighty-two Dutch listeners in a binary-choice task, in four versions: original recording, full diphone resynthesis, segments-only resynthesis, and prosody-only resynthesis. Listeners guessed the meaning of all four versions above chance, confirming the iconicity of ideophones and showing the viability of speech synthesis as a way of controlling for segmental and suprasegmental properties in experimental studies of sound symbolism. The success rate was more modest than in prior studies using pseudowords like bouba/kiki, implying that assumptions based on such words cannot simply be transferred to natural languages. Prosody and segments together drive the effect: neither alone is sufficient, showing that segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity. We discuss the implications for theory and methods in the empirical study of sound symbolism and iconicity.

    Additional information

    https://muse.jhu.edu/article/619540
  • Djemie, T., Weckhuysen, S., von Spiczak, S., Carvill, G. L., Jaehn, J., Anttonen, A. K., Brilstra, E., Caglayan, H. S., De Kovel, C. G. F., Depienne, C., Gaily, E., Gennaro, E., Giraldez, B. G., Gormley, P., Guerrero-Lopez, R., Guerrini, R., Hamalainen, E., Hartmann, Hernandez-Hernandez, L., Hjalgrim, H., Koeleman, B. P., Leguern, E., Lehesjoki, A. E., Lemke, J. R., Leu, C., Marini, C., McMahon, J. M., Mei, D., Moller, R. S., Muhle, H., Myers, C. T., Nava, C., Serratosa, J. M., Sisodiya, S. M., Stephani, U., Striano, P., van Kempen, M. J., Verbeek, N. E., Usluer, S., Zara, F., Palotie, A., Mefford, H. C., Scheffer, I. E., De Jonghe, P., Helbig, I., & Suls, A. (2016). Pitfalls in genetic testing: the story of missed SCN1A mutations. Molecular Genetics & Genomic Medicine, 4(4), 457-464. doi:10.1002/mgg3.217.

    Abstract

    Background: Sanger sequencing, still the standard technique for genetic testing in most diagnostic laboratories and until recently widely used in research, is gradually being complemented by next-generation sequencing (NGS). No single mutation detection technique is, however, perfect in identifying all mutations. Therefore, we wondered to what extent inconsistencies between Sanger sequencing and NGS affect the molecular diagnosis of patients. Since mutations in SCN1A, the major gene implicated in epilepsy, are found in the majority of Dravet syndrome (DS) patients, we focused on missed SCN1A mutations. Methods: We sent out a survey to 16 genetic centers performing SCN1A testing. Results: We collected data on 28 mutations initially missed using Sanger sequencing. All patients were falsely reported as SCN1A mutation-negative, due to both technical limitations and human errors. Conclusion: We illustrate the pitfalls of Sanger sequencing and, most importantly, provide evidence that SCN1A mutations are an even more frequent cause of DS than already anticipated.
  • Dolscheid, S., Çelik, S., Erkan, H., Küntay, A., & Majid, A. (2020). Space-pitch associations differ in their susceptibility to language. Cognition, 196: 104073. doi:10.1016/j.cognition.2019.104073.

    Abstract

    To what extent are links between musical pitch and space universal, and to what extent are they shaped by language? There is contradictory evidence in support of both universality and linguistic relativity presently, leaving the question open. To address this, speakers of Dutch who talk about pitch in terms of spatial height and speakers of Turkish who use a thickness metaphor were tested in simple nonlinguistic space-pitch association tasks. Both groups showed evidence of a thickness-pitch association, but differed significantly in their height-pitch associations, suggesting the latter may be more susceptible to language. When participants had to match pitches to spatial stimuli where height and thickness were opposed (i.e., a thick line high in space vs. a thin line low in space), Dutch and Turkish differed in their relative preferences. Whereas Turkish participants predominantly opted for a thickness-pitch interpretation—even if this meant a reversal of height-pitch mappings—Dutch participants favored a height-pitch interpretation more often. These findings provide new evidence that speakers of different languages vary in their space-pitch associations, while at the same time showing such associations are not equally susceptible to linguistic influences. Some space-pitch (i.e., height-pitch) associations are more malleable than others (i.e., thickness-pitch).
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Dolscheid, S., Willems, R. M., Hagoort, P., & Casasanto, D. (2014). The relation of space and musical pitch in the brain. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 421-426). Austin, TX: Cognitive Science Society.

    Abstract

    Numerous experiments show that space and musical pitch are closely linked in people's minds. However, the exact nature of space-pitch associations and their neuronal underpinnings are not well understood. In an fMRI experiment we investigated different types of spatial representations that may underlie musical pitch. Participants judged stimuli that varied in spatial height in both the visual and tactile modalities, as well as auditory stimuli that varied in pitch height. In order to distinguish between unimodal and multimodal spatial bases of musical pitch, we examined whether pitch activations were present in modality-specific (visual or tactile) versus multimodal (visual and tactile) regions active during spatial height processing. Judgments of musical pitch were found to activate unimodal visual areas, suggesting that space-pitch associations may involve modality-specific spatial representations, supporting a key assumption of embodied theories of metaphorical mental representation.
  • Donnelly, S., & Kidd, E. (2020). Individual differences in lexical processing efficiency and vocabulary in toddlers: A longitudinal investigation. Journal of Experimental Child Psychology, 192: 104781. doi:10.1016/j.jecp.2019.104781.

    Abstract

    Research on infants’ online lexical processing by Fernald, Perfors, and Marchman (2006) revealed substantial individual differences that are related to vocabulary development, such that infants with better lexical processing efficiency show greater vocabulary growth across time. Although it is clear that individual differences in lexical processing efficiency exist and are meaningful, the theoretical nature of lexical processing efficiency and its relation to vocabulary size is less clear. In the current study, we asked two questions: (a) Is lexical processing efficiency better conceptualized as a central processing capacity or as an emergent capacity reflecting a collection of word-specific capacities? and (b) Is there evidence for a causal role for lexical processing efficiency in early vocabulary development? In the study, 120 infants were tested on a measure of lexical processing at 18, 21, and 24 months, and their vocabulary was measured via parent report. Structural equation modeling of the 18-month time point data revealed that both theoretical constructs represented in the first question above (a) fit the data. A set of regression analyses on the longitudinal data revealed little evidence for a causal effect of lexical processing on vocabulary but revealed a significant effect of vocabulary size on lexical processing efficiency early in development. Overall, the results suggest that lexical processing efficiency is a stable construct in infancy that may reflect the structure of the developing lexicon.
  • Doumas, L. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2279-2284). Austin, TX: Cognitive Science Society.

    Abstract

    Abstract mental representation is fundamental for human cognition. Forming such representations in time, especially from dynamic and noisy perceptual input, is a challenge for any processing modality, but perhaps none so acutely as for language processing. We show that LISA (Hummel & Holyoak, 1997) and DORA (Doumas, Hummel, & Sandhofer, 2008), models built to process and to learn structured (i.e., symbolic) representations of conceptual properties and relations from unstructured inputs, show oscillatory activation during processing that is highly similar to the cortical activity elicited by the linguistic stimuli from Ding et al. (2016). We argue, as do Ding et al. (2016), that this activation reflects the formation of hierarchical linguistic representation, and furthermore, that the kind of computational mechanisms in LISA/DORA (e.g., temporal binding by systematic asynchrony of firing) may underlie the formation of abstract linguistic representations in the human brain. It may be this repurposing that allowed for the generation or emergence of hierarchical linguistic structure, and therefore, human language, from extant cognitive and neural systems. We conclude that models of thinking and reasoning and models of language processing must be integrated — not only for increased plausibility, but in order to advance both fields towards a larger integrative model of human cognition.
  • Doumas, L. A. A., Martin, A. E., & Hummel, J. E. (2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 932-937). Montreal, QB: Cognitive Science Society.

    Abstract

    Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.
  • Doust, C., Gordon, S. D., Garden, N., Fisher, S. E., Martin, N. G., Bates, T. C., & Luciano, M. (2020). The association of dyslexia and developmental speech and language disorder candidate genes with reading and language abilities in adults. Twin Research and Human Genetics, 23(1), 22-32. doi:10.1017/thg.2020.7.

    Abstract

    Reading and language abilities are critical for educational achievement and success in adulthood. Variation in these traits is highly heritable, but the underlying genetic architecture is largely undiscovered. Genetic studies of reading and language skills traditionally focus on children with developmental disorders; however, much larger unselected adult samples are available, increasing power to identify associations with specific genetic variants of small effect size. We introduce an Australian adult population cohort (41.7–73.2 years of age, N = 1505) in which we obtained data using validated measures of several aspects of reading and language abilities. We performed genetic association analysis for a reading and spelling composite score, nonword reading (assessing phonological processing: a core component in learning to read), phonetic spelling, self-reported reading impairment and nonword repetition (a marker of language ability). Given the limited power in a sample of this size (~80% power to find a minimum effect size of 0.005), we focused on analyzing candidate genes that have been associated with dyslexia and developmental speech and language disorders in prior studies. In gene-based tests, FOXP2, a gene implicated in speech/language disorders, was associated with nonword repetition (p < .001), phonetic spelling (p = .002) and the reading and spelling composite score (p < .001). Gene-set analyses of candidate dyslexia and speech/language disorder genes were not significant. These findings contribute to the assessment of genetic associations in reading and language disorders, crucial for understanding their etiology and informing intervention strategies, and validate the approach of using unselected adult samples for gene discovery in language and reading.

    Additional information

    Supplementary materials
  • Dowell, C., Hajnal, A., Pouw, W., & Wagman, J. B. (2020). Visual and haptic perception of affordances of feelies. Perception, 49(9), 905-925. doi:10.1177/0301006620946532.

    Abstract

    Most objects have well-defined affordances. Investigating perception of affordances of objects that were not created for a specific purpose would provide insight into how affordances are perceived. In addition, comparison of perception of affordances for such objects across different exploratory modalities (visual vs. haptic) would offer a strong test of the lawfulness of information about affordances (i.e., the invariance of such information over transformation). Along these lines, “feelies”— objects created by Gibson with no obvious function and unlike any common object—could shed light on the processes underlying affordance perception. This study showed that when observers reported potential uses for feelies, modality significantly influenced what kind of affordances were perceived. Specifically, visual exploration resulted in more noun labels (e.g., “toy”) than haptic exploration which resulted in more verb labels (i.e., “throw”). These results suggested that overlapping, but distinct classes of action possibilities are perceivable using vision and haptics. Semantic network analyses revealed that visual exploration resulted in object-oriented responses focused on object identification, whereas haptic exploration resulted in action-oriented responses. Cluster analyses confirmed these results. Affordance labels produced in the visual condition were more consistent, used fewer descriptors, were less diverse, but more novel than in the haptic condition.
  • Drijvers, L., Mulder, K., & Ernestus, M. (2016). Alpha and gamma band oscillations index differential processing of acoustically reduced and full forms. Brain and Language, 153-154, 27-37. doi:10.1016/j.bandl.2016.01.003.

    Abstract

    Reduced forms like yeshay for yesterday often occur in conversations. Previous behavioral research reported a processing advantage for full over reduced forms. The present study investigated whether this processing advantage is reflected in a modulation of alpha (8–12 Hz) and gamma (30+ Hz) band activity. In three electrophysiological experiments, participants listened to full and reduced forms in isolation (Experiment 1), sentence-final position (Experiment 2), or mid-sentence position (Experiment 3). Alpha power was larger in response to reduced forms than to full forms, but only in Experiments 1 and 2. We interpret these increases in alpha power as reflections of higher auditory cognitive load. In all experiments, gamma power only increased in response to full forms, which we interpret as showing that lexical activation spreads more quickly through the semantic network for full than for reduced forms. These results confirm a processing advantage for full forms, especially in non-medial sentence position.
  • Drijvers, L., & Ozyurek, A. (2020). Non-native listeners benefit less from gestures and visible speech than native listeners during degraded speech comprehension. Language and Speech, 63(2), 209-220. doi:10.1177/0023830919831311.

    Abstract

    Native listeners benefit from both visible speech and iconic gestures to enhance degraded speech comprehension (Drijvers & Ozyürek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately, or severely degraded speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible speech and gestures, especially since the benefit from visible speech was minimal when the signal quality was not sufficient.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Lexically-guided perceptual learning in non-native listening. Bilingualism: Language and Cognition, 19(5), 914-920. doi:10.1017/S136672891600002X.

    Abstract

    There is ample evidence that native and non-native listeners use lexical knowledge to retune their native phonetic categories following ambiguous pronunciations. The present study investigates whether a non-native ambiguous sound can retune non-native phonetic categories. After a brief exposure to an ambiguous British English [l/ɹ] sound, Dutch listeners demonstrated retuning. This retuning was, however, asymmetrical: the non-native listeners seemed to show (more) retuning of the /ɹ/ category than of the /l/ category, suggesting that non-native listeners can retune non-native phonetic categories. This asymmetry is argued to be related to the large phonetic variability of /r/ in both Dutch and English.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2014). Phoneme category retuning in a non-native language. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 553-557).

    Abstract

    Previous studies have demonstrated that native listeners modify their interpretation of a speech sound when a talker produces an ambiguous sound in order to quickly tune into a speaker, but there is hardly any evidence that non-native listeners employ a similar mechanism when encountering ambiguous pronunciations. So far, one study demonstrated this lexically-guided perceptual learning effect for non-natives, using phoneme categories similar in the native language of the listeners and the non-native language of the stimulus materials. The present study investigates the question whether phoneme category retuning is possible in a non-native language for a contrast, /l/-/r/, which is phonetically differently embedded in the native (Dutch) and non-native (English) languages involved. Listening experiments indeed showed a lexically-guided perceptual learning effect. Assuming that Dutch listeners have different phoneme categories for the native Dutch and non-native English /r/, as marked differences between the languages exist for /r/, these results, for the first time, seem to suggest that listeners are not only able to retune their native phoneme categories but also their non-native phoneme categories to include ambiguous pronunciations.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Processing and adaptation to ambiguous sounds during the course of perceptual learning. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2811-2815). doi:10.21437/Interspeech.2016-814.

    Abstract

    Listeners use their lexical knowledge to interpret ambiguous sounds, and retune their phonetic categories to include this ambiguous sound. Although there is ample evidence for lexically-guided retuning, the adaptation process is not fully understood. Using a lexical decision task with an embedded auditory semantic priming task, the present study investigates whether words containing an ambiguous sound are processed in the same way as “natural” words and whether adaptation to the ambiguous sound tends to equalize the processing of “ambiguous” and natural words. Analyses of the yes/no responses and reaction times to natural and “ambiguous” words showed that words containing an ambiguous sound were accepted as words less often and were processed slower than the same words without ambiguity. The difference in acceptance disappeared after exposure to approximately 15 ambiguous items. Interestingly, lower acceptance rates and slower processing did not have an effect on the processing of semantic information of the following word. However, lower acceptance rates of ambiguous primes predict slower reaction times of these primes, suggesting an important role of stimulus-specific characteristics in triggering lexically-guided perceptual learning.
  • Drude, S., Trilsbeek, P., Sloetjes, H., & Broeder, D. (2014). Best practices in the creation, archiving and dissemination of speech corpora at the Language Archive. In S. Ruhi, M. Haugh, T. Schmidt, & K. Wörner (Eds.), Best Practices for Spoken Corpora in Linguistic Research (pp. 183-207). Newcastle upon Tyne: Cambridge Scholars Publishing.
  • Drude, S. (2014). Reduplication as a tool for morphological and phonological analysis in Awetí. In G. G. Gómez, & H. Van der Voort (Eds.), Reduplication in Indigenous languages of South America (pp. 185-216). Leiden: Brill.
  • Drude, S., Broeder, D., & Trilsbeek, P. (2014). The Language Archive and its solutions for sustainable endangered languages corpora. Book 2.0, 4, 5-20. doi:10.1386/btwo.4.1-2.5_1.

    Abstract

    Since the late 1990s, the technical group at the Max-Planck-Institute for Psycholinguistics has worked on solutions for important challenges in building sustainable data archives, in particular, how to guarantee long-time availability of digital research data for future research. The support for the well-known DOBES (Documentation of Endangered Languages) programme has greatly inspired and advanced this work, and led to the ongoing development of a whole suite of tools for annotating, cataloguing and archiving multi-media data. At the core of the LAT (Language Archiving Technology) tools is the IMDI metadata schema, now being integrated into a larger network of digital resources in the European CLARIN project. The multi-media annotator ELAN (with its web-based cousin ANNEX) is now well known not only among documentary linguists. We aim at presenting an overview of the solutions, both achieved and in development, for creating and exploiting sustainable digital data, in particular in the area of documenting languages and cultures, and their interfaces with other related developments.
  • Drude, S. (2004). Wörterbuchinterpretation: Integrative Lexikographie am Beispiel des Guaraní. Tübingen: Niemeyer.

    Abstract

    This study provides an answer to the question of how dictionaries should be read. For this purpose, articles taken from an outline for a Guaraní-German dictionary geared to established lexicographic practice are provided with standardized interpretations. Each article is systematically assigned a formal sentence making its meaning explicit both for content words (including polysemes) and functional words or affixes. Integrative Linguistics proves its theoretical and practical value both for the description of Guaraní (indigenous Indian language spoken in Paraguay, Argentina and Brazil) and in metalexicographic terms.
  • Dunn, M. (2014). [Review of the book Evolutionary Linguistics by April McMahon and Robert McMahon]. American Anthropologist, 116(3), 690-691.
  • Dunn, M. (2014). Gender determined dialect variation. In G. G. Corbett (Ed.), The expression of gender (pp. 39-68). Berlin: De Gruyter.
  • Dunn, M. (2014). Language phylogenies. In C. Bowern, & B. Evans (Eds.), The Routledge handbook of historical linguistics (pp. 190-211). London: Routledge.
  • Dunn, M., & Terrill, A. (2004). Lexical comparison between Papuan languages: Inland bird and tree species. In A. Majid (Ed.), Field Manual Volume 9 (pp. 65-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492942.

    Abstract

    The Pioneers project seeks to uncover relationships between the Papuan languages of Island Melanesia. One basic way to uncover linguistic relationships, either contact or genetic, is through lexical comparison. We have seen very few shared words between our Papuan languages and any other languages, either Oceanic or Papuan, but most of the words which are shared are shared because they are commonly borrowed from Oceanic languages. This task is aimed at enabling fieldworkers to collect terms for inland bird and tree species. In the past it has proved very difficult for non-experts to identify plant and bird species, so the task consists of a booklet of colour pictures of some of the more common species, with information on the range and habits of each species, as well as some information on their cultural uses, which should enable better identification. It is intended that fieldworkers will show this book to consultants and use it as an elicitation aid.
  • Eaves, L. J., St Pourcain, B., Smith, G. D., York, T. P., & Evans, D. M. (2014). Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”). Behavior Genetics, 44(5), 445-455. doi:10.1007/s10519-014-9666-6.

    Abstract

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ~4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered.
  • Edmunds, R., L'Hours, H., Rickards, L., Trilsbeek, P., Vardigan, M., & Mokrane, M. (2016). Core trustworthy data repositories requirements. Zenodo, 168411. doi:10.5281/zenodo.168411.

    Abstract

    The Core Trustworthy Data Repository Requirements were developed by the DSA–WDS Partnership Working Group on Repository Audit and Certification, a Working Group (WG) of the Research Data Alliance. The goal of the effort was to create a set of harmonized common requirements for certification of repositories at the core level, drawing from criteria already put in place by the Data Seal of Approval (DSA: www.datasealofapproval.org) and the ICSU World Data System (ICSU-WDS: https://www.icsu-wds.org/services/certification). An additional goal of the project was to develop common procedures to be implemented by both DSA and ICSU-WDS. Ultimately, the DSA and ICSU-WDS plan to collaborate on a global framework for repository certification that moves from the core to the extended (nestor-Seal DIN 31644), to the formal (ISO 16363) level.
  • Eekhof, L. S., Van Krieken, K., & Sanders, J. (2020). VPIP: A lexical identification procedure for perceptual, cognitive, and emotional viewpoint in narrative discourse. Open Library of Humanities, 6(1): 18. doi:10.16995/olh.483.

    Abstract

    Although previous work on viewpoint techniques has shown that viewpoint is ubiquitous in narrative discourse, approaches to identify and analyze the linguistic manifestations of viewpoint are currently scattered over different disciplines and dominated by qualitative methods. This article presents the ViewPoint Identification Procedure (VPIP), the first systematic method for the lexical identification of markers of perceptual, cognitive and emotional viewpoint in narrative discourse. Use of this step-wise procedure is facilitated by a large appendix of Dutch viewpoint markers. After the introduction of the procedure and discussion of some special cases, we demonstrate its application by discussing three types of narrative excerpts: a literary narrative, a news narrative, and an oral narrative. Applying the identification procedure to the full news narrative, we show that the VPIP can be reliably used to detect viewpoint markers in long stretches of narrative discourse. As such, the systematic identification of viewpoint has the potential to benefit both established viewpoint scholars and researchers from other fields interested in the analytical and experimental study of narrative and viewpoint. Such experimental studies could complement qualitative studies, ultimately advancing our theoretical understanding of the relation between the linguistic presentation and cognitive processing of viewpoint. Suggestions for elaboration of the VPIP, particularly in the realm of pragmatic viewpoint marking, are formulated in the final part of the paper.

    Additional information

    appendix
  • Egger, J., Rowland, C. F., & Bergmann, C. (2020). Improving the robustness of infant lexical processing speed measures. Behavior Research Methods, 52, 2188-2201. doi:10.3758/s13428-020-01385-5.

    Abstract

    Visual reaction times to target pictures after naming events are an informative measurement in language acquisition research, because gaze shifts measured in looking-while-listening paradigms are an indicator of infants’ lexical speed of processing. This measure is very useful, as it can be applied from a young age onwards and has been linked to later language development. However, to obtain valid reaction times, the infant is required to switch the fixation of their eyes from a distractor to a target object. This means that usually at least half the trials have to be discarded—those where the participant is already fixating the target at the onset of the target word—so that no reaction time can be measured. With few trials, reliability suffers, which is especially problematic when studying individual differences. In order to solve this problem, we developed a gaze-triggered looking-while-listening paradigm. The trials do not differ from the original paradigm apart from the fact that the target object is chosen depending on the infant’s eye fixation before naming. The object the infant is looking at becomes the distractor and the other object is used as the target, requiring a fixation switch, and thus providing a reaction time. We tested our paradigm with forty-three 18-month-old infants, comparing the results to those from the original paradigm. The Gaze-triggered paradigm yielded more valid reaction time trials, as anticipated. The results of a ranked correlation between the conditions confirmed that the manipulated paradigm measures the same concept as the original paradigm.
  • Ehrich, V., & Levelt, W. J. M. (Eds.). (1982). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.3 1982. Nijmegen: MPI for Psycholinguistics.
  • Eielts, C., Pouw, W., Ouwehand, K., Van Gog, T., Zwaan, R. A., & Paas, F. (2020). Co-thought gesturing supports more complex problem solving in subjects with lower visual working-memory capacity. Psychological Research, 84, 502-513. doi:10.1007/s00426-018-1065-9.

    Abstract

    During silent problem solving, hand gestures arise that have no communicative intent. The role of such co-thought gestures in cognition has been understudied in cognitive research as compared to co-speech gestures. We investigated whether gesticulation during silent problem solving supported subsequent performance in a Tower of Hanoi problem-solving task, in relation to visual working-memory capacity and task complexity. Seventy-six participants were assigned to either an instructed gesture condition or a condition that allowed them to gesture, but without explicit instructions to do so. This resulted in three gesture groups: (1) non-gesturing; (2) spontaneous gesturing; (3) instructed gesturing. In line with the embedded/extended cognition perspective on gesture, gesturing benefited complex problem-solving performance for participants with a lower visual working-memory capacity, but not for participants with a lower spatial working-memory capacity.
  • Eijk, L., Fletcher, A., McAuliffe, M., & Janse, E. (2020). The effects of word frequency and word probability on speech rhythm in dysarthria. Journal of Speech, Language, and Hearing Research, 63, 2833-2845. doi:10.1044/2020_JSLHR-19-00389.

    Abstract

    Purpose

    In healthy speakers, the more frequent and probable a word is in its context, the shorter the word tends to be. This study investigated whether these probabilistic effects were similarly sized for speakers with dysarthria of different severities.
    Method

    Fifty-six speakers of New Zealand English (42 speakers with dysarthria and 14 healthy speakers) were recorded reading the Grandfather Passage. Measurements of word duration, frequency, and transitional word probability were taken.
    Results

    As hypothesized, words with a higher frequency and probability tended to be shorter in duration. There was also a significant interaction between word frequency and speech severity. This indicated that the more severe the dysarthria, the smaller the effects of word frequency on speakers' word durations. Transitional word probability also interacted with speech severity, but did not account for significant unique variance in the full model.
    Conclusions

    These results suggest that, as the severity of dysarthria increases, the duration of words is less affected by probabilistic variables. These findings may be due to reductions in the control and execution of muscle movement exhibited by speakers with dysarthria.
  • Eising, E., Huisman, S. M., Mahfouz, A., Vijfhuizen, L. S., Anttila, V., Winsvold, B. S., Kurth, T., Ikram, M. A., Freilinger, T., Kaprio, J., Boomsma, D. I., van Duijn, C. M., Järvelin, M.-R., Zwart, J.-A., Quaye, L., Strachan, D. P., Kubisch, C., Dichgans, M., Davey Smith, G., Stefansson, K., Palotie, A., Chasman, D. I., Ferrari, M. D., Terwindt, G. M., de Vries, B., Nyholt, D. R., Lelieveldt, B. P., van den Maagdenberg, A. M., & Reinders, M. J. (2016). Gene co-expression analysis identifies brain regions and cell types involved in migraine pathophysiology: a GWAS-based study using the Allen Human Brain Atlas. Human Genetics, 135(4), 425-439. doi:10.1007/s00439-016-1638-x.

    Abstract

    Migraine is a common disabling neurovascular brain disorder typically characterised by attacks of severe headache and associated with autonomic and neurological symptoms. Migraine is caused by an interplay of genetic and environmental factors. Genome-wide association studies (GWAS) have identified over a dozen genetic loci associated with migraine. Here, we integrated migraine GWAS data with high-resolution spatial gene expression data of normal adult brains from the Allen Human Brain Atlas to identify specific brain regions and molecular pathways that are possibly involved in migraine pathophysiology. To this end, we used two complementary methods. In GWAS data from 23,285 migraine cases and 95,425 controls, we first studied modules of co-expressed genes that were calculated based on human brain expression data for enrichment of genes that showed association with migraine. Enrichment of a migraine GWAS signal was found for five modules that suggest involvement in migraine pathophysiology of: (i) neurotransmission, protein catabolism and mitochondria in the cortex; (ii) transcription regulation in the cortex and cerebellum; and (iii) oligodendrocytes and mitochondria in subcortical areas. Second, we used the high-confidence genes from the migraine GWAS as a basis to construct local migraine-related co-expression gene networks. Signatures of all brain regions and pathways that were prominent in the first method also surfaced in the second method, thus providing support that these brain regions and pathways are indeed involved in migraine pathophysiology.
  • Eising, E., De Leeuw, C., Min, J. L., Anttila, V., Verheijen, M. H. G., Terwindt, G. M., Dichgans, M., Freilinger, T., Kubisch, C., Ferrari, M. D., Smit, A. B., De Vries, B., Palotie, A., Van Den Maagdenberg, A. M. J. M., & Posthuma, D. (2016). Involvement of astrocyte and oligodendrocyte gene sets in migraine. Cephalalgia, 36(7), 640-647. doi:10.1177/0333102415618614.

    Abstract

    Migraine is a common episodic brain disorder characterized by recurrent attacks of severe unilateral headache and additional neurological symptoms. Two main migraine types can be distinguished based on the presence of aura symptoms that can accompany the headache: migraine with aura and migraine without aura. Multiple genetic and environmental factors confer disease susceptibility. Recent genome-wide association studies (GWAS) indicate that migraine susceptibility genes are involved in various pathways, including neurotransmission, which have already been implicated in genetic studies of monogenic familial hemiplegic migraine, a subtype of migraine with aura. Methods: To further explore the genetic background of migraine, we performed a gene set analysis of migraine GWAS data of 4954 clinic-based patients with migraine, as well as 13,390 controls. Curated sets of synaptic genes and sets of genes predominantly expressed in three glial cell types (astrocytes, microglia and oligodendrocytes) were investigated. Discussion: Our results show that gene sets containing astrocyte- and oligodendrocyte-related genes are associated with migraine, which is especially true for gene sets involved in protein modification and signal transduction. Observed differences between migraine with aura and migraine without aura indicate that both migraine types, at least in part, seem to have a different genetic background.
  • Emmendorfer, A. K., Correia, J. M., Jansma, B. M., Kotz, S. A., & Bonte, M. (2020). ERP mismatch response to phonological and temporal regularities in speech. Scientific Reports, 10: 9917. doi:10.1038/s41598-020-66824-x.

    Abstract

    Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a marker for experience-dependent change detection, where its timing and amplitude are indicative of the perceptual system’s sensitivity to presented stimuli. We hypothesized that more predictable stimuli (i.e. high phonotactic probability and first syllable stress) would facilitate change detection, indexed by shorter peak latencies or greater peak amplitudes of the MMN. This hypothesis was confirmed for phonotactic probability: high phonotactic probability deviants elicited an earlier MMN than low phonotactic probability deviants. We do not observe a significant modulation of the MMN to variations in syllable stress. Our findings confirm that speech perception is shaped by formal and temporal predictability. This paradigm may be useful to investigate the contribution of implicit processing of statistical regularities during (a)typical language development.

    Additional information

    supplementary information
  • Emmorey, K., & Ozyurek, A. (2014). Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 657-666). Cambridge, Mass: MIT Press.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J., Levinson, S. C., De Ruiter, J. P., & Stivers, T. (2004). Building a corpus of multimodal interaction in your field site. In A. Majid (Ed.), Field Manual Volume 9 (pp. 32-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506951.

    Abstract

    This Field Manual entry has been superseded by the 2007 version:
    https://doi.org/10.17617/2.468728

  • Enfield, N. J. (2014). Causal dynamics of language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 325-342). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2004). Adjectives in Lao. In R. M. W. Dixon, & A. Y. Aikhenvald (Eds.), Adjective classes: A cross-linguistic typology (pp. 323-347). Oxford: Oxford University Press.
  • Enfield, N. J. (2004). Areal grammaticalisation of postverbal 'acquire' in mainland Southeast Asia. In S. Burusphat (Ed.), Proceedings of the 11th Southeast Asia Linguistics Society Meeting (pp. 275-296). Tempe: Arizona State University.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2014). Human agency and the infrastructure for requests. In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 35-50). Amsterdam: John Benjamins.

    Abstract

    This chapter discusses some of the elements of human sociality that serve as the social and cognitive infrastructure or preconditions for the use of requests and other kinds of recruitments in interaction. The notion of an agent with goals is a canonical starting point, though importantly agency tends not to be wholly located in individuals, but rather is socially distributed. This is well illustrated in the case of requests, in which the person or group that has a certain goal is not necessarily the one who carries out the behavior towards that goal. The chapter focuses on the role of semiotic (mostly linguistic) resources in negotiating the distribution of agency with request-like actions, with examples from video-recorded interaction in Lao, a language spoken in Laos and nearby countries. The examples illustrate five hallmarks of requesting in human interaction, which show some ways in which our ‘manipulation’ of other people is quite unlike our manipulation of tools: (1) that even though B is being manipulated, B wants to help, (2) that while A is manipulating B now, A may be manipulated in return later; (3) that the goal of the behavior may be shared between A and B, (4) that B may not comply, or may comply differently than requested, due to actual or potential contingencies, and (5) that A and B are accountable to one another; reasons may be asked for, and/or given, for the request. These hallmarks of requesting are grounded in a prosocial framework of human agency.
  • Enfield, N., Kelly, A., & Sprenger, S. (2004). Max-Planck-Institute for Psycholinguistics: Annual Report 2004. Nijmegen: MPI for Psycholinguistics.
  • Enfield, N. J., & Sidnell, J. (2014). Language presupposes an enchronic infrastructure for social interaction. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 92-104). Oxford: Oxford University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Interdisciplinary perspectives. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 599-602). Cambridge: Cambridge University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Introduction: Directions in the anthropology of language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 1-24). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Natural causes of language: Frames, biases and cultural transmission. Berlin: Language Science Press. Retrieved from http://langsci-press.org/catalog/book/48.

    Abstract

    What causes a language to be the way it is? Some features are universal, some are inherited, others are borrowed, and yet others are internally innovated. But no matter where a bit of language is from, it will only exist if it has been diffused and kept in circulation through social interaction in the history of a community. This book makes the case that a proper understanding of the ontology of language systems has to be grounded in the causal mechanisms by which linguistic items are socially transmitted, in communicative contexts. A biased transmission model provides a basis for understanding why certain things and not others are likely to develop, spread, and stick in languages. Because bits of language are always parts of systems, we also need to show how it is that items of knowledge and behavior become structured wholes. The book argues that to achieve this, we need to see how causal processes apply in multiple frames or 'time scales' simultaneously, and we need to understand and address each and all of these frames in our work on language. This forces us to confront implications that are not always comfortable: for example, that "a language" is not a real thing but a convenient fiction, that language-internal and language-external processes have a lot in common, and that tree diagrams are poor conceptual tools for understanding the history of languages. By exploring avenues for clear solutions to these problems, this book suggests a conceptual framework for ultimately explaining, in causal terms, what languages are like and why they are like that.
  • Enfield, N. J. (2004). Repair sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 48-52). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492945.

    Abstract

    This Field Manual entry has been superseded by the 2007 version: https://doi.org/10.17617/2.468724

  • Enfield, N. J., Kockelman, P., & Sidnell, J. (Eds.). (2014). The Cambridge handbook of linguistic anthropology. Cambridge: Cambridge University Press.
  • Enfield, N. J., Sidnell, J., & Kockelman, P. (2014). System and function. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 25-28). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). The item/system problem. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 48-77). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Transmission biases in the cultural evolution of language: Towards an explanatory framework. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 325-335). Oxford: Oxford University Press.
  • Erard, M. (2016). Solving Australia's language puzzle. Science, 353(6306), 1357-1359. doi:10.1126/science.353.6306.1357.
  • Ergin, R., Raviv, L., Senghas, A., Padden, C., & Sandler, W. (2020). Community structure affects convergence on uniform word orders: Evidence from emerging sign languages. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 84-86). Nijmegen: The Evolution of Language Conferences.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top-down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners’ consciousness. These findings are best accounted for in hybrid models, which assume both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech, and further research is needed to obtain detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., & Giezenaar, G. (2014). Een goed verstaander heeft maar een half woord nodig. In B. Bossers (Ed.), Vakwerk 9: Achtergronden van de NT2-lespraktijk: Lezingen conferentie Hoeven 2014 (pp. 81-92). Amsterdam: BV NT2.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
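
    As a purely illustrative aside, the deterministic analysis that this abstract contrasts with analogical accounts can be stated in a few lines of code: stems ending in a voiceless segment take the suffix te, all others take de (the Dutch "'t kofschip" rule). The sketch below operates on surface orthography only and is an assumption-laden toy, not the authors' experimental model; stems whose underlying voicing differs from their spelling (e.g. verhuizen) are deliberately not handled.

      # Toy version of the textbook voicing rule for the Dutch regular past tense.
      # Surface orthography only; stems with underlying voiced obstruents spelled
      # voiceless are outside the scope of this sketch.

      def past_tense_suffix(stem: str) -> str:
          """Return 'te' if the stem ends in a voiceless segment ('t kofschip), else 'de'."""
          if stem.endswith("ch") or stem[-1] in "tkfsp":
              return "te"
          return "de"

      for stem in ["werk", "speel", "lach", "rem"]:
          print(stem + past_tense_suffix(stem))
      # werkte, speelde, lachte, remde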
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Ernestus, M., Giezenaar, G., & Dikmans, M. (2016). Ikfstajezotuuknie: Half uitgesproken woorden in alledaagse gesprekken. Les, 199, 7-9.

    Abstract

    In informal conversations, Amsterdam often sounds like Amsdam and Rotterdam like Rodam, without most native speakers being aware of it. In everyday situations a considerable proportion of the sounds is dropped. In addition, many sounds are articulated more weakly (for example, a d as a j when the mouth is not fully closed). It seems likely that these half-pronounced words pose a problem for second language learners, since reduced forms can differ substantially from the forms these learners have been taught. Whether this is really the case is what the authors investigated in two studies. Before discussing these two studies, they first briefly describe the different types of reduction that occur.
  • Ernestus, M. (2016). L'utilisation des corpus oraux pour la recherche en (psycho)linguistique. In M. Kilani-Schoch, C. Surcouf, & A. Xanthos (Eds.), Nouvelles technologies et standards méthodologiques en linguistique (pp. 65-93). Lausanne: Université de Lausanne.
  • Ernestus, M., Kočková-Amortová, L., & Pollak, P. (2014). The Nijmegen corpus of casual Czech. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 365-370).

    Abstract

    This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.
  • Eryilmaz, K., Little, H., & De Boer, B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/125.html.

    Abstract

    We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is considerable room for refinement, such as explicit duration modeling, the incorporation of autoregressive elements, and relaxing the Markovian assumption, in order to accommodate specific details.
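
    As a rough sketch of the kind of analysis described above (not the authors' actual pipeline), a Gaussian HMM can be fitted to concatenated continuous trajectories and its decoded hidden states inspected as candidate building blocks. The example uses the hmmlearn package on synthetic data; the number of states, feature dimensionality, and covariance type are arbitrary assumptions.

      # Fit a Gaussian HMM to a repertoire of continuous signals and decode
      # hidden states as tentative sub-units. Synthetic data stands in for
      # the experimental trajectories.
      import numpy as np
      from hmmlearn import hmm

      rng = np.random.default_rng(0)

      # 20 signals, each a (length x 2) trajectory (e.g. x/y position over time).
      signals = [rng.standard_normal((int(rng.integers(30, 60)), 2)) for _ in range(20)]

      # hmmlearn expects one concatenated array plus per-sequence lengths.
      X = np.concatenate(signals)
      lengths = [len(s) for s in signals]

      model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=100)
      model.fit(X, lengths)

      # Most likely state sequence for the first signal; runs of the same
      # state can be read as candidate building blocks.
      print(model.predict(signals[0]))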
  • Estruch, S. B., Graham, S. A., Chinnappa, S. M., Deriziotis, P., & Fisher, S. E. (2016). Functional characterization of rare FOXP2 variants in neurodevelopmental disorder. Journal of Neurodevelopmental Disorders, 8: 44. doi:10.1186/s11689-016-9177-2.
  • Estruch, S. B., Graham, S. A., Deriziotis, P., & Fisher, S. E. (2016). The language-related transcription factor FOXP2 is post-translationally modified with small ubiquitin-like modifiers. Scientific Reports, 6: 20911. doi:10.1038/srep20911.

    Abstract

    Mutations affecting the transcription factor FOXP2 cause a rare form of severe speech and language disorder. Although it is clear that sufficient FOXP2 expression is crucial for normal brain development, little is known about how this transcription factor is regulated. To investigate post-translational mechanisms for FOXP2 regulation, we searched for protein interaction partners of FOXP2, and identified members of the PIAS family as novel FOXP2 interactors. PIAS proteins mediate post-translational modification of a range of target proteins with small ubiquitin-like modifiers (SUMOs). We found that FOXP2 can be modified with all three human SUMO proteins and that PIAS1 promotes this process. An aetiological FOXP2 mutation found in a family with speech and language disorder markedly reduced FOXP2 SUMOylation. We demonstrate that FOXP2 is SUMOylated at a single major site, which is conserved in all FOXP2 vertebrate orthologues and in the paralogues FOXP1 and FOXP4. Abolishing this site did not lead to detectable changes in FOXP2 subcellular localization, stability, dimerization or transcriptional repression in cellular assays, but the conservation of this site suggests a potential role for SUMOylation in regulating FOXP2 activity in vivo.

    Additional information

    srep20911-s1.pdf
  • Ho, Y. Y. W., Evans, D. M., Montgomery, G. W., Henders, A. K., Kemp, J. P., Timpson, N. J., St Pourcain, B., Heath, A. C., Madden, P. A. F., Loesch, D. Z., McNevin, D., Daniel, R., Davey-Smith, G., Martin, N. G., & Medland, S. E. (2016). Common genetic variants influence whorls in fingerprint patterns. Journal of Investigative Dermatology, 136(4), 859-862. doi:10.1016/j.jid.2015.10.062.
  • Evans, N., Levinson, S. C., Enfield, N. J., Gaby, A., & Majid, A. (2004). Reciprocal constructions and situation type. In A. Majid (Ed.), Field Manual Volume 9 (pp. 25-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506955.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Everaerd, D., Klumpers, F., Zwiers, M., Guadalupe, T., Franke, B., Van Oostrum, I., Schene, A., Fernandez, G., & Tendolkar, I. (2016). Childhood abuse and deprivation are associated with distinct sex-dependent differences in brain morphology. Neuropsychopharmacology, 41, 1716-1723. doi:10.1038/npp.2015.344.

    Abstract

    Childhood adversity (CA) has been associated with long-term structural brain alterations and an increased risk for psychiatric disorders. Evidence is emerging that subtypes of CA, varying in the dimensions of threat and deprivation, lead to distinct neural and behavioral outcomes. However, these specific associations have yet to be established without potential confounders such as psychopathology. Moreover, differences in neural development and psychopathology necessitate the exploration of sexual dimorphism. Young healthy adult subjects were selected based on history of CA from a large database to assess gray matter (GM) differences associated with specific subtypes of adversity. We compared voxel-based morphometry data of subjects reporting specific childhood exposure to abuse (n = 127) or deprivation (n = 126) and a similar sized group of controls (n = 129) without reported CA. Subjects were matched on age, gender, and educational level. Differences between CA subtypes were found in the fusiform gyrus and middle occipital gyrus, where subjects with a history of deprivation showed reduced GM compared with subjects with a history of abuse. An interaction between sex and CA subtype was found. Women showed less GM in the visual posterior precuneal region after both subtypes of CA than controls. Men had less GM in the postcentral gyrus after childhood deprivation compared with abuse. Our results suggest that even in a healthy population, CA subtypes are related to specific alterations in brain structure, which are modulated by sex. These findings may help understand neurodevelopmental consequences related to CA.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2016). Language evolution and climate: The case of desiccation and tone. Journal of Language Evolution, 1, 33-46. doi:10.1093/jole/lzv004.

    Abstract

    We make the case that, contra the standard assumption in linguistic theory, the sound systems of human languages are adapted to their environment. While not conclusive, this plausible case rests on several points discussed in this work: First, human behavior is generally adaptive and the assumption that this characteristic does not extend to linguistic structure is empirically unsubstantiated. Second, animal communication systems are well known to be adaptive within species across a variety of phyla and taxa. Third, research in laryngology demonstrates clearly that ambient desiccation impacts the performance of the human vocal cords. The latter point motivates a clear, testable hypothesis with respect to the synchronic global distribution of language types. Fourth, this hypothesis is supported in our own previous work, and here we discuss new approaches being developed to further explore the hypothesis. We conclude by suggesting that the time has come to more substantively examine the possibility that linguistic sound systems are adapted to their physical ecology.
  • Everett, C., Blasi, D., & Roberts, S. G. (2016). Response: Climate and language: has the discourse shifted? Journal of Language Evolution, 1(1), 83-87. doi:10.1093/jole/lzv013.

    Abstract

    We begin by thanking the respondents for their thoughtful comments and insightful leads. The overall impression we are left with by this exchange is one of progress, even if no consensus remains about the particular hypothesis we raise. To date, there has been a failure to seriously engage with the possibility that humans might adapt their communication to ecological factors. In these exchanges, we see signs of serious engagement with that possibility. Most respondents expressed agreement with the notion that our central premise—that language is ecologically adaptive—requires further exploration and may in fact be operative. We are pleased to see this shift in discourse, and to witness a heightening appreciation of possible ecological constraints on language evolution. It is that shift in discourse that represents progress in our view. Our hope is that future work will continue to explore these issues, paying careful attention to the fact that the human larynx is clearly sensitive to characteristics of ambient air. More generally, we think this exchange is indicative of the growing realization that inquiries into language development must consider potential external factors (see Dediu 2015)...

    Additional information

    AppendixResponseToHammarstrom.pdf
  • Faber, M., Mak, M., & Willems, R. M. (2020). Word skipping as an indicator of individual reading style during literary reading. Journal of Eye Movement Research, 13(3): 2. doi:10.16910/jemr.13.3.2.

    Abstract

    Decades of research have established that the content of language (e.g. lexical characteristics of words) predicts eye movements during reading. Here we investigate whether there exist individual differences in ‘stable’ eye movement patterns during narrative reading. We computed Euclidean distances from correlations between gaze duration time courses (word level) across 102 participants who each read three literary narratives in Dutch. The resulting distance matrices were compared between narratives using a Mantel test. The results show that correlations between the scaling matrices of different narratives are relatively weak (r ≤ .11) when missing data points are ignored. However, when including these data points as zero durations (i.e. skipped words), we found significant correlations between stories (r > .51). Word skipping was significantly positively associated with print exposure but not with self-rated attention and story-world absorption, suggesting that more experienced readers are more likely to skip words, and do so in a comparable fashion. We interpret this finding as suggesting that word skipping might be a stable individual eye movement pattern.
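
    The distance-matrix comparison mentioned above can be illustrated with a small permutation-based Mantel test. This is a generic sketch on random data under simplifying assumptions (Euclidean distances between participants' correlation profiles), not the authors' analysis code.

      # Compare participant-by-participant distance matrices from two narratives
      # with a permutation Mantel test. Random data replaces gaze-duration
      # time courses.
      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(1)
      n_participants, n_words = 30, 200

      def distance_matrix(gaze):
          # Correlate participants' time courses, then take Euclidean distances
          # between their correlation profiles.
          corr = np.corrcoef(gaze)                      # participants x participants
          return squareform(pdist(corr, metric="euclidean"))

      def mantel(d1, d2, n_perm=5000):
          # Correlate the upper triangles; build a null by permuting the
          # rows/columns of one matrix.
          iu = np.triu_indices_from(d1, k=1)
          observed = np.corrcoef(d1[iu], d2[iu])[0, 1]
          hits = 0
          for _ in range(n_perm):
              p = rng.permutation(d1.shape[0])
              permuted = d1[np.ix_(p, p)]
              if abs(np.corrcoef(permuted[iu], d2[iu])[0, 1]) >= abs(observed):
                  hits += 1
          return observed, (hits + 1) / (n_perm + 1)

      story_a = rng.standard_normal((n_participants, n_words))
      story_b = rng.standard_normal((n_participants, n_words))
      r, p = mantel(distance_matrix(story_a), distance_matrix(story_b))
      print(f"Mantel r = {r:.3f}, p = {p:.3f}")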
  • Fan, Q., Guo, X., Tideman, J. W. L., Williams, K. M., Yazar, S., Hosseini, S. M., Howe, L. D., St Pourcain, B., Evans, D. M., Timpson, N. J., McMahon, G., Hysi, P. G., Krapohl, E., Wang, Y. X., Jonas, J. B., Baird, P. N., Wang, J. J., Cheng, C. Y., Teo, Y. Y., Wong, T. Y. and 17 moreFan, Q., Guo, X., Tideman, J. W. L., Williams, K. M., Yazar, S., Hosseini, S. M., Howe, L. D., St Pourcain, B., Evans, D. M., Timpson, N. J., McMahon, G., Hysi, P. G., Krapohl, E., Wang, Y. X., Jonas, J. B., Baird, P. N., Wang, J. J., Cheng, C. Y., Teo, Y. Y., Wong, T. Y., Ding, X., Wojciechowski, R., Young, T. L., Parssinen, O., Oexle, K., Pfeiffer, N., Bailey-Wilson, J. E., Paterson, A. D., Klaver, C. C. W., Plomin, R., Hammond, C. J., Mackey, D. A., He, M. G., Saw, S. M., Williams, C., Guggenheim, J. A., & Cream, C. (2016). Childhood gene-environment interactions and age-dependent effects of genetic variants associated with refractive error and myopia: The CREAM Consortium. Scientific Reports, 6: 25853. doi:10.1038/srep25853.

    Abstract

    Myopia, currently at epidemic levels in East Asia, is a leading cause of untreatable visual impairment. Genome-wide association studies (GWAS) in adults have identified 39 loci associated with refractive error and myopia. Here, the age-of-onset of association between genetic variants at these 39 loci and refractive error was investigated in 5200 children assessed longitudinally across ages 7-15 years, along with gene-environment interactions involving the major environmental risk-factors, nearwork and time outdoors. Specific variants could be categorized as showing evidence of: (a) early-onset effects remaining stable through childhood, (b) early-onset effects that progressed further with increasing age, or (c) onset later in childhood (N = 10, 5 and 11 variants, respectively). A genetic risk score (GRS) for all 39 variants explained 0.6% (P = 6.6E-08) and 2.3% (P = 6.9E-21) of the variance in refractive error at ages 7 and 15, respectively, supporting increased effects from these genetic variants at older ages. Replication in multi-ancestry samples (combined N = 5599) yielded evidence of childhood onset for 6 of 12 variants present in both Asians and Europeans. There was no indication that variant or GRS effects were altered depending on time outdoors; however, 5 variants showed nominal evidence of interactions with nearwork (top variant, rs7829127 in ZMAT4; P = 6.3E-04).

    Additional information

    srep25853-s1.pdf
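
    For readers unfamiliar with the genetic risk score (GRS) mentioned in the abstract above, the calculation is essentially a weighted sum of risk-allele dosages, with variance explained taken as the squared correlation between the score and the phenotype. The snippet below is a sketch on simulated data with made-up effect sizes, not the CREAM analysis itself.

      # Weighted genetic risk score and variance explained, on simulated data.
      import numpy as np

      rng = np.random.default_rng(2)
      n_individuals, n_variants = 5000, 39

      dosages = rng.integers(0, 3, size=(n_individuals, n_variants)).astype(float)
      effect_sizes = rng.normal(0.0, 0.05, size=n_variants)   # per-allele effects

      # Simulated phenotype: genetic contribution plus environmental noise.
      phenotype = dosages @ effect_sizes + rng.normal(0.0, 1.0, size=n_individuals)

      # GRS = weighted sum of risk-allele dosages.
      grs = dosages @ effect_sizes

      r = np.corrcoef(grs, phenotype)[0, 1]
      print(f"Variance explained by the GRS: {r**2:.2%}")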
  • Fan, Q., Verhoeven, V. J., Wojciechowski, R., Barathi, V. A., Hysi, P. G., Guggenheim, J. A., Höhn, R., Vitart, V., Khawaja, A. P., Yamashiro, K., Hosseini, S. M., Lehtimäki, T., Lu, Y., Haller, T., Xie, J., Delcourt, C., Pirastu, M., Wedenoja, J., Gharahkhani, P., Venturini, C. and 83 moreFan, Q., Verhoeven, V. J., Wojciechowski, R., Barathi, V. A., Hysi, P. G., Guggenheim, J. A., Höhn, R., Vitart, V., Khawaja, A. P., Yamashiro, K., Hosseini, S. M., Lehtimäki, T., Lu, Y., Haller, T., Xie, J., Delcourt, C., Pirastu, M., Wedenoja, J., Gharahkhani, P., Venturini, C., Miyake, M., Hewitt, A. W., Guo, X., Mazur, J., Huffman, J. E., Williams, K. M., Polasek, O., Campbell, H., Rudan, I., Vatavuk, Z., Wilson, J. F., Joshi, P. K., McMahon, G., St Pourcain, B., Evans, D. M., Simpson, C. L., Schwantes-An, T.-H., Igo, R. P., Mirshahi, A., Cougnard-Gregoire, A., Bellenguez, C., Blettner, M., Raitakari, O., Kähönen, M., Seppälä, I., Zeller, T., Meitinger, T., Ried, J. S., Gieger, C., Portas, L., Van Leeuwen, E. M., Amin, N., Uitterlinden, A. G., Rivadeneira, F., Hofman, A., Vingerling, J. R., Wang, Y. X., Wang, X., Boh, E.-T.-H., Ikram, M. K., Sabanayagam, C., Gupta, P., Tan, V., Zhou, L., Ho, C. E., Lim, W., Beuerman, R. W., Siantar, R., Tai, E.-S., Vithana, E., Mihailov, E., Khor, C.-C., Hayward, C., Luben, R. N., Foster, P. J., Klein, B. E., Klein, R., Wong, H.-S., Mitchell, P., Metspalu, A., Aung, T., Young, T. L., He, M., Pärssinen, O., Van Duijn, C. M., Wang, J. J., Williams, C., Jonas, J. B., Teo, Y.-Y., Mackey, D. A., Oexle, K., Yoshimura, N., Paterson, A. D., Pfeiffer, N., Wong, T.-Y., Baird, P. N., Stambolian, D., Bailey-Wilson, J. E., Cheng, C.-Y., Hammond, C. J., Klaver, C. C., Saw, S.-M., & Consortium for Refractive Error and Myopia (CREAM) (2016). Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error. Nature Communications, 7: 11008. doi:10.1038/ncomms11008.

    Abstract

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci (AREG, GABRR1 and PDE10A) also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

    Additional information

    Fan_etal_2016sup.pdf
  • Favier, S. (2020). Individual differences in syntactic knowledge and processing: Exploring the role of literacy experience. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Fazekas, J., Jessop, A., Pine, J., & Rowland, C. F. (2020). Do children learn from their prediction mistakes? A registered report evaluating error-based theories of language acquisition. Royal Society Open Science, 7(11): 180877. doi:10.1098/rsos.180877.

    Abstract

    Error-based theories of language acquisition suggest that children, like adults, continuously make and evaluate predictions in order to reach an adult-like state of language use. However, while these theories have become extremely influential, their central claim, that unpredictable input leads to higher rates of lasting change in linguistic representations, has scarcely been tested. We designed a prime surprisal-based intervention study to assess this claim. As predicted, both 5- to 6-year-old children (n=72) and adults (n=72) showed a pre- to post-test shift towards producing the dative syntactic structure they were exposed to in surprising sentences. The effect was significant in both age groups together, and in the child group separately when participants with ceiling performance in the pre-test were excluded. Secondary predictions were not upheld: we found no verb-based learning effects, and there was only reliable evidence for immediate prime surprisal effects in the adult, but not in the child group. To our knowledge this is the first published study demonstrating enhanced learning rates for the same syntactic structure when it appeared in surprising as opposed to predictable contexts, thus providing crucial support for error-based theories of language acquisition.
  • Fedorenko, E., Morgan, A., Murray, E., Cardinaux, A., Mei, C., Tager-Flusberg, H., Fisher, S. E., & Kanwisher, N. (2016). A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2. European Journal of Human Genetics, 24(2), 302-306. doi:10.1038/ejhg.2015.149.

    Abstract

    Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, is unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders.
  • Fernandez-Vest, M. M. J., & Van Valin Jr., R. D. (Eds.). (2016). Information structure and spoken language in a cross-linguistics perspective. Berlin: Mouton de Gruyter.
  • Ferraro, S., Nigri, A., D'incerti, L., Rosazza, C., Sattin, D., Sebastiano, D. R., Visani, E., Duran, D., Marotta, G., De Michelis, G., Catricalà, E., Kotz, S. A., Verga, L., Leonardi, M., Cappa, S. F., & Bruzzone, M. G. (2020). Preservation of language processing and auditory performance in patients with disorders of consciousness: a multimodal assessment. Frontiers in Neurology, 11: 526465. doi:10.3389/fneur.2020.526465.

    Abstract

    The impact of language impairment on the clinical assessment of patients suffering from disorders of consciousness (DOC) is unknown or underestimated, and may mask the presence of conscious behavior. In a group of DOC patients (n = 11; time post-injury range: 5-252 months), we investigated the main neural functional and structural underpinnings of linguistic processing, and their relationship with the behavioral measures of the auditory function, using the Coma Recovery Scale-Revised (CRS-R). We assessed the integrity of the brainstem auditory pathways, of the left superior temporal gyrus and arcuate fasciculus, the neural activity elicited by passive listening of an auditory language task, and the mean hemispheric glucose metabolism. Our results support the hypothesis of a relationship between the level of preservation of the investigated structures/functions and the CRS-R auditory subscale scores. Moreover, our findings indicate that patients in minimally conscious state minus (MCS-): 1) when presenting the ‘auditory startle’ (at the CRS-R auditory subscale) might be aphasic in the receptive domain, being severely impaired in the core language structures/functions; 2) when presenting the ‘localization to sound’ might retain language processing, being almost intact or intact in the core language structures/functions. Despite the small group of investigated patients, our findings provide a grounding of the clinical measures of the CRS-R auditory subscale in the integrity of the underlying auditory structures/functions. Future studies are needed to confirm our results, which might have important consequences for clinical practice.
  • Ferreri, L., & Verga, L. (2016). Benefits of music on verbal learning and memory: How and when does it work? Music Perception, 34(2), 167-182. doi:10.1525/mp.2016.34.2.167.

    Abstract

    A long-standing debate in cognitive neurosciences concerns the effect of music on verbal learning and memory. Research in this field has largely provided conflicting results in both clinical as well as non-clinical populations. Although several studies have shown a positive effect of music on the encoding and retrieval of verbal stimuli, music has also been suggested to hinder mnemonic performance by dividing attention. In an attempt to explain this conflict, we review the most relevant literature on the effects of music on verbal learning and memory. Furthermore, we specify several mechanisms through which music may modulate these cognitive functions. We suggest that the extent to which music boosts these cognitive functions relies on experimental factors, such as the relative complexity of musical and verbal stimuli employed. These factors should be carefully considered in further studies, in order to reliably establish how and when music boosts verbal memory and learning. The answers to these questions are not only crucial for our knowledge of how music influences cognitive and brain functions, but may have important clinical implications. Considering the increasing number of approaches using music as a therapeutic tool, the importance of understanding exactly how music works can no longer be underestimated.
  • Filippi, P. (2016). Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Frontiers in Psychology, 7: 1393. doi:10.3389/fpsyg.2016.01393.

    Abstract

    Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody – and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/91.html.

    Abstract

    The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the ability to process meaningful prosodic modulations in the emerging linguistic utterances.
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Newen, A., Güntürkün, O., & de Boer, B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/90.html.

    Abstract

    Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g. body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntarily biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources, within a Stroop interference task (i.e. a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we adopted synonyms of “happy” and “sad” spoken in happy and sad prosody, while a happy or sad face was displayed. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task and less accurately when the two non-focused channels were expressing an emotion that was incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent with verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere in emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice, which seems to be dominant over verbal content, is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010). This hypothesis fits with quantitative data suggesting that prosody has a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within visual and auditory domains, providing new insights for models of language evolution. Further work examining how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real life interactions.
  • Filippi, P. (2014). Linguistic animals: understanding language through a comparative approach. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Crnish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 74-81). doi:10.1142/9789814603638_0082.

    Abstract

    With the aim of clarifying the definition of humans as “linguistic animals”, in the present paper I functionally distinguish three types of language competences: i) language as a general biological tool for communication, ii) “perceptual syntax”, iii) propositional language. Following this terminological distinction, I review pivotal findings on animals' communication systems, which constitute useful evidence for the investigation of the nature of three core components of humans' faculty of language: semantics, syntax, and theory of mind. In fact, although the capacity to process and share utterances with an open-ended structure is uniquely human, some isolated components of our linguistic competence are shared with nonhuman animals. Therefore, as I argue in the present paper, the investigation of animals' communicative competence provides crucial insights into the range of cognitive constraints underlying humans' ability for language, enabling at the same time the analysis of its phylogenetic path as well as of the selective pressures that have led to its emergence.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). The effect of pitch enhancement on spoken language acquisition. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Crnish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 437-438). doi:10.1142/9789814603638_0082.

    Abstract

    The aim of this study is to investigate the word-learning phenomenon using a new model that integrates three processes: a) extracting a word out of a continuous sound sequence, b) inducing referential meanings, c) mapping a word onto its intended referent, with the possibility of extending the acquired word to potentially infinite sets of objects of the same semantic category, and to not-previously-heard utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. In order to examine the multilayered word-learning task, we integrate these two strands of investigation into a single approach. We conducted the study on adults and included six different experimental conditions, each including specific perceptual manipulations of the signal. In condition 1, the only cue to word-meaning mapping was the co-occurrence between words and referents (“statistical cue”). This cue was present in all the conditions. In condition 2, we added infant-directed-speech (IDS) typical pitch enhancement as a marker of the target word and of the statistical cue. In condition 3 we placed IDS typical pitch enhancement on random words of the utterances, i.e. inconsistently matching the statistical cue. In conditions 4, 5 and 6 we manipulated respectively duration, a non-prosodic acoustic cue and a visual cue as markers of the target word and of the statistical cue. Systematic comparisons of learning performance in condition 1 with the other conditions revealed that the word-learning process is facilitated only when pitch prominence consistently marks the target word and the statistical cue…
  • Filippi, P., Jadoul, Y., Ravignani, A., Thompson, B., & de Boer, B. (2016). Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages. Frontiers in Human Neuroscience, 10: 586. doi:10.3389/fnhum.2016.00586.

    Abstract

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389%2Ffpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
  • Fisher, S. E. (2016). A molecular genetic perspective on speech and language. In G. Hickok, & S. Small (Eds.), Neurobiology of Language (pp. 13-24). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00002-X.

    Abstract

    The rise of genomic technologies has yielded exciting new routes for studying the biological foundations of language. Researchers have begun to identify genes implicated in neurodevelopmental disorders that disrupt speech and language skills. This chapter illustrates how such work can provide powerful entry points into the critical neural pathways using FOXP2 as an example. Rare mutations of this gene cause problems with learning to sequence mouth movements during speech, accompanied by wide-ranging impairments in language production and comprehension. FOXP2 encodes a regulatory protein, a hub in a network of other genes, several of which have also been associated with language-related impairments. Versions of FOXP2 are found in similar form in many vertebrate species; indeed, studies of animals and birds suggest conserved roles in the development and plasticity of certain sets of neural circuits. Thus, the contributions of this gene to human speech and language involve modifications of evolutionarily ancient functions.
  • Fitz, H. (2014). Computermodelle für Spracherwerb und Sprachproduktion. Forschungsbericht 2014 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2014. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/7850678/Psycholinguistik_JB_2014?c=8236817.

    Abstract

    Relative clauses are a syntactic device to create complex sentences and they make language structurally productive. Despite a considerable number of experimental studies, it is still largely unclear how children learn relative clauses and how these are processed in the language system. Researchers at the MPI for Psycholinguistics used a computational learning model to gain novel insights into these issues. The model explains the differential development of relative clauses in English as well as cross-linguistic differences.
  • Fitz, H., Uhlmann, M., Van den Broek, D., Duarte, R., Hagoort, P., & Petersson, K. M. (2020). Neuronal spike-rate adaptation supports working memory in language processing. Proceedings of the National Academy of Sciences of the United States of America, 117(34), 20881-20889. doi:10.1073/pnas.2000222117.

    Abstract

    Language processing involves the ability to store and integrate pieces of information in working memory over short periods of time. According to the dominant view, information is maintained through sustained, elevated neural activity. Other work has argued that short-term synaptic facilitation can serve as a substrate of memory. Here, we propose an account where memory is supported by intrinsic plasticity that downregulates neuronal firing rates. Single neuron responses are dependent on experience and we show through simulations that these adaptive changes in excitability provide memory on timescales ranging from milliseconds to seconds. On this account, spiking activity writes information into coupled dynamic variables that control adaptation and move at slower timescales than the membrane potential. From these variables, information is continuously read back into the active membrane state for processing. This neuronal memory mechanism does not rely on persistent activity, excitatory feedback, or synaptic plasticity for storage. Instead, information is maintained in adaptive conductances that reduce firing rates and can be accessed directly without cued retrieval. Memory span is systematically related to both the time constant of adaptation and baseline levels of neuronal excitability. Interference effects within memory arise when adaptation is long-lasting. We demonstrate that this mechanism is sensitive to context and serial order which makes it suitable for temporal integration in sequence processing within the language domain. We also show that it enables the binding of linguistic features over time within dynamic memory registers. This work provides a step towards a computational neurobiology of language.
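
    To make the mechanism described in the abstract above concrete, the sketch below simulates a single leaky integrate-and-fire neuron with a slowly decaying adaptation variable: each spike increments the variable, which suppresses later firing and, because it decays over hundreds of milliseconds, still carries a trace of earlier input long after the input has stopped. All parameter values are illustrative assumptions, not those of the published model.

      # Leaky integrate-and-fire neuron with spike-rate adaptation. The slow
      # adaptation variable 'a' retains a trace of recent spiking, illustrating
      # how adaptation can serve as a short-term memory.
      import numpy as np

      dt = 1.0                       # time step (ms)
      tau_v, tau_a = 20.0, 500.0     # membrane and adaptation time constants (ms)
      v_thresh, v_reset = 1.0, 0.0
      delta_a, g_a = 0.2, 0.5        # adaptation increment per spike, and strength

      T = 2000                       # total simulation time (ms)
      drive = np.zeros(T)
      drive[200:600] = 1.5           # transient input pulse early in the trial

      v, a = 0.0, 0.0
      a_trace = np.zeros(T)
      for t in range(T):
          v += dt / tau_v * (-v + drive[t] - g_a * a)   # leaky integration
          a += dt / tau_a * (-a)                        # slow decay of adaptation
          if v >= v_thresh:                             # spike: reset and adapt
              v = v_reset
              a += delta_a
          a_trace[t] = a

      # Long after the pulse ends, 'a' still reflects how strongly the neuron
      # was driven, i.e. it holds a decaying memory of the input.
      print("adaptation at 600 ms:", round(a_trace[600], 3))
      print("adaptation at 1500 ms:", round(a_trace[1500], 3))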
  • FitzPatrick, I., & Indefrey, P. (2016). Accessing Conceptual Representations for Speaking [Editorial]. Frontiers in Psychology, 7: 1216. doi:10.3389/fpsyg.2016.01216.

    Abstract

    Systematic investigations into the role of semantics in the speech production process have remained elusive. This special issue aims at moving forward toward a more detailed account of how precisely conceptual information is used to access the lexicon in speaking and what corresponding format of conceptual representations needs to be assumed. The studies presented in this volume investigated effects of conceptual processing on different processing stages of language production, including sentence formulation, lemma selection, and word form access.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
  • Flecken, M., & Van Bergen, G. (2020). Can the English stand the bottle like the Dutch? Effects of relational categories on object perception. Cognitive Neuropsychology, 37(5-6), 271-287. doi:10.1080/02643294.2019.1607272.

    Abstract

    Does language influence how we perceive the world? This study examines how linguistic encoding of relational information by means of verbs implicitly affects visual processing, by measuring perceptual judgements behaviourally, and visual perception and attention in EEG. Verbal systems can vary cross-linguistically: Dutch uses posture verbs to describe inanimate object configurations (the bottle stands/lies on the table). In English, however, such use of posture verbs is rare (the bottle is on the table). Using this test case, we ask (1) whether previously attested language-perception interactions extend to more complex domains, and (2) whether differences in linguistic usage probabilities affect perception. We report three nonverbal experiments in which Dutch and English participants performed a picture-matching task. Prime and target pictures contained object configurations (e.g., a bottle on a table); in the critical condition, prime and target showed a mismatch in object position (standing/lying). In both language groups, we found similar responses, suggesting that probabilistic differences in linguistic encoding of relational information do not affect perception.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Fleur, D. S., Flecken, M., Rommers, J., & Nieuwland, M. S. (2020). Definitely saw it coming? The dual nature of the pre-nominal prediction effect. Cognition, 204: 104335. doi:10.1016/j.cognition.2020.104335.

    Abstract

    In well-known demonstrations of lexical prediction during language comprehension, pre-nominal articles that mismatch a likely upcoming noun's gender elicit different neural activity than matching articles. However, theories differ on what this pre-nominal prediction effect means and on what is being predicted. Does it reflect mismatch with a predicted article, or ‘merely’ revision of the noun prediction? We contrasted the ‘article prediction mismatch’ hypothesis and the ‘noun prediction revision’ hypothesis in two ERP experiments on Dutch mini-story comprehension, with pre-registered data collection and analyses. We capitalized on the Dutch gender system, which marks gender on definite articles (‘de/het’) but not on indefinite articles (‘een’). If articles themselves are predicted, mismatching gender should have little effect when readers expected an indefinite article without gender marking. Participants read contexts that strongly suggested either a definite or indefinite noun phrase as its best continuation, followed by a definite noun phrase with the expected noun or an unexpected, different gender noun phrase (‘het boek/de roman’, the book/the novel). Experiment 1 (N = 48) showed a pre-nominal prediction effect, but evidence for the article prediction mismatch hypothesis was inconclusive. Informed by exploratory analyses and power analyses, direct replication Experiment 2 (N = 80) yielded evidence for article prediction mismatch at a newly pre-registered occipital region-of-interest. However, at frontal and posterior channels, unexpectedly definite articles also elicited a gender-mismatch effect, and this support for the noun prediction revision hypothesis was further strengthened by exploratory analyses: ERPs elicited by gender-mismatching articles correlated with incurred constraint towards a new noun (next-word entropy), and N400s for initially unpredictable nouns decreased when articles made them more predictable. By demonstrating its dual nature, our results reconcile two prevalent explanations of the pre-nominal prediction effect.
  • Floyd, S. (2014). 'We’ as social categorization in Cha’palaa: A language of Ecuador. In T.-S. Pavlidou (Ed.), Constructing collectivity: 'We' across languages and contexts (pp. 135-158). Amsterdam: Benjamins.

    Abstract

    This chapter connects the grammar of the first person collective pronoun in the Cha’palaa language of Ecuador with its use in interaction for collective reference and social category membership attribution, addressing the problem posed by the fact that non-singular pronouns do not have distributional semantics (“speakers”) but are rather associational (“speaker and relevant associates”). It advocates a cross-disciplinary approach that jointly considers elements of linguistic form, situated usages of those forms in instances of interaction, and the broader ethnographic context of those instances. Focusing on large-scale and relatively stable categories such as racial and ethnic groups, it argues that looking at how speakers categorize themselves and others in the speech situation by using pronouns provides empirical data on the status of macro-social categories for members of a society.

  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2016). [Review of the book Fluent Selves: Autobiography, Person, and History in Lowland South America ed. by Suzanne Oakdale and Magnus Course]. Journal of Linguistic Anthropology, 26(1), 110-111. doi:10.1111/jola.12112.
