Publications

  • Gullberg, M., & Indefrey, P. (Eds.). (2010). The earliest stages of language learning [Special Issue]. Language Learning, 60(Supplement s2).
  • Gullberg, M., & Narasimhan, B. (2010). What gestures reveal about the development of semantic distinctions in Dutch children's placement verbs. Cognitive Linguistics, 21(2), 239-262. doi:10.1515/COGL.2010.009.

    Abstract

    Placement verbs describe everyday events like putting a toy in a box. Dutch uses two semi-obligatory caused posture verbs (leggen ‘lay’ and zetten ‘set/stand’) to distinguish between events based on whether the located object is placed horizontally or vertically. Although prevalent in the input, these verbs cause Dutch children difficulties even at age five (Narasimhan & Gullberg, submitted). Children overextend leggen to all placement events and underextend use of zetten. This study examines what gestures can reveal about Dutch three- and five-year-olds’ semantic representations of such verbs. The results show that children gesture differently from adults in this domain. Three-year-olds express only the path of the caused motion, whereas five-year-olds, like adults, also incorporate the located object. Crucially, gesture patterns are tied to verb use: those children who over-use leggen 'lay' for all placement events only gesture about path. Conversely, children who use the two verbs differentially for horizontal and vertical placement also incorporate objects in gestures like adults. We argue that children's gestures reflect their current knowledge of verb semantics, and indicate a developmental transition from a system with a single semantic component – (caused) movement – to an (adult-like) focus on two semantic components – (caused) movement-and-object.
  • Guo, Y., Martin, R. C., Hamilton, C., Van Dyke, J., & Tan, Y. (2010). Neural basis of semantic and syntactic interference resolution in sentence comprehension. Procedia - Social and Behavioral Sciences, 6, 88-89. doi:10.1016/j.sbspro.2010.08.045.
  • Gur, C., & Sumer, B. (2022). Learning to introduce referents in narration is resilient to the effects of late sign language exposure. Sign Language & Linguistics, 25(2), 205-234. doi:10.1075/sll.21004.gur.

    Abstract

    The present study investigates the effects of late sign language exposure on narrative development in Turkish Sign Language (TİD) by focusing on the introductions of main characters and the linguistic strategies used in these introductions. We study these domains by comparing narrations produced by native and late signers in TİD. The results of our study reveal that late sign language exposure does not hinder the acquisition of linguistic devices to introduce main characters in narrations. Thus, their acquisition seems to be resilient to the effects of late language exposure. Our study further suggests that a two-year exposure to sign language facilitates the acquisition of these skills in signing children even in the case of late language exposure, thus providing further support for the importance of sign language exposure to develop linguistic skills for signing children.
  • Gussenhoven, C., Lu, Y.-A., Lee-Kim, S.-I., Liu, C., Rahmani, H., Riad, T., & Zora, H. (2022). The sequence recall task and lexicality of tone: Exploring tone “deafness”. Frontiers in Psychology, 13: 902569. doi:10.3389/fpsyg.2022.902569.

    Abstract

    Many perception and processing effects of the lexical status of tone have been found in behavioral, psycholinguistic, and neuroscientific research, often pitting varieties of tonal Chinese against non-tonal Germanic languages. While the linguistic and cognitive evidence for lexical tone is therefore beyond dispute, the word prosodic systems of many languages continue to escape the categorizations of typologists. One controversy concerns the existence of a typological class of “pitch accent languages,” another the underlying phonological nature of surface tone contrasts, which in some cases have been claimed to be metrical rather than tonal. We address the question whether the Sequence Recall Task (SRT), which has been shown to discriminate between languages with and without word stress, can distinguish languages with and without lexical tone. Using participants from non-tonal Indonesian, semi-tonal Swedish, and two varieties of tonal Mandarin, we ran SRTs with monosyllabic tonal contrasts to test the hypothesis that high performance in a tonal SRT indicates the lexical status of tone. An additional question concerned the extent to which accuracy scores depended on phonological and phonetic properties of a language’s tone system, like its complexity, the existence of an experimental contrast in a language’s phonology, and the phonetic salience of a contrast. The results suggest that a tonal SRT is not likely to discriminate between tonal and non-tonal languages within a typologically varied group, because of the effects of specific properties of their tone systems. Future research should therefore address the first hypothesis with participants from otherwise similar tonal and non-tonal varieties of the same language, where results from a tonal SRT may make a useful contribution to the typological debate on word prosody.

    Additional information

    also published as book chapter (2023)
  • Haagen, T., Dona, L., Bosscha, S., Zamith, B., Koetschruyter, R., & Wijnholds, G. (2022). Noun Phrase and Verb Phrase Ellipsis in Dutch: Identifying Subject-Verb Dependencies with BERTje. Computational Linguistics in the Netherlands Journal, 12, 49-63.

    Abstract

    Previous research has set out to quantify the syntactic capacity of BERTje (the Dutch equivalent of BERT) in the context of phenomena such as control verb nesting and verb raising in Dutch. Another complex language phenomenon is ellipsis, where a constituent is omitted from a sentence and can be recovered using context. Like verb raising and control verb nesting, ellipsis is suitable for evaluating BERTje’s linguistic capacity since it requires the processing of syntactic and lexical cues to recover the elided phrases. This work outlines an approach to identify subject-verb dependencies in Dutch sentences with verb phrase and noun phrase ellipsis using BERTje. The results inform us about BERTje’s capability to capture syntactic information, and about its ability to capture ellipsis in particular. Understanding more about how computational models process ellipsis, and how this processing can be improved, is crucial for boosting the performance of language models, as natural language contains many instances of ellipsis. Using training data from Lassy, converted to contextualized embeddings using BERTje, a probe model is trained to identify subject-verb dependencies. The model is tested on sentences generated using a Context Free Grammar (CFG), which is designed to generate sentences containing ellipsis. These sentences are also converted to contextualized representations using BERTje. Results show that BERTje’s syntactic abilities are lacking, as evidenced by accuracy drops compared to baseline measures.

  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen [The electrophysiology of language: What brain potentials tell us about the human language faculty]. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter [The speaker as sprinter]. Psychologie, 17, 48-49.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie [The coming century without psychology]. Psychologie Magazine, 18, 35-36.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations, N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4: 416. doi:10.3389/fpsyg.2013.00416.

    Abstract

    A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension of the model beyond the classical core regions for language is shown. Attentional networks as well as networks for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content. It is shown that this requires the dynamic interaction between multiple brain regions.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk [Brain and language in research and practice]. Neuropraxis, 6, 204-205.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of German words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O)butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning.
    The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca’s area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hagoort, P., & Meyer, A. S. (2013). What belongs together goes together: the speaker-hearer perspective. A commentary on MacDonald's PDC account. Frontiers in Psychology, 4: 228. doi:10.3389/fpsyg.2013.00228.

    Abstract

    First paragraph:
    MacDonald (2013) proposes that distributional properties of language and processing biases in language comprehension can to a large extent be attributed to consequences of the language production process. In essence, the account is derived from the principle of least effort that was formulated by Zipf, among others (Zipf, 1949; Levelt, 2013). However, in Zipf's view the outcome of the least effort principle was a compromise between least effort for the speaker and least effort for the listener, whereas MacDonald puts most of the burden on the production process.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hall, S., Rumney, L., Holler, J., & Kidd, E. (2013). Associations among play, gesture and early spoken language acquisition. First Language, 33, 294-312. doi:10.1177/0142723713487618.

    Abstract

    The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18–31 months. The children completed two tasks: (i) a structured measure of pretend (or ‘symbolic’) play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture. Additionally, their productive spoken language knowledge was measured via parental report. The results indicated that symbolic play is positively associated with children’s gesture use, which in turn is positively associated with spoken language knowledge over and above the influence of age. The tripartite relationship between gesture, play and language development is discussed with reference to current developmental theory.
  • Hammarström, H. (2010). A full-scale test of the language farming dispersal hypothesis. Diachronica, 27(2), 197-213. doi:10.1075/dia.27.2.02ham.

    Abstract

    One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from ALL attested language families. For each family, I use the number of member languages as a measure of cardinal size, member language coordinates to measure geospatial size and ethnographic evidence to assess subsistence status. This data shows that, although agricultural families tend to be larger in cardinal size, their size is hardly due to the simple presence of farming. If farming were responsible for language family expansions, we would expect a greater east-west geospatial spread of large families than is actually observed. The data, however, is compatible with weaker versions of the farming dispersal hypothesis as well as with models where large families acquire farming because of their size, rather than the other way around.
  • Hammarström, H. (2010). The status of the least documented language families in the world. Language Documentation and Conservation, 4, 177-212. Retrieved from http://hdl.handle.net/10125/4478.

    Abstract

    This paper aims to list all known language families that are not yet extinct and all of whose member languages are very poorly documented, i.e., less than a sketch grammar’s worth of data has been collected. It explains what constitutes a valid family, what amount and kinds of documentary data are sufficient, when a language is considered extinct, and more. It is hoped that the survey will be useful in setting priorities for documentation fieldwork, in particular for those documentation efforts whose underlying goal is to understand linguistic diversity.
  • Hanique, I., Aalders, E., & Ernestus, M. (2013). How robust are exemplar effects in word comprehension? The Mental Lexicon, 8, 269-294. doi:10.1075/ml.8.3.01han.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Hanique, I., Ernestus, M., & Schuppler, B. (2013). Informal speech processes can be categorical in nature, even if they affect many different words. Journal of the Acoustical Society of America, 133, 1644-1655. doi:10.1121/1.4790352.

    Abstract

    This paper investigates the nature of reduction phenomena in informal speech. It addresses the question whether reduction processes that affect many word types, but only if they occur in connected informal speech, may be categorical in nature. The focus is on reduction of schwa in the prefixes and on word-final /t/ in Dutch past participles. More than 2000 tokens of past participles from the Ernestus Corpus of Spontaneous Dutch and the Spoken Dutch Corpus (both from the interview and read speech component) were transcribed automatically. The results demonstrate that the presence and duration of /t/ are affected by approximately the same phonetic variables, indicating that the absence of /t/ is the extreme result of shortening, and thus results from a gradient reduction process. Also for schwa, the data show that mainly phonetic variables influence its reduction, but its presence is affected by different and more variables than its duration, which suggests that the absence of schwa may result from gradient as well as categorical processes. These conclusions are supported by the distributions of the segments’ durations. These findings provide evidence that reduction phenomena which affect many words in informal conversations may also result from categorical reduction processes.
  • Hanulikova, A., & Hamann, S. (2010). Illustrations of Slovak IPA. Journal of the International Phonetic Association, 40(3), 373-378. doi:10.1017/S0025100310000162.

    Abstract

    Slovak (sometimes also called Slovakian) is an Indo-European language belonging to the West-Slavic branch, and is most closely related to Czech. Slovak is spoken as a native language by 4.6 million speakers in Slovakia (that is by roughly 85% of the population), and by over two million Slovaks living abroad, most of them in the USA, the Czech Republic, Hungary, Canada and Great Britain (Office for Slovaks Living Abroad 2009).
  • Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.

    Abstract

    The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
  • Haun, D. B. M., Van Leeuwen, E. J. C., & Edelson, M. G. (2013). Majority influence in children and other animals. Developmental Cognitive Neuroscience, 3, 61-71. doi:10.1016/j.dcn.2012.09.003.

    Abstract

    We here review existing evidence for majority influences in children under the age of ten years and comparable studies with animals ranging from fish to apes. Throughout the review, we structure the discussion surrounding majority influences by differentiating the behaviour of individuals in the presence of a majority and the underlying mechanisms and motivations. Most of the relevant research to date in both developmental psychology and comparative psychology has focused on the behavioural outcomes, where a multitude of mechanisms could be at play. We further propose that interpreting cross-species differences in behavioural patterns is difficult without considering the psychology of the individual. Some attempts at this have been made both in developmental psychology and comparative psychology. We propose that physiological measures should be used to subsidize behavioural studies in an attempt to understand the composition of mechanisms and motivations underlying majority influence. We synthesize the relevant evidence on human brain function in order to provide a framework for future investigation in this area. In addition to streamlining future research efforts, we aim to create a conceptual platform for productive exchanges across the related disciplines of developmental and comparative psychology.
  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2010). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Review article]. Trends in Cognitive Sciences, 14, 552-560. doi:10.1016/j.tics.2010.09.006.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Heesen, R., Fröhlich, M., Sievers, C., Woensdregt, M., & Dingemanse, M. (2022). Coordinating social action: A primer for the cross-species investigation of communicative repair. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210110. doi:10.1098/rstb.2021.0110.

    Abstract

    Human joint action is inherently cooperative, manifested in the collaborative efforts of participants to minimize communicative trouble through interactive repair. Although interactive repair requires sophisticated cognitive abilities, it can be dissected into basic building blocks shared with non-human animal species. A review of the primate literature shows that interactionally contingent signal sequences are at least common among species of nonhuman great apes, suggesting a gradual evolution of repair. To pioneer a cross-species assessment of repair, this paper aims at (i) identifying necessary precursors of human interactive repair; (ii) proposing a coding framework for its comparative study in humans and non-human species; and (iii) using this framework to analyse examples of interactions of humans (adults/children) and non-human great apes. We hope this paper will serve as a primer for cross-species comparisons of communicative breakdowns and how they are repaired.
  • Heid, I. M., Henneman, P., Hicks, A., Coassin, S., Winkler, T., Aulchenko, Y. S., Fuchsberger, C., Song, K., Hivert, M.-F., Waterworth, D. M., Timpson, N. J., Richards, J. B., Perry, J. R. B., Tanaka, T., Amin, N., Kollerits, B., Pichler, I., Oostra, B. A., Thorand, B., Frants, R. R., Illig, T., Dupuis, J., Glaser, B., Spector, T., Guralnik, J., Egan, J. M., Florez, J. C., Evans, D. M., Soranzo, N., Bandinelli, S., Carlson, O. D., Frayling, T. M., Burling, K., Smith, G. D., Mooser, V., Ferrucci, L., Meigs, J. B., Vollenweider, P., Dijk, K. W. v., Pramstaller, P., Kronenberg, F., & van Duijn, C. M. (2010). Clear detection of ADIPOQ locus as the major gene for plasma adiponectin: Results of genome-wide association analyses including 4659 European individuals. Atherosclerosis, 208(2), 412-420. doi:10.1016/j.atherosclerosis.2009.11.035.

    Abstract

    OBJECTIVE: Plasma adiponectin is strongly associated with various components of metabolic syndrome, type 2 diabetes and cardiovascular outcomes. Concentrations are highly heritable and differ between men and women. We therefore aimed to investigate the genetics of plasma adiponectin in men and women. METHODS: We combined genome-wide association scans of three population-based studies including 4659 persons. For the replication stage in 13795 subjects, we selected the 20 top signals of the combined analysis, as well as the 10 top signals with p-values less than 1.0 × 10⁻⁴ for each of the men- and women-specific analyses. We further selected 73 SNPs that were consistently associated with metabolic syndrome parameters in previous genome-wide association studies to check for their association with plasma adiponectin. RESULTS: The ADIPOQ locus showed genome-wide significant p-values in the combined (p = 4.3 × 10⁻²⁴) as well as in both women- and men-specific analyses (p = 8.7 × 10⁻¹⁷ and p = 2.5 × 10⁻¹¹, respectively). None of the other 39 top signal SNPs showed evidence for association in the replication analysis. None of 73 SNPs from metabolic syndrome loci exhibited association with plasma adiponectin (p > 0.01). CONCLUSIONS: We demonstrated the ADIPOQ gene as the only major gene for plasma adiponectin, which explains 6.7% of the phenotypic variance. We further found that neither this gene nor any of the metabolic syndrome loci explained the sex differences observed for plasma adiponectin. Larger studies are needed to identify more moderate genetic determinants of plasma adiponectin.
  • Heidlmayr, K., Moutier, S., Hemforth, B., Courtin, C., Tanzmeister, R., & Isel, F. (2013). Successive bilingualism and executive functions: The effect of second language use on inhibitory control in a behavioural Stroop Colour Word task. Bilingualism: Language and Cognition, 17(3), 630-645. doi:10.1017/S1366728913000539.

    Abstract

    Here we examined the role of bilingualism on cognitive inhibition using the Stroop Colour Word task. Our hypothesis was that the frequency of use of a second language (L2) in the daily life of successive bilingual individuals impacts the efficiency of their inhibitory control mechanism. Thirty-three highly proficient successive French–German bilinguals, living either in a French or in a German linguistic environment, performed a Stroop task on both French and German words. Moreover, 31 French monolingual individuals were also tested with French words. We showed that the bilingual advantage was (i) reinforced by the use of a third language, and (ii) modulated by the duration of immersion in a second language environment. This suggests that top–down inhibitory control is most involved at the beginning of immersion. Taken together, the present findings lend support to the psycholinguistic models of bilingual language processing that postulate that top–down active inhibition is involved in language control.
  • Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & De Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 119(32): e2201968119. doi:10.1073/pnas.2201968119.

    Abstract

    Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.

  • Heinemann, T. (2010). The question–response system of Danish. Journal of Pragmatics, 42, 2703-2725. doi:10.1016/j.pragma.2010.04.007.

    Abstract

    This paper provides an overview of the question–response system of Danish, based on a collection of 350 questions (and responses) collected from video recordings of naturally occurring face-to-face interactions between native speakers of Danish. The paper identifies the lexico-grammatical options for formulating questions, the range of social actions that can be implemented through questions and the relationship between questions and responses. It further describes features where Danish questions differ from a range of other languages in terms of, for instance, distribution and the relationship between question format and social action. For instance, Danish has a high frequency of interrogatively formatted questions and questions that are negatively formulated, when compared to languages that have the same grammatical options. In terms of action, Danish shows a higher number of questions that are used for making suggestions, offers and requests and does not use repetition as a way of answering a question as often as other languages.
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Heritage, J., Elliott, M. N., Stivers, T., Richardson, A., & Mangione-Smith, R. (2010). Reducing inappropriate antibiotics prescribing: The role of online commentary on physical examination findings. Patient Education and Counseling, 81, 119-125. doi:10.1016/j.pec.2009.12.005.

    Abstract

    Objective: This study investigates the relationship of ‘online commentary’ (contemporaneous physician comments about physical examination [PE] findings) with (i) parent questioning of the treatment recommendation and (ii) inappropriate antibiotic prescribing. Methods: A nested cross-sectional study of 522 encounters motivated by upper respiratory symptoms in 27 California pediatric practices (38 pediatricians). Physicians completed a post-visit survey regarding physical examination findings, diagnosis, treatment, and whether they perceived the parent as expecting an antibiotic. Taped encounters were coded for ‘problem’ online commentary (PE findings discussed as significant or clearly abnormal) and ‘no problem’ online commentary (PE findings discussed reassuringly as normal or insignificant). Results: Online commentary during the PE occurred in 73% of visits with viral diagnoses (n = 261). Compared to similar cases with ‘no problem’ online commentary, ‘problem’ comments were associated with a 13% greater probability of parents questioning a non-antibiotic treatment plan (95% CI: 0-26%, p = .05) and a 27% (95% CI: 2-52%, p < .05) greater probability of an inappropriate antibiotic prescription. Conclusion: With viral illnesses, problematic online comments are associated with more pediatrician-parent conflict over non-antibiotic treatment recommendations. This may increase inappropriate antibiotic prescribing. Practice implications: In viral cases, physicians should consider avoiding the use of problematic online commentary.
  • Hersh, T. A., Gero, S., Rendell, L., Cantor, M., Weilgart, L., Amano, M., Dawson, S. M., Slooten, E., Johnson, C. M., Kerr, I., Payne, R., Rogan, A., Antunes, R., Andrews, O., Ferguson, E. L., Hom-Weaver, C. A., Norris, T. F., Barkley, Y. M., Merkens, K. P., Oleson, E. M., Doniol-Valcroze, T., Pilkington, J. F., Gordon, J., Fernandes, M., Guerra, M., Hickmott, L., & Whitehead, H. (2022). Evidence from sperm whale clans of symbolic marking in non-human cultures. Proceedings of the National Academy of Sciences of the United States of America, 119(37): e2201692119. doi:10.1073/pnas.2201692119.

    Abstract

    Culture, a pillar of the remarkable ecological success of humans, is increasingly recognized as a powerful force structuring nonhuman animal populations. A key gap between these two types of culture is quantitative evidence of symbolic markers—seemingly arbitrary traits that function as reliable indicators of cultural group membership to conspecifics. Using acoustic data collected from 23 Pacific Ocean locations, we provide quantitative evidence that certain sperm whale acoustic signals exhibit spatial patterns consistent with a symbolic marker function. Culture segments sperm whale populations into behaviorally distinct clans, which are defined based on dialects of stereotyped click patterns (codas). We classified 23,429 codas into types using contaminated mixture models and hierarchically clustered coda repertoires into seven clans based on similarities in coda usage; then we evaluated whether coda usage varied with geographic distance within clans or with spatial overlap between clans. Similarities in within-clan usage of both “identity codas” (coda types diagnostic of clan identity) and “nonidentity codas” (coda types used by multiple clans) decrease as space between repertoire recording locations increases. However, between-clan similarity in identity, but not nonidentity, coda usage decreases as clan spatial overlap increases. This matches expectations if sympatry is related to a measurable pressure to diversify to make cultural divisions sharper, thereby providing evidence that identity codas function as symbolic markers of clan identity. Our study provides quantitative evidence of arbitrary traits, resembling human ethnic markers, conveying cultural identity outside of humans, and highlights remarkable similarities in the distributions of human ethnolinguistic groups and sperm whale clans.
  • Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.

    Abstract

    Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing, and enhances brain responses, as indexed by the blood-oxygenation level dependent signal (BOLD), to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.
  • Hickman, L. J., Keating, C. T., Ferrari, A., & Cook, J. L. (2022). Skin conductance as an index of alexithymic traits in the general population. Psychological Reports, 125(3), 1363-1379. doi:10.1177/00332941211005118.

    Abstract

    Alexithymia concerns a difficulty identifying and communicating one’s own emotions, and a tendency towards externally-oriented thinking. Recent work argues that such alexithymic traits are due to altered arousal response and poor subjective awareness of “objective” arousal responses. Although there are individual differences within the general population in identifying and describing emotions, extant research has focused on highly alexithymic individuals. Here we investigated whether mean arousal and concordance between subjective and objective arousal underpin individual differences in alexithymic traits in a general population sample. Participants rated subjective arousal responses to 60 images from the International Affective Picture System whilst their skin conductance was recorded. The Autism Quotient was employed to control for autistic traits in the general population. Analysis using linear models demonstrated that mean arousal significantly predicted Toronto Alexithymia Scale scores above and beyond autistic traits, but concordance scores did not. This indicates that, whilst objective arousal is a useful predictor in populations that are both above and below the cut-off values for alexithymia, concordance scores between objective and subjective arousal do not predict variation in alexithymic traits in the general population.
  • Hilbrink, E., Sakkalou, E., Ellis-Davies, K., Fowler, N., & Gattis, M. (2013). Selective and faithful imitation at 12 and 15 months. Developmental Science, 16(6), 828-840. doi:10.1111/desc.12070.

    Abstract

    Research on imitation in infancy has primarily focused on what and when infants imitate. More recently, however, the question why infants imitate has received renewed attention, partly motivated by the finding that infants sometimes selectively imitate the actions of others and sometimes faithfully imitate, or overimitate, the actions of others. The present study evaluates the hypothesis that this varying imitative behavior is related to infants' social traits. To do so, we assessed faithful and selective imitation longitudinally at 12 and 15 months, and extraversion at 15 months. At both ages, selective imitation was dependent on the causal structure of the act. From 12 to 15 months, selective imitation decreased while faithful imitation increased. Furthermore, infants high in extraversion were more faithful imitators than infants low in extraversion. These results demonstrate that the onset of faithful imitation is earlier than previously thought, but later than the onset of selective imitation. The observed relation between extraversion and faithful imitation supports the hypothesis that faithful imitation is driven by the social motivations of the infant. We call this relation the King Louie Effect: like the orangutan King Louie in The Jungle Book, infants imitate faithfully due to a growing interest in the interpersonal nature of interactions.
  • Hill, C. (2010). [Review of the book Discourse and Grammar in Australian Languages ed. by Ilana Mushin and Brett Baker]. Studies in Language, 34(1), 215-225. doi:10.1075/sl.34.1.12hil.
  • Hinds, D. A., McMahon, G., Kiefer, A. K., Do, C. B., Eriksson, N., Evans, D. M., St Pourcain, B., Ring, S. M., Mountain, J. L., Francke, U., Davey-Smith, G., Timpson, N. J., & Tung, J. Y. (2013). A genome-wide association meta-analysis of self-reported allergy identifies shared and allergy-specific susceptibility loci. Nature Genetics, 45(8), 907-911. doi:10.1038/ng.2686.

    Abstract

    Allergic disease is very common and carries substantial public-health burdens. We conducted a meta-analysis of genome-wide associations with self-reported cat, dust-mite and pollen allergies in 53,862 individuals. We used generalized estimating equations to model shared and allergy-specific genetic effects. We identified 16 shared susceptibility loci with association P < 5 × 10⁻⁸, including 8 loci previously associated with asthma, as well as 4p14 near TLR1, TLR6 and TLR10 (rs2101521, P = 5.3 × 10⁻²¹); 6p21.33 near HLA-C and MICA (rs9266772, P = 3.2 × 10⁻¹²); 5p13.1 near PTGER4 (rs7720838, P = 8.2 × 10⁻¹¹); 2q33.1 in PLCL1 (rs10497813, P = 6.1 × 10⁻¹⁰); 3q28 in LPP (rs9860547, P = 1.2 × 10⁻⁹); 20q13.2 in NFATC2 (rs6021270, P = 6.9 × 10⁻⁹); 4q27 in ADAD1 (rs17388568, P = 3.9 × 10⁻⁸); and 14q21.1 near FOXA1 and TTC6 (rs1998359, P = 4.8 × 10⁻⁸). We identified one locus with substantial evidence of differences in effects across allergies at 6p21.32 in the class II human leukocyte antigen (HLA) region (rs17533090, P = 1.7 × 10⁻¹²), which was strongly associated with cat allergy. Our study sheds new light on the shared etiology of immune and autoimmune disease.
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

  • Holler, J. (2022). Visual bodily signals as core devices for coordinating minds in interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210094. doi:10.1098/rstb.2021.0094.

    Abstract

    The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed—and survived—owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or its precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine.
  • Holler, J., Bavelas, J., Woods, J., Geiger, M., & Simons, L. (2022). Given-new effects on the duration of gestures and of words in face-to-face dialogue. Discourse Processes, 59(8), 619-645. doi:10.1080/0163853X.2022.2107859.

    Abstract

    The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of co-speech gestures. Here, two face-to-face dialogue experiments tested whether gesture duration also follows the given-new contract. In Experiment 1, four experimental sequences confirmed that when speakers repeated their gestures, they shortened the duration significantly. Experiment 2 replicated the effect with spontaneous gestures in a different task. This experiment also extended earlier results with words, confirming that speakers shortened their repeated words significantly in a multimodal dialogue setting, the basic form of language use. Because words and gestures were not necessarily redundant, these results offer another instance in which gestures and words independently serve pragmatic requirements of dialogue.
  • Holler, J., Turner, K., & Varcianna, T. (2013). It's on the tip of my fingers: Co-speech gestures during lexical retrieval in different social contexts. Language and Cognitive Processes, 28(10), 1509-1518. doi:10.1080/01690965.2012.698289.

    Abstract

    The Lexical Retrieval Hypothesis proposes that gestures function at the level of speech production, aiding in the retrieval of lexical items from the mental lexicon. However, empirical evidence for this account is mixed, and some critics argue that a more likely function of gestures during lexical retrieval is a communicative one. The present study was designed to test these predictions against each other by keeping lexical retrieval difficulty constant while varying social context. Participants' gestures were analysed during tip of the tongue experiences when communicating with a partner face-to-face (FTF), while being separated by a screen, or on their own by speaking into a voice recorder. The results show that participants in the FTF context produced significantly more representational gestures than participants in the solitary condition. This suggests that, even in the specific context of lexical retrieval difficulties, representational gestures appear to play predominantly a communicative role.

  • Hömke, P., Majid, A., & Boroditsky, L. (2013). Reversing the direction of time: Does the visibility of spatial representations of time shape temporal focus? Proceedings of the Master's Program Cognitive Neuroscience, 8(1), 40-54. Retrieved from http://www.ru.nl/master/cns/journal/archive/volume-8-issue-1/print-edition/.

    Abstract

    While people around the world mentally represent time in terms of space, there is substantial cross-cultural variability regarding which temporal constructs are mapped onto which parts in space. Do particular spatial layouts of time – as expressed through metaphors in language – shape temporal focus? We trained native English speakers to use spatiotemporal metaphors in a way such that the flow of time is reversed, representing the future behind the body (out of visible space) and the past ahead of the body (within visible space). In a task measuring perceived relevance of past events, people considered past events and present (or immediate past) events to be more relevant after using the reversed metaphors compared to a control group that used canonical metaphors spatializing the past behind and the future ahead of the body (Experiment 1). In a control measure in which temporal information was removed, this effect disappeared (Experiment 2). Taken together, these findings suggest that the degree to which people focus on the past may be shaped by the visibility of the past in spatiotemporal metaphors used in language.
  • Hoogman, M., Onnink, M., Coolen, R., Aarts, E., Kan, C., Arias Vasquez, A., Buitelaar, J., & Franke, B. (2013). The dopamine transporter haplotype and reward-related striatal responses in adult ADHD. European Neuropsychopharmacology, 23, 469-478. doi:10.1016/j.euroneuro.2012.05.011.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a highly heritable disorder and several genes increasing disease risk have been identified. The dopamine transporter gene, SLC6A3/DAT1, has been studied most extensively in ADHD research. Interestingly, a different haplotype of this gene (formed by genetic variants in the 3' untranslated region and intron 8) is associated with childhood ADHD (haplotype 10-6) and adult ADHD (haplotype 9-6). The expression of DAT1 is highest in striatal regions in the brain. This part of the brain is of interest in ADHD because its role in reward processing is altered in ADHD patients: ADHD patients display decreased striatal activation during reward processing. To better understand how the DAT1 gene exerts effects on ADHD, we studied the effect of this gene on reward-related brain functioning in the area of its highest expression in the brain, the striatum, using functional magnetic resonance imaging. In doing so, we tried to resolve inconsistencies observed in previous studies of healthy individuals and ADHD-affected children. In a sample of 87 adult ADHD patients and 77 healthy comparison subjects, we confirmed the association of the 9-6 haplotype with adult ADHD. Striatal hypoactivation during the reward anticipation phase of a monetary incentive delay task in ADHD patients was again shown, but no significant effects of DAT1 on striatal activity were found. Although the importance of the DAT1 haplotype as a risk factor for adult ADHD was again demonstrated in this study, the mechanism by which this gene increases disease risk remains largely unknown.

  • Hoogman, M., Van Rooij, D., Klein, M., Boedhoe, P., Ilioska, I., Li, T., Patel, Y., Postema, M., Zhang-James, Y., Anagnostou, E., Arango, C., Auzias, G., Banaschewski, T., Bau, C. H. D., Behrmann, M., Bellgrove, M. A., Brandeis, D., Brem, S., Busatto, G. F., Calderoni, S., Calvo, R., Castellanos, F. X., Coghill, D., Conzelmann, A., Daly, E., Deruelle, C., Dinstein, I., Durston, S., Ecker, C., Ehrlich, S., Epstein, J. N., Fair, D. A., Fitzgerald, J., Freitag, C. M., Frodl, T., Gallagher, L., Grevet, E. H., Haavik, J., Hoekstra, P. J., Janssen, J., Karkashadze, G., King, J. A., Konrad, K., Kuntsi, J., Lazaro, L., Lerch, J. P., Lesch, K.-P., Louza, M. R., Luna, B., Mattos, P., McGrath, J., Muratori, F., Murphy, C., Nigg, J. T., Oberwelland-Weiss, E., O'Gorman Tuura, R. L., O'Hearn, K., Oosterlaan, J., Parellada, M., Pauli, P., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Retico, A., Rosa, P. G. P., Rubia, K., Shaw, P., Silk, T. J., Tamm, L., Vilarroya, O., Walitza, S., Jahanshad, N., Faraone, S. V., Francks, C., Van den Heuvel, O. A., Paus, T., Thompson, P. M., Buitelaar, J. K., & Franke, B. (2022). Consortium neuroscience of attention deficit/hyperactivity disorder and autism spectrum disorder: The ENIGMA adventure. Human Brain Mapping, 43(1), 37-55. doi:10.1002/hbm.25029.

    Abstract

    Neuroimaging has been extensively used to study brain structure and function in individuals with attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) over the past decades. Two of the main shortcomings of the neuroimaging literature of these disorders are the small sample sizes employed and the heterogeneity of methods used. In 2013 and 2014, the ENIGMA-ADHD and ENIGMA-ASD working groups were founded, respectively, with a common goal to address these limitations. Here, we provide a narrative review of the thus far completed and still ongoing projects of these working groups. Due to an implicitly hierarchical psychiatric diagnostic classification system, the fields of ADHD and ASD have developed largely in isolation, despite the considerable overlap in the occurrence of the disorders. The collaboration between the ENIGMA-ADHD and -ASD working groups seeks to bring the neuroimaging efforts of the two disorders closer together. The outcomes of case–control studies of subcortical and cortical structures showed that subcortical volumes are similarly affected in ASD and ADHD, albeit with small effect sizes. Cortical analyses identified unique differences in each disorder, but also considerable overlap between the two, specifically in cortical thickness. Ongoing work is examining alternative research questions, such as brain laterality, prediction of case–control status, and anatomical heterogeneity. In brief, great strides have been made toward fulfilling the aims of the ENIGMA collaborations, while new ideas and follow-up analyses continue that include more imaging modalities (diffusion MRI and resting-state functional MRI), collaborations with other large databases, and samples with dual diagnoses.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.

    Abstract

    Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology.
  • Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.

    Abstract

    This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings.
  • Huettig, F., Audring, J., & Jackendoff, R. (2022). A parallel architecture perspective on pre-activation and prediction in language processing. Cognition, 224: 105050. doi:10.1016/j.cognition.2022.105050.

    Abstract

    A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on viewing prediction in terms of pre-activation. We describe what predictions are and how they are produced. Our basic premises are that (a) no prediction can be made without knowledge to support it; and (b) it is therefore necessary to characterize the precise form of that knowledge, as revealed by a suitable theory of linguistic representations. We describe the Parallel Architecture (PA: Jackendoff, 2002; Jackendoff and Audring, 2020), which makes explicit our commitments about linguistic representations, and we develop an account of processing based on these representations. Crucial to our account is that what have been traditionally treated as derivational rules of grammar are formalized by the PA as lexical items, encoded in the same format as words. We then present a theory of prediction in these terms: linguistic input activates lexical items whose beginning (or incipit) corresponds to the input encountered so far; and prediction amounts to pre-activation of the as yet unheard parts of those lexical items (the remainder). Thus the generation of predictions is a natural byproduct of processing linguistic representations. We conclude that the PA perspective on pre-activation provides a plausible account of prediction in language processing that bridges linguistic and psycholinguistic theorizing.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.

    Abstract

    In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.
  • Huizeling, E., Arana, S., Hagoort, P., & Schoffelen, J.-M. (2022). Lexical frequency and sentence context influence the brain’s response to single words. Neurobiology of Language, 3(1), 149-179. doi:10.1162/nol_a_00054.

    Abstract

    Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated as to whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left front-temporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index*frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150ms) and late stages of word processing, but interact during later stages of word processing (>150-250ms), thus helping to converge previous contradictory eye-tracking and electrophysiological literature. Current neuro-cognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
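
    To make the cross-validated model comparison described above concrete, the following sketch (in Python, with simulated data; not the authors' pipeline, and all variable names are placeholders) compares out-of-sample fit for an additive model with lexical frequency and word position against a model that adds their interaction:

        import numpy as np
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_score

        # Simulated stand-ins for log lexical frequency, ordinal word position,
        # and a single-channel brain response (additive by construction here).
        rng = np.random.default_rng(0)
        n = 1000
        frequency = rng.normal(size=n)
        position = rng.normal(size=n)
        signal = 0.5 * frequency + 0.3 * position + rng.normal(size=n)

        X_additive = np.column_stack([frequency, position])
        X_interactive = np.column_stack([frequency, position, frequency * position])

        for name, X in [("additive", X_additive), ("interactive", X_interactive)]:
            scores = cross_val_score(RidgeCV(), X, signal, cv=5, scoring="r2")
            print(name, "mean cross-validated R^2:", round(scores.mean(), 3))

    On this logic, an interaction term is retained only if it improves prediction of held-out data, which is the sense in which additive and interactive accounts are compared above.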
  • Huizeling, E., Peeters, D., & Hagoort, P. (2022). Prediction of upcoming speech under fluent and disfluent conditions: Eye tracking evidence from immersive virtual reality. Language, Cognition and Neuroscience, 37(4), 481-508. doi:10.1080/23273798.2021.1994621.

    Abstract

    Traditional experiments indicate that prediction is important for efficient speech processing. In three virtual reality visual world paradigm experiments, we tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2–3). Experiment 1 supports that listeners predict upcoming speech in naturalistic environments, with higher proportions of anticipatory target fixations in predictable compared to unpredictable trials. In Experiments 2–3, disfluencies reduced anticipatory fixations towards predicted referents, compared to conjunction (Experiment 2) and fluent (Experiment 3) sentences. Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb. Experiment 3 provided novel findings that fixations towards the speaker increase upon hearing a hesitation, supporting current theories of how hesitations influence sentence processing. Together, these findings unpack listeners’ use of visual (objects/speaker) and auditory (speech/disfluencies) information when predicting upcoming words.
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Indefrey, P., & Levelt, W. J. M. (1999). A meta-analysis of neuroimaging experiments on word production. Neuroimage, 7, 1028.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Indefrey, P. (1999). Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6), 1025. doi:10.1017/S0140525X99342229.

    Abstract

    Clahsen's characterization of nondefault inflection as based exclusively on lexical entries does not capture the full range of empirical data on German inflection. In the verb system differential effects of lexical frequency seem to be input-related rather than affecting morphological production. In the noun system, the generalization properties of -n and -e plurals exceed mere analogy-based productivity.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 x 10^-5 to 1.7 x 10^-3, common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Isbilen, E. S., Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2022). Statistically based chunking of nonadjacent dependencies. Journal of Experimental Psychology: General, 151(11), 2623-2640. doi:10.1037/xge0001207.

    Abstract

    How individuals learn complex regularities in the environment and generalize them to new instances is a key question in cognitive science. Although previous investigations have advocated the idea that learning and generalizing depend upon separate processes, the same basic learning mechanisms may account for both. In language learning experiments, these mechanisms have typically been studied in isolation of broader cognitive phenomena such as memory, perception, and attention. Here, we show how learning and generalization in language is embedded in these broader theories by testing learners on their ability to chunk nonadjacent dependencies—a key structure in language but a challenge to theories that posit learning through the memorization of structure. In two studies, adult participants were trained and tested on an artificial language containing nonadjacent syllable dependencies, using a novel chunking-based serial recall task involving verbal repetition of target sequences (formed from learned strings) and scrambled foils. Participants recalled significantly more syllables, bigrams, trigrams, and nonadjacent dependencies from sequences conforming to the language’s statistics (both learned and generalized sequences). They also encoded and generalized specific nonadjacent chunk information. These results suggest that participants chunk remote dependencies and rapidly generalize this information to novel structures. The results thus provide further support for learning-based approaches to language acquisition, and link statistical learning to broader cognitive mechanisms of memory.
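
    As a concrete illustration of the kind of material described above, the sketch below (Python) builds a toy artificial language with nonadjacent A_X_B dependencies and produces target sequences alongside scrambled foils. The syllables and frame pairings are invented for this example and are not the authors' stimuli:

        import random

        # Hypothetical A ... B frames and X fillers; each A predicts its B nonadjacently.
        FRAMES = [("pel", "rud"), ("vot", "jic"), ("dak", "tood")]
        MIDDLES = ["wadim", "kicey", "puser", "fengle"]

        def make_string(frame, middle):
            a, b = frame
            return f"{a} {middle} {b}"

        def make_target_sequence(n_items=3, rng=random):
            """A to-be-recalled sequence of strings, each respecting its A_X_B frame."""
            return [make_string(rng.choice(FRAMES), rng.choice(MIDDLES)) for _ in range(n_items)]

        def make_foil(sequence, rng=random):
            """A scrambled foil: same syllables, with the nonadjacent A-B pairings disrupted by shuffling."""
            syllables = " ".join(sequence).split()
            rng.shuffle(syllables)
            return " ".join(syllables)

        target = make_target_sequence()
        print("target:", " | ".join(target))
        print("foil:  ", make_foil(target))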
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E., & Newman, R. S. (2013). Identifying nonwords: Effects of lexical neighborhoods, phonotactic probability, and listener characteristics. Language and Speech, 56(4), 421-444. doi:10.1177/0023830912447914.

    Abstract

    Listeners find it relatively difficult to recognize words that are similar-sounding to other known words. In contrast, when asked to identify spoken nonwords, listeners perform better when the nonwords are similar to many words in their language. These effects of sound similarity have been assessed in multiple ways, and both sublexical (phonotactic probability) and lexical (neighborhood) effects have been reported, leading to models that incorporate multiple stages of processing. One prediction that can be derived from these models is that there may be differences among individuals in the size of these similarity effects as a function of working memory abilities. This study investigates how item-individual characteristics of nonwords (both phonotactic probability and neighborhood density) interact with listener-individual characteristics (such as cognitive abilities and hearing sensitivity) in the perceptual identification of nonwords. A set of nonwords was used in which neighborhood density and phonotactic probability were not correlated. In our data, neighborhood density affected identification more reliably than did phonotactic probability. The first study, with young adults, showed that higher neighborhood density particularly benefits nonword identification for those with poorer attention-switching control. This suggests that it may be easier to focus attention on a novel item if it activates and receives support from more similar-sounding neighbors. A similar study on nonword identification with older adults showed increased neighborhood density effects for those with poorer hearing, suggesting that activation of long-term linguistic knowledge is particularly important to back up auditory representations that are degraded as a result of hearing loss.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janssens, S. E. W., Sack, A. T., Ten Oever, S., & Graaf, T. A. (2022). Calibrating rhythmic stimulation parameters to individual electroencephalography markers: The consistency of individual alpha frequency in practical lab settings. European Journal of Neuroscience, 55(11/12), 3418-3437. doi:10.1111/ejn.15418.

    Abstract

    Rhythmic stimulation can be applied to modulate neuronal oscillations. Such ‘entrainment’ is optimized when stimulation frequency is individually calibrated based on magneto-/electroencephalography markers. It remains unknown how consistent such individual markers are across days/sessions, within a session, or across cognitive states, hemispheres and estimation methods, especially in a realistic, practical, lab setting. We here estimated individual alpha frequency (IAF) repeatedly from short electroencephalography (EEG) measurements at rest or during an attention task (cognitive state), using single parieto-occipital electrodes in 24 participants on 4 days (between-sessions), with multiple measurements over an hour on 1 day (within-session). First, we introduce an algorithm to automatically reject power spectra without a sufficiently clear peak to ensure unbiased IAF estimations. Then we estimated IAF via the traditional ‘maximum’ method and a ‘Gaussian fit’ method. IAF was reliable within- and between-sessions for both cognitive states and hemispheres, though task-IAF estimates tended to be more variable. Overall, the ‘Gaussian fit’ method was more reliable than the ‘maximum’ method. Furthermore, we evaluated how far from an approximated ‘true’ task-related IAF the selected ‘stimulation frequency’ was, when calibrating this frequency based on a short rest-EEG, a short task-EEG, or simply selecting 10 Hz for all participants. For the ‘maximum’ method, rest-EEG calibration was best, followed by task-EEG, and then 10 Hz. For the ‘Gaussian fit’ method, rest-EEG and task-EEG-based calibration were similarly accurate, and better than 10 Hz. These results lead to concrete recommendations about valid, and automated, estimation of individual oscillation markers in experimental and clinical settings.
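
    To make the two estimation methods named above concrete, here is an illustrative Python sketch (not the authors' pipeline, which additionally rejects spectra lacking a sufficiently clear peak); the 8-13 Hz search window and fit settings are assumptions for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def iaf_maximum(freqs, power, lo=8.0, hi=13.0):
            """'Maximum' method: frequency of the largest alpha-band power value."""
            band = (freqs >= lo) & (freqs <= hi)
            return freqs[band][np.argmax(power[band])]

        def iaf_gaussian_fit(freqs, power, lo=8.0, hi=13.0):
            """'Gaussian fit' method: centre of a Gaussian fitted to the alpha-band spectrum."""
            band = (freqs >= lo) & (freqs <= hi)
            f, p = freqs[band], power[band]

            def gauss(x, amp, mu, sigma, offset):
                return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

            p0 = [p.max() - p.min(), f[np.argmax(p)], 1.0, p.min()]
            (amp, mu, sigma, offset), _ = curve_fit(gauss, f, p, p0=p0, maxfev=5000)
            return mu

        # Example use on a 1-D parieto-occipital EEG trace sampled at 250 Hz:
        #   from scipy.signal import welch
        #   freqs, power = welch(eeg, fs=250, nperseg=500)
        #   print(iaf_maximum(freqs, power), iaf_gaussian_fit(freqs, power))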
  • Janssens, S. E., Ten Oever, S., Sack, A. T., & de Graaf, T. A. (2022). “Broadband Alpha Transcranial Alternating Current Stimulation”: Exploring a new biologically calibrated brain stimulation protocol. NeuroImage, 253: 119109. doi:10.1016/j.neuroimage.2022.119109.

    Abstract

    Transcranial alternating current stimulation (tACS) can be used to study causal contributions of oscillatory brain mechanisms to cognition and behavior. For instance, individual alpha frequency (IAF) tACS was reported to enhance alpha power and impact visuospatial attention performance. Unfortunately, such results have been inconsistent and difficult to replicate. In tACS, stimulation generally involves one frequency, sometimes individually calibrated to a peak value observed in an M/EEG power spectrum. Yet, the ‘peak’ actually observed in such power spectra often contains a broader range of frequencies, raising the question whether a biologically calibrated tACS protocol containing this fuller range of alpha-band frequencies might be more effective. Here, we introduce ‘Broadband-alpha-tACS’, a complex individually calibrated electrical stimulation protocol. We band-pass filtered left posterior resting-state EEG data around the IAF (+/- 2 Hz), and converted that time series into an electrical waveform for tACS stimulation of that same left posterior parietal cortex location. In other words, we stimulated a brain region with a ‘replay’ of its own alpha-band frequency content, based on spontaneous activity. Within-subjects (N=24), we compared to a sham tACS session the effects of broadband-alpha tACS, power-matched spectral inverse (‘alpha-removed’) control tACS, and individual alpha frequency tACS, on EEG alpha power and performance in an endogenous attention task previously reported to be affected by alpha tACS. Broadband-alpha-tACS significantly modulated attention task performance (i.e., reduced the rightward visuospatial attention bias in trials without distractors, and reduced attention benefits). Alpha-removed tACS also reduced the rightward visuospatial attention bias. IAF-tACS did not significantly modulate attention task performance compared to sham tACS, but also did not statistically significantly differ from broadband-alpha-tACS. This new broadband-alpha tACS approach seems promising, but should be further explored and validated in future studies.

    Additional information

    supplementary materials
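
    A minimal sketch of the 'replay' idea described in the entry above, assuming SciPy is available: band-pass a resting EEG trace around the individual alpha frequency (IAF) plus or minus 2 Hz and rescale it for use as a stimulation waveform. The filter order, zero-phase filtering, and peak-current scaling are illustrative choices, not the published protocol:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def broadband_alpha_waveform(eeg, fs, iaf, half_bw=2.0, peak_ma=1.0):
            """Band-pass eeg to [iaf - half_bw, iaf + half_bw] Hz and scale to peak_ma."""
            b, a = butter(4, [iaf - half_bw, iaf + half_bw], btype="bandpass", fs=fs)
            alpha = filtfilt(b, a, eeg)                     # zero-phase band-pass around IAF
            return alpha / np.max(np.abs(alpha)) * peak_ma  # waveform scaled to peak_ma (mA)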
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2022). The social basis of referential communication: Speakers construct physical reference based on listeners’ expected visual search. Psychological Review, 129, 1394-1413. doi:10.1037/rev0000345.

    Abstract

    A foundational assumption of human communication is that speakers should say as much as necessary, but no more. Yet, people routinely produce redundant adjectives and their propensity to do so varies cross-linguistically. Here, we propose a computational theory, whereby speakers create referential expressions designed to facilitate listeners’ reference resolution, as they process words in real time. We present a computational model of our account, the Incremental Collaborative Efficiency (ICE) model, which generates referential expressions by considering listeners’ real-time incremental processing and reference identification. We apply the ICE framework to physical reference, showing that speakers construct expressions designed to minimize listeners’ expected visual search effort during online language processing. Our model captures a number of known effects in the literature, including cross-linguistic differences in speakers’ propensity to over-specify. Moreover, the ICE model predicts graded acceptability judgments with quantitative accuracy, systematically outperforming an alternative, brevity-based model. Our findings suggest that physical reference production is best understood as driven by a collaborative goal to help the listener identify the intended referent, rather than by an egocentric effort to minimize utterance length.
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. Plos One, 5(9), e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was furthermore robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates rather than at longer ones. Visual speech information plays, therefore, a more important role early during the phoneme rather than later. The results of the study showed the complex interplay of information across modalities and time that is essential in determining the time course of audiovisual spoken-word recognition.
  • Jessop, A., & Chang, F. (2022). Thematic role tracking difficulties across multiple visual events influences role use in language production. Visual Cognition, 30(3), 151-173. doi:10.1080/13506285.2021.2013374.

    Abstract

    Language sometimes requires tracking the same participant in different thematic roles across multiple visual events (e.g., The girl that another girl pushed chased a third girl). To better understand how vision and language interact in role tracking, participants described videos of multiple randomly moving circles where two push events were presented. A circle might have the same role in both push events (e.g., agent) or different roles (e.g., agent of one push and patient of other push). The first three studies found higher production accuracy for the same role conditions compared to the different role conditions across different linguistic structure manipulations. The last three studies compared a featural account, where role information was associated with particular circles, or a relational account, where role information was encoded with particular push events. These studies found no interference between different roles, contrary to the predictions of the featural account. The foil was manipulated in these studies to increase the saliency of the second push and it was found that this changed the accuracy in describing the first push. The results suggest that language-related thematic role processing uses a relational representation that can encode multiple events.

    Additional information

    https://doi.org/10.17605/OSF.IO/PKXZH
  • Johnson, E. K., Lahey, M., Ernestus, M., & Cutler, A. (2013). A multimodal corpus of speech to infant and adult listeners. Journal of the Acoustical Society of America, 134, EL534-EL540. doi:10.1121/1.4828977.

    Abstract

    An audio and video corpus of speech addressed to 28 11-month-olds is described. The corpus allows comparisons between adult speech directed towards infants, familiar adults and unfamiliar adult addressees, as well as of caregivers’ word teaching strategies across word classes. Summary data show that infant-directed speech differed more from speech to unfamiliar than familiar adults; that word teaching strategies for nominals versus verbs and adjectives differed; that mothers mostly addressed infants with multi-word utterances; and that infants’ vocabulary size was unrelated to speech rate, but correlated positively with predominance of continuous caregiver speech (not of isolated words) in the input.
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
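
    The transitional-probability cue tested above can be made concrete with a short Python sketch; the four-word mini-language below (two disyllabic and two trisyllabic words) is invented for illustration and is not the authors' stimulus set:

        import random
        from collections import Counter

        WORDS = [["pa", "bi"], ["ku", "do"], ["ti", "la", "go"], ["mo", "nu", "fe"]]

        def make_stream(n_tokens=200, rng=random):
            """Concatenate randomly chosen words into a continuous syllable stream."""
            stream = []
            for _ in range(n_tokens):
                stream.extend(rng.choice(WORDS))
            return stream

        def transitional_probabilities(stream):
            """TP(x -> y) = count(x followed by y) / count(x followed by anything)."""
            pair_counts = Counter(zip(stream, stream[1:]))
            first_counts = Counter(stream[:-1])
            return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

        def segment(stream, tps, threshold=0.5):
            """Posit a word boundary wherever the TP to the next syllable dips below threshold."""
            words, current = [], [stream[0]]
            for x, y in zip(stream, stream[1:]):
                if tps[(x, y)] < threshold:
                    words.append(current)
                    current = []
                current.append(y)
            words.append(current)
            return words

        stream = make_stream()
        tps = transitional_probabilities(stream)
        print(segment(stream, tps)[:6])   # low between-word TPs mark most word boundaries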
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Jordan, F., & Huber, B. H. (2013). Introduction: Evolutionary processes in language and culture group. Cross-Cultural Research, 47(2), 91-101. doi:10.1177/1069397112471800.

    Abstract

    This special issue “Evolutionary Approaches to Cross-Cultural Anthropology” brings together scholars from the fields of behavioral ecology, evolutionary psychology, and cultural evolution whose cross-cultural work draws on evolutionary theory and methods. The papers here are a subset of those presented at a symposium we organized for the 2011 meeting of the Society for Cross-Cultural Research held in Charleston, South Carolina. Collectively, our authors show how an engagement with cultural variation has enriched evolutionary anthropology, and these papers showcase how cross-cultural research can benefit from the theoretical and methodological contributions of an evolutionary approach.
  • Julvez, J., Smith, G. D., Golding, J., Ring, S., St Pourcain, B., Gonzalez, J. R., & Grandjean, P. (2013). Prenatal methylmercury exposure and genetic predisposition to cognitive deficit at age 8 years. Epidemiology, 24(5), 643-650. doi:10.1097/EDE.0b013e31829d5c93.

    Abstract

    BACKGROUND: Cognitive consequences at school age associated with prenatal methylmercury (MeHg) exposure may need to take into account nutritional and sociodemographic cofactors as well as relevant genetic polymorphisms. METHODS: A subsample (n = 1,311) of the Avon Longitudinal Study of Parents and Children (Bristol, UK) was selected, and mercury (Hg) concentrations were measured in freeze-dried umbilical cord tissue as a measure of MeHg exposure. A total of 1135 children had available data on 247 single-nucleotide polymorphisms (SNPs) within relevant genes, as well as the Wechsler Intelligence Scale for Children Intelligence Quotient (IQ) scores at age 8 years. Multivariate regression models were used to assess the associations between MeHg exposure and IQ and to determine possible gene-environment interactions. RESULTS: Hg concentrations indicated low background exposures (mean = 26 ng/g, standard deviation = 13). Log10-transformed Hg was positively associated with IQ, which attenuated after adjustment for nutritional and sociodemographic cofactors. In stratified analyses, a reverse association was found in higher social class families (for performance IQ, P value for interaction = 0.0013) among whom there was a wider range of MeHg exposure. Among 40 SNPs showing nominally significant main effects, MeHg interactions were detected for rs662 (paraoxonase 1) and rs1042838 (progesterone receptor) (P < 0.05) and for rs3811647 (transferrin) and rs2049046 (brain-derived neurotrophic factor) (P < 0.10). CONCLUSIONS: In this population with a low level of MeHg exposure, there were only equivocal associations between MeHg exposure and adverse neuropsychological outcomes. Heterogeneities in several relevant genes suggest possible genetic predisposition to MeHg neurotoxicity in a substantial proportion of the population. Future studies need to address this possibility.
  • Kajihara, T., Verdonschot, R. G., Sparks, J., & Stewart, L. (2013). Action-perception coupling in violinists. Frontiers in Human Neuroscience, 7: 349. doi:10.3389/fnhum.2013.00349.

    Abstract

    The current study investigates auditory-motor coupling in musically trained participants using a Stroop-type task that required the execution of simple finger sequences according to aurally presented number sequences (e.g., "2," "4," "5," "3," "1"). Digital remastering was used to manipulate the pitch contour of the number sequences such that they were either congruent or incongruent with respect to the resulting action sequence. Conservatoire-level violinists showed a strong effect of congruency manipulation (increased response time for incongruent vs. congruent trials), in comparison to a control group of non-musicians. In Experiment 2, this paradigm was used to determine whether pedagogical background would influence this effect in a group of young violinists. Suzuki-trained violinists differed significantly from those with no musical background, while traditionally-trained violinists did not. The findings extend previous research in this area by demonstrating that obligatory audio-motor coupling is directly related to a musician's expertise on their instrument of study and is influenced by pedagogy.
  • Kaltwasser, L., Ries, S., Sommer, W., Knight, R., & Willems, R. M. (2013). Independence of valence and reward in emotional word processing: Electrophysiological evidence. Frontiers in Psychology, 4: 168. doi:10.3389/fpsyg.2013.00168.

    Abstract

    Both emotion and reward are primary modulators of cognition: Emotional word content enhances word processing, and reward expectancy similarly amplifies cognitive processing from the perceptual up to the executive control level. Here, we investigate how these primary regulators of cognition interact. We studied how the anticipation of gain or loss modulates the neural time course (event-related potentials, ERPs) related to processing of emotional words. Participants performed a semantic categorization task on emotional and neutral words, which were preceded by a cue indicating that performance could lead to monetary gain or loss. Emotion-related and reward-related effects occurred in different time windows, did not interact statistically, and showed different topographies. This speaks for an independence of reward expectancy and the processing of emotional word content. Therefore, privileged processing given to emotionally valenced words seems immune to short-term modulation of reward. Models of language comprehension should be able to incorporate effects of reward and emotion on language processing, and the current study argues for an architecture in which reward and emotion do not share a common neurobiological mechanism
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language. Advance online publication. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Karaminis, T., Hintz, F., & Scharenborg, O. (2022). The presence of background noise extends the competitor space in native and non-native spoken-word recognition: Insights from computational modeling. Cognitive Science, 46(2): e13110. doi:10.1111/cogs.13110.

    Abstract

    Oral communication often takes place in noisy environments, which challenge spoken-word recognition. Previous research has suggested that the presence of background noise extends the number of candidate words competing with the target word for recognition and that this extension affects the time course and accuracy of spoken-word recognition. In this study, we further investigated the temporal dynamics of competition processes in the presence of background noise, and how these vary in listeners with different language proficiency (i.e., native and non-native) using computational modeling. We developed ListenIN (Listen-In-Noise), a neural-network model based on an autoencoder architecture, which learns to map phonological forms onto meanings in two languages and simulates native and non-native spoken-word comprehension. Simulation A established that ListenIN captures the effects of noise on accuracy rates and the number of unique misperception errors of native and non-native listeners in an offline spoken-word identification task (Scharenborg et al., 2018). Simulation B showed that ListenIN captures the effects of noise in online task settings and accounts for looking preferences of native (Hintz & Scharenborg, 2016) and non-native (new data collected for this study) listeners in a visual-world paradigm. We also examined the model’s activation states during online spoken-word recognition. These analyses demonstrated that the presence of background noise increases the number of competitor words which are engaged in phonological competition and that this happens in similar ways intra- and interlinguistically and in native and non-native listening. Taken together, our results support accounts positing a ‘many-additional-competitors scenario’ for the effects of noise on spoken-word recognition.
  • Karsan, Ç., Özdemir, R. S., Bulut, T., & Hanoğlu, L. (2022). The effects of single-session cathodal and bihemispheric tDCS on fluency in stuttering. Journal of Neurolinguistics, 63: 101064. doi:10.1016/j.jneuroling.2022.101064.

    Abstract

    Developmental stuttering is a fluency disorder that adversely affects many aspects of a person's life. Recent transcranial direct current stimulation (tDCS) studies have shown promise to improve fluency in people who stutter. To date, bihemispheric tDCS has not been investigated in this population. In the present study, we aimed to investigate the effects of single-session bihemispheric and unihemispheric cathodal tDCS on fluency in adults who stutter. We predicted that bihemispheric tDCS with anodal stimulation to the left IFG and cathodal stimulation to the right IFG would improve fluency better than the sham and cathodal tDCS to the right IFG. Seventeen adults who stutter completed this single-blind, crossover, sham-controlled tDCS experiment. All participants received 20 min of tDCS alongside metronome-timed speech during intervention sessions. Three tDCS interventions were administered: bihemispheric tDCS with anodal stimulation to the left IFG and cathodal stimulation to the right IFG, unihemispheric tDCS with cathodal stimulation to the right IFG, and sham stimulation. Speech fluency during reading and conversation was assessed before, immediately after, and one week after each intervention session. There was no significant fluency improvement in conversation for any tDCS interventions. Reading fluency improved following both bihemispheric and cathodal tDCS interventions. tDCS montages were not significantly different in their effects on fluency.

  • Kartushina, N., Mani, N., Aktan-Erciyes, A., Alaslani, K., Aldrich, N. J., Almohammadi, A., Alroqi, H., Anderson, L. M., Andonova, E., Aussems, S., Babineau, M., Barokova, M., Bergmann, C., Cashon, C., Custode, S., De Carvalho, A., Dimitrova, N., Dynak, A., Farah, R., Fennell, C., Fiévet, A.-C., Frank, M. C., Gavrilova, M., Gendler-Shalev, H., Gibson, S. P., Golway, K., Gonzalez-Gomez, N., Haman, E., Hannon, E., Havron, N., Hay, J., Hendriks, C., Horowitz-Kraus, T., Kalashnikova, M., Kanero, J., Keller, C., Krajewski, G., Laing, C., Lundwall, R. A., Łuniewska, M., Mieszkowska, K., Munoz, L., Nave, K., Olesen, N., Perry, L., Rowland, C. F., Santos Oliveira, D., Shinskey, J., Veraksa, A., Vincent, K., Zivan, M., & Mayor, J. (2022). COVID-19 first lockdown as a window into language acquisition: Associations between caregiver-child activities and vocabulary gains. Language Development Research, 2, 1-36. doi:10.34842/abym-xv34.

    Abstract

    The COVID-19 pandemic, and the resulting closure of daycare centers worldwide, led to unprecedented changes in children’s learning environments. This period of increased time at home with caregivers, with limited access to external sources (e.g., daycares) provides a unique opportunity to examine the associations between caregiver-child activities and children’s language development. The vocabularies of 1742 children aged 8-36 months across 13 countries and 12 languages were evaluated at the beginning and end of the first lockdown period in their respective countries (from March to September 2020). Children who had less passive screen exposure and whose caregivers read more to them showed larger gains in vocabulary development during lockdown, after controlling for SES and other caregiver-child activities. Children also gained more words than expected (based on normative data) during lockdown; either caregivers were more aware of their child’s development or vocabulary development benefited from intense caregiver-child interaction during lockdown.
  • Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.

    Abstract

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
  • Kemmerer, S. K., Sack, A. T., de Graaf, T. A., Ten Oever, S., De Weerd, P., & Schuhmann, T. (2022). Frequency-specific transcranial neuromodulation of alpha power alters visuospatial attention performance. Brain Research, 1782: 147834. doi:10.1016/j.brainres.2022.147834.

    Abstract

    Transcranial alternating current stimulation (tACS) at 10 Hz has been shown to modulate spatial attention. However, the frequency-specificity and the oscillatory changes underlying this tACS effect are still largely unclear. Here, we applied high-definition tACS at individual alpha frequency (IAF), two control frequencies (IAF+/-2Hz) and sham to the left posterior parietal cortex and measured its effects on visuospatial attention performance and offline alpha power (using electroencephalography, EEG). We revealed a behavioural and electrophysiological stimulation effect relative to sham for IAF but not control frequency stimulation conditions: there was a leftward lateralization of alpha power for IAF tACS, which differed from sham for the first out of three minutes following tACS. At a high value of this EEG effect (moderation effect), we observed a leftward attention bias relative to sham. This effect was task-specific, i.e., it could be found in an endogenous attention but not in a detection task. Only in the IAF tACS condition, we also found a correlation between the magnitude of the alpha lateralization and the attentional bias effect. Our results support a functional role of alpha oscillations in visuospatial attention and the potential of tACS to modulate it. The frequency-specificity of the effects suggests that an individualization of the stimulation frequency is necessary in heterogeneous target groups with a large variation in IAF.

    Additional information

    supplementary data
  • Kemmerer, S. K., De Graaf, T. A., Ten Oever, S., Erkens, M., De Weerd, P., & Sack, A. T. (2022). Parietal but not temporoparietal alpha-tACS modulates endogenous visuospatial attention. Cortex, 154, 149-166. doi:10.1016/j.cortex.2022.01.021.

    Abstract

    Visuospatial attention can either be voluntarily directed (endogenous/top-down attention) or automatically triggered (exogenous/bottom-up attention). Recent research showed that dorsal parietal transcranial alternating current stimulation (tACS) at alpha frequency modulates the spatial attentional bias in an endogenous but not in an exogenous visuospatial attention task. Yet, the reason for this task-specificity remains unexplored. Here, we tested whether this dissociation relates to the proposed differential role of the dorsal attention network (DAN) and ventral attention network (VAN) in endogenous and exogenous attention processes respectively. To that aim, we targeted the left and right dorsal parietal node of the DAN, as well as the left and right ventral temporoparietal node of the VAN using tACS at the individual alpha frequency. Every participant completed all four stimulation conditions and a sham condition in five separate sessions. During tACS, we assessed the behavioral visuospatial attention bias via an endogenous and exogenous visuospatial attention task. Additionally, we measured offline alpha power immediately before and after tACS using electroencephalography (EEG). The behavioral data revealed an effect of tACS on the endogenous but not exogenous attention bias, with a greater leftward bias during (sham-corrected) left than right hemispheric stimulation. In line with our hypothesis, this effect was brain area-specific, i.e., present for dorsal parietal but not ventral temporoparietal tACS. However, contrary to our expectations, there was no effect of ventral temporoparietal tACS on the exogenous visuospatial attention bias. Hence, no double dissociation between the two targeted attention networks. There was no effect of either tACS condition on offline alpha power. Our behavioral data reveal that dorsal parietal but not ventral temporoparietal alpha oscillations steer endogenous visuospatial attention. This brain-area specific tACS effect matches the previously proposed dissociation between the DAN and VAN and, by showing that the spatial attention bias effect does not generalize to any lateral posterior tACS montage, renders lateral cutaneous and retinal effects for the spatial attention bias in the dorsal parietal condition unlikely. Yet the absence of tACS effects on the exogenous attention task suggests that ventral temporoparietal alpha oscillations are not functionally relevant for exogenous visuospatial attention. We discuss the potential implications of this finding in the context of an emerging theory on the role of the ventral temporoparietal node.

    Additional information

    supplementary material
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G. (1999). Fiets en (centri)fuge. Onze Taal, 68, 88.
  • Kempen, G. (1985). Psychologie 2000. Toegepaste psychologie in de informatiemaatschappij. Computers in de psychologie, 13-21.
  • Kendall-Taylor, N., Erard, M., & Haydon, A. (2013). The Use of Metaphor as a Science Communication Tool: Air Traffic Control for Your Brain. Journal of Applied Communication Research, 41(4), 412-433. doi:10.1080/00909882.2013.836678.

    Abstract

    Science is currently under-utilized as a tool for effective policy and program design. A key part of this research-to-practice gap lies in the ineffectiveness of current models of science translation. Drawing on theory and methods from anthropology and cognitive linguistics, this study explores the role of cultural models and metaphor in the practice of science communication and translation. Qualitative interviews and group sessions, along with quantitative framing experiments, were used to design and test the effectiveness of a set of explanatory metaphors in translating the science of executive function. Developmental and cognitive scientists typically define executive function as a multi-dimensional set of related abilities that include working memory, inhibitory control, and cognitive flexibility. The study finds one metaphor in particular—the brain's air traffic control system—to be effective in bridging gaps between expert and public understandings on this issue and in so doing improving the accessibility of scientific information to members of the public as they reason about public policy issues. Findings suggest both a specific tool that can be used in efforts to translate the science of executive function and a theory and methodology that can be employed to design and test metaphors as communication devices on other science and social issues.

  • Kidd, E., & Garcia, R. (2022). How diverse is child language acquisition research? First Language, 42(6), 703-735. doi:10.1177/01427237211066405.

    Abstract

    A comprehensive theory of child language acquisition requires an evidential base that is representative of the typological diversity present in the world’s 7000 or so languages. However, languages are dying at an alarming rate, and the next 50 years represents the last chance we have to document acquisition in many of them. Here, we take stock of the last 45 years of research published in the four main child language acquisition journals: Journal of Child Language, First Language, Language Acquisition and Language Learning and Development. We coded each article for several variables, including (1) participant group (mono vs multilingual), (2) language(s), (3) topic(s) and (4) country of author affiliation, from each journal’s inception until the end of 2020. We found that we have at least one article published on around 103 languages, representing approximately 1.5% of the world’s languages. The distribution of articles was highly skewed towards English and other well-studied Indo-European languages, with the majority of non-Indo-European languages having just one paper. A majority of the papers focused on studies of monolingual children, although papers did not always explicitly report participant group status. The distribution of topics across language categories was more even. The number of articles published on non-Indo-European languages from countries outside of North America and Europe is increasing; however, this increase is driven by research conducted in relatively wealthy countries. Overall, the vast majority of the research was produced in the Global North. We conclude that, despite a proud history of crosslinguistic research, the goals of the discipline need to be recalibrated before we can lay claim to a truly representative account of child language acquisition.

    Additional information

    Read author's response to comments
  • Kidd, E., & Garcia, R. (2022). Where to from here? Increasing language coverage while building a more diverse discipline. First Language, 42(6), 837-851. doi:10.1177/01427237221121190.

    Abstract

    Our original target article highlighted some significant shortcomings in the current state of child language research: a large skew in our evidential base towards English and a handful of other Indo-European languages that partly has its origins in a lack of researcher diversity. In this article, we respond to the 21 commentaries on our original article. The commentaries highlighted both the importance of attention to typological features of languages and the environments and contexts in which languages are acquired, with many commentators providing concrete suggestions on how we address the data skew. In this response, we synthesise the main themes of the commentaries and make suggestions for how the field can move towards both improving data coverage and opening up to traditionally under-represented researchers.

    Additional information

    Link to original target article
  • Kidd, E., Lieven, E., & Tomasello, M. (2010). Lexical frequency and exemplar-based learning effects in language acquisition: evidence from sentential complements. Language Sciences, 32(1), 132-142. doi:10.1016/j.langsci.2009.05.002.

    Abstract

    Usage-based approaches to language acquisition argue that children acquire the grammar of their target language using general-cognitive learning principles. The current paper reports on an experiment that tested a central assumption of the usage-based approach: argument structure patterns are connected to high frequency verbs that facilitate acquisition. Sixty children (N = 60) aged 4 and 6 years participated in a sentence recall/lexical priming experiment that manipulated the frequency with which the target verbs occurred in the finite sentential complement construction in English. The results showed that the children performed better on sentences that contained high frequency verbs. Furthermore, the children’s performance suggested that their knowledge of finite sentential complements relies most heavily on one particular verb – think, supporting arguments made by Goldberg [Goldberg, A.E., 2006. Constructions at Work: The Nature of Generalization in Language. Oxford University Press, Oxford], who argued that skewed input facilitates language learning.
  • Kidd, E., Rogers, P., & Rogers, C. (2010). The personality correlates of adults who had imaginary companions in childhood. Psychological Reports, 107(1), 163-172. doi:10.2466/02.04.10.pr0.107.4.163-172.

    Abstract

    Two studies showed that adults who reported having an imaginary companion as a child differed from adults who did not on certain personality dimensions. The first yielded a higher mean on the Gough Creative Personality Scale for the group who had imaginary companions. Study 2 showed that such adults scored higher on the Achievement and Absorption subscales of Tellegen's Multidimensional Personality Questionnaire. The results suggest that some differences reported in the developmental literature may be observed in adults.
