Publications

Displaying 301 - 400 of 991
  • Guest, O., Caso, A., & Cooper, R. P. (2020). On simulating neural damage in connectionist networks. Computational Brain & Behavior, 3, 289-321. doi:10.1007/s42113-020-00081-z.

    Abstract

    A key strength of connectionist modelling is its ability to simulate both intact cognition and the behavioural effects of neural damage. We survey the literature, showing that models have been damaged in a variety of ways, e.g. by removing connections, by adding noise to connection weights, by scaling weights, by removing units and by adding noise to unit activations. While these different implementations of damage have often been assumed to be behaviourally equivalent, some theorists have made aetiological claims that rest on nonequivalence. They suggest that related deficits with different aetiologies might be accounted for by different forms of damage within a single model. We present two case studies that explore the effects of different forms of damage in two influential connectionist models, each of which has been applied to explain neuropsychological deficits. Our results indicate that the effect of simulated damage can indeed be sensitive to the way in which damage is implemented, particularly when the environment comprises subsets of items that differ in their statistical properties, but such effects are sensitive to relatively subtle aspects of the model’s training environment. We argue that, as a consequence, substantial methodological care is required if aetiological claims about simulated neural damage are to be justified, and conclude more generally that implementation assumptions, including those concerning simulated damage, must be fully explored when evaluating models of neurological deficits, both to avoid over-extending the explanatory power of specific implementations and to ensure that reported results are replicable.
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does vitamin D mediate the protective effects of time outdoors on myopia? Findings from a prospective birth cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M. (1995). Giving language a hand: gesture as a cue based communicative strategy. Working Papers, Lund University, Dept. of Linguistics, 44, 41-60.

    Abstract

    All accounts of communicative behaviour in general, and communicative strategies in particular, mention gesture in relation to language acquisition (cf. Faerch & Kasper 1983 for an overview). However, few attempts have been made to investigate how spoken language and spontaneous gesture combine to determine discourse referents. Referential gesture and referential discourse will be of particular interest, since communicative strategies in second language discourse often involve labelling problems.

    This paper will focus on two issues:

    1) Within a cognitive account of communicative strategies, gesture will be seen to be part of conceptual or analysis-based strategies, in that relational features in the referents are exploited;

    2) It will be argued that communication strategies can be seen in terms of cue manipulation in the same sense as sentence processing has been analysed in terms of competing cues. Strategic behaviour, and indeed the process of referring in general, are seen in terms of cues, combining or competing to determine discourse referents. Gesture can then be regarded as being such a cue at the discourse level, and as a cue-based communicative strategy, in that gesture functions by exploiting physically based cues which can be recognised as being part of the referent. The question of iconicity and motivation vs. the arbitrary qualities of gesture as a strategic cue will be addressed in connection with this.
  • Gullberg, M., & Holmqvist, K. (1999). Keeping an eye on gestures: Visual perception of gestures in face-to-face communication. Pragmatics & Cognition, 7(1), 35-63. doi:10.1075/pc.7.1.04gul.

    Abstract

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research here reported employs eye tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.
  • Haan, E. H. F., Seijdel, N., Kentridge, R. W., & Heywood, C. A. (2020). Plasticity versus chronicity: Stable performance on category fluency 40 years post‐onset. Journal of Neuropsychology, 14(1), 20-27. doi:10.1111/jnp.12180.

    Abstract

    What is the long‐term trajectory of semantic memory deficits in patients who have suffered structural brain damage? Memory is, per definition, a changing faculty. The traditional view is that after an initial recovery period, the mature human brain has little capacity to repair or reorganize. More recently, it has been suggested that the central nervous system may be more plastic with the ability to change in neural structure, connectivity, and function. The latter observations are, however, largely based on normal learning in healthy subjects. Here, we report a patient who suffered bilateral ventro‐medial damage after presumed herpes encephalitis in 1971. He was seen regularly in the eighties, and we recently had the opportunity to re‐assess his semantic memory deficits. On semantic category fluency, he showed a very clear category‐specific deficit, performing better than controls on non-living categories and significantly worse on living items. Recent testing showed that his impairments have remained unchanged for more than 40 years. We suggest caution when extrapolating the concept of brain plasticity, as observed during normal learning, to plasticity in the context of structural brain damage.
  • Hagoort, P. (1994). Afasie als een tekort aan tijd voor spreken en verstaan. De Psycholoog, 4, 153-154.
  • Hagoort, P. (1997). De rappe prater als gewoontedier [Review of the book Smooth talkers: The linguistic performance of auctioneers and sportscasters, by Koenraad Kuiper]. Psychologie, 16, 22-23.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie. Psychologie Magazine, 18, 35-36.
  • Hagoort, P. (2002). De koninklijke verloving tussen psychologie en neurowetenschap. De Psycholoog, 37, 107-113.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P. (1994). Het brein op een kier: Over hersenen gesproken. Psychologie, 13, 42-46.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P. (1997). Semantic priming in Broca's aphasics at a short SOA: No support for an automatic access deficit. Brain and Language, 56, 287-300. doi:10.1006/brln.1997.1849.

    Abstract

    This study tests the recent claim that Broca’s aphasics are impaired in automatic lexical access, including the retrieval of word meaning. Subjects are required to perform a lexical decision on visually presented prime target pairs. Half of the word targets are preceded by a related word, half by an unrelated word. Primes and targets are presented with a long stimulus-onset-asynchrony (SOA) of 1400 msec and with a short SOA of 300 msec. Normal priming effects are observed in Broca’s aphasics for both SOAs. This result is discussed in the context of the claim that Broca’s aphasics suffer from an impairment in the automatic access of lexical–semantic information. It is argued that none of the current priming studies provides evidence supporting this claim, since with short SOAs priming effects have been reliably obtained in Broca’s aphasics. The results are more compatible with the claim that in many Broca’s aphasics the functional locus of their comprehension deficit is at the level of postlexical integration processes.
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of german words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O)butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning. The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca’s area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hagoort, P. (1997). Valt er nog te lachen zonder de rechter hersenhelft? Psychologie, 16, 52-55.
  • Hahn, L. E., Ten Buuren, M., Snijders, T. M., & Fikkert, P. (2020). Learning words in a second language while cycling and listening to children’s songs: The Noplica Energy Center. International Journal of Music in Early Childhood, 15(1), 95-108. doi:10.1386/ijmec_00014_1.

    Abstract

    Children’s songs are a great source for linguistic learning. Here we explore whether children can acquire novel words in a second language by playing a game featuring children’s songs in a playhouse. The playhouse is designed by the Noplica foundation (www.noplica.nl) to advance language learning through unsupervised play. We present data from three experiments that serve to scientifically prove the functionality of one game of the playhouse: the Energy Center. For this game, children move three hand-bikes mounted on a panel within the playhouse. Once the children cycle, a song starts playing that is accompanied by musical instruments. In our experiments, children executed a picture-selection task to evaluate whether they acquired new vocabulary from the songs presented during the game. Two of our experiments were run in the field, one at a Dutch and one at an Indian pre-school. The third experiment features data from a more controlled laboratory setting. Our results partly confirm that the Energy Center is a successful means to support vocabulary acquisition in a second language. More research with larger sample sizes and longer access to the Energy Center is needed to evaluate the overall functionality of the game. Based on informal observations at our test sites, however, we are certain that children do pick up linguistic content from the songs during play, as many of the children repeat words and phrases from the songs they heard. We will follow up on these promising observations in future studies.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2020). Six-month-old infants recognize phrases in song and speech. Infancy, 25(5), 699-718. doi:10.1111/infa.12357.

    Abstract

    Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well‐attested and is a cornerstone to the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six‐month‐old Dutch infants (n = 80) were tested in the song or speech modality in the head‐turn preference procedure. First, infants were familiarized to two versions of the same word sequence: One version represented a well‐formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well‐formed sequence, but only in a more fine‐grained analysis. The preference for well‐formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.

    Additional information

    infa12357-sup-0001-supinfo.zip
  • Hammarstroem, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarstroem, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Härle, M., Dobel, C., Cohen, R., & Rockstroh, B. (2002). Brain activity during syntactic and semantic processing - a magnetoencephalographic study. Brain Topography, 15(1), 3-11. doi:10.1023/A:1020070521429.

    Abstract

    Drawings of objects were presented in series of 54 each to 14 German speaking subjects with the tasks to indicate by button presses a) whether the grammatical gender of an object name was masculine ("der") or feminine ("die") and b) whether the depicted object was man-made or nature-made. The magnetoencephalogram (MEG) was recorded with a whole-head neuromagnetometer and task-specific patterns of brain activity were determined in the source space (Minimum Norm Estimates, MNE). A left-temporal focus of activity 150-275 ms after stimulus onset in the gender decision compared to the semantic classification task was discussed as indicating the retrieval of syntactic information, while a more expanded left hemispheric activity in the gender relative to the semantic task 300-625 ms after stimulus onset was discussed as indicating phonological encoding. A predominance of activity in the semantic task was observed over right fronto-central region 150-225 ms after stimulus-onset, suggesting that semantic and syntactic processes are prominent in this stage of lexical selection.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Havron, N., Bergmann, C., & Tsuji, S. (2020). Preregistration in infant research - A primer. Infancy, 25(5), 734-754. doi:10.1111/infa.12353.

    Abstract

    Preregistration, the act of specifying a research plan in advance, is becoming more common in scientific research. Infant researchers contend with unique problems that might make preregistration particularly challenging. Infants are a hard‐to‐reach population, usually yielding small sample sizes, they can only complete a limited number of trials, and they can be excluded based on hard‐to‐predict complications (e.g., parental interference, fussiness). In addition, as effects themselves potentially change with age and population, it is hard to calculate an a priori effect size. At the same time, these very factors make preregistration in infant studies a valuable tool. A priori examination of the planned study, including the hypotheses, sample size, and resulting statistical power, increases the credibility of single studies and adds value to the field. Preregistration might also improve explicit decision making to create better studies. We present an in‐depth discussion of the issues uniquely relevant to infant researchers, and ways to contend with them in preregistration and study planning. We provide recommendations to researchers interested in following current best practices.

    Additional information

    Preprint version on OSF
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Heidlmayr, K., Kihlstedt, M., & Isel, F. (2020). A review on the electroencephalography markers of Stroop executive control processes. Brain and Cognition, 146: 105637. doi:10.1016/j.bandc.2020.105637.

    Abstract

    The present article on executive control addresses the issue of the locus of the Stroop effect by examining neurophysiological components marking conflict monitoring, interference suppression, and conflict resolution. Our goal was to provide an overview of a series of determining neurophysiological findings including neural source reconstruction data on distinct executive control processes and sub-processes involved in the Stroop task. Consistently, a fronto-central N2 component is found to reflect conflict monitoring processes, with its main neural generator being the anterior cingulate cortex (ACC). Then, for cognitive control tasks that involve a linguistic component like the Stroop task, the N2 is followed by a centro-posterior N400 and subsequently a late sustained potential (LSP). The N400 is mainly generated by the ACC and the prefrontal cortex (PFC) and is thought to reflect interference suppression, whereas the LSP plausibly reflects conflict resolution processes. The present overview shows that ERPs constitute a reliable methodological tool for tracing with precision the time course of different executive processes and sub-processes involved in experimental tasks involving a cognitive conflict. Future research should shed light on the fine-grained mechanisms of control involved in linguistic and non-linguistic tasks, respectively.
  • Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2020). No title, no theme: The joined neural space between speakers and listeners during production and comprehension of multi-sentence discourse. Cortex, 130, 111-126. doi:10.1016/j.cortex.2020.04.035.

    Abstract

    Speakers and listeners usually interact in larger discourses than single words or even single sentences. The goal of the present study was to identify the neural bases reflecting how the mental representation of the situation denoted in a multi-sentence discourse (situation model) is constructed and shared between speakers and listeners. An fMRI study using a variant of the ambiguous text paradigm was designed. Speakers (n=15) produced ambiguous texts in the scanner and listeners (n=27) subsequently listened to these texts in different states of ambiguity: preceded by a highly informative, intermediately informative or no title at all. Conventional BOLD activation analyses in listeners, as well as inter-subject correlation analyses between the speakers’ and the listeners’ hemodynamic time courses were performed. Critically, only the processing of disambiguated, coherent discourse with an intelligible situation model representation involved (shared) activation in bilateral lateral parietal and medial prefrontal regions. This shared spatiotemporal pattern of brain activation between the speaker and the listener suggests that the process of memory retrieval in medial prefrontal regions and the binding of retrieved information in the lateral parietal cortex constitutes a core mechanism underlying the communication of complex conceptual representations.

    Additional information

    supplementary data
  • Heilbron, M., Richter, D., Ekman, M., Hagoort, P., & De Lange, F. P. (2020). Word contexts enhance the neural representation of individual letters in early visual cortex. Nature Communications, 11: 321. doi:10.1038/s41467-019-13996-4.

    Abstract

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already at the earliest visual regions.

    Additional information

    Supplementary information
  • Heinrich, T., Ravignani, A., & Hanke, F. H. (2020). Visual timing abilities of a harbour seal (Phoca vitulina) and a South African fur seal (Arctocephalus pusillus pusillus) for sub- and supra-second time intervals. Animal Cognition, 23(5), 851-859. doi:10.1007/s10071-020-01390-3.

    Abstract

    Timing is an essential parameter influencing many behaviours. A previous study demonstrated a high sensitivity of a phocid, the harbour seal (Phoca vitulina), in discriminating time intervals. In the present study, we compared the harbour seal’s timing abilities with the timing abilities of an otariid, the South African fur seal (Arctocephalus pusillus pusillus). This comparison seemed essential as phocids and otariids differ in many respects and might, thus, also differ regarding their timing abilities. We determined time difference thresholds for sub- and supra-second time intervals marked by a white circle on a black background displayed for a specific time interval on a monitor using a staircase method. Contrary to our expectation, the timing abilities of the fur seal and the harbour seal were comparable. Over a broad range of time intervals, 0.8–7 s in the fur seal and 0.8–30 s in the harbour seal, the difference thresholds followed Weber’s law. In this range, both animals could discriminate time intervals differing only by 12% and 14% on average. Timing might thus be a fundamental cue for pinnipeds in general to be used in various contexts, thereby complementing information provided by classical sensory systems. Future studies will help to clarify if timing is indeed involved in foraging decisions or the estimation of travel speed or distance.

    Additional information

    Supplementary material
  • Henson, R. N., Suri, S., Knights, E., Rowe, J. B., Kievit, R. A., Lyall, D. M., Chan, D., Eising, E., & Fisher, S. E. (2020). Effect of apolipoprotein E polymorphism on cognition and brain in the Cambridge Centre for Ageing and Neuroscience cohort. Brain and Neuroscience Advances, 4: 2398212820961704. doi:10.1177/2398212820961704.

    Abstract

    Polymorphisms in the apolipoprotein E (APOE) gene have been associated with individual differences in cognition, brain structure and brain function. For example, the ε4 allele has been associated with cognitive and brain impairment in old age and increased risk of dementia, while the ε2 allele has been claimed to be neuroprotective. According to the ‘antagonistic pleiotropy’ hypothesis, these polymorphisms have different effects across the lifespan, with ε4, for example, postulated to confer benefits on cognitive and brain functions earlier in life. In this Stage 2 of the Registered Report (https://osf.io/bufc4), we report the results from the cognitive and brain measures in the Cambridge Centre for Ageing and Neuroscience cohort (www.cam-can.org). We investigated the antagonistic pleiotropy hypothesis by testing for allele-by-age interactions in approximately 600 people across the adult lifespan (18–88 years), on six outcome variables related to cognition, brain structure and brain function (namely, fluid intelligence, verbal memory, hippocampal grey-matter volume, mean diffusion within white matter and resting-state connectivity measured by both functional magnetic resonance imaging and magnetoencephalography). We found no evidence to support the antagonistic pleiotropy hypothesis. Indeed, Bayes factors supported the null hypothesis in all cases, except for the (linear) interaction between age and possession of the ε4 allele on fluid intelligence, for which the evidence for faster decline in older ages was ambiguous. Overall, these pre-registered analyses question the antagonistic pleiotropy of APOE polymorphisms, at least in healthy adults.

    Additional information

    Supplementary material
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also found a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Hestvik, A., Shinohara, Y., Durvasula, K., Verdonschot, R. G., & Sakai, H. (2020). Abstractness of human speech sound representations. Brain Research, 1732: 146664. doi:10.1016/j.brainres.2020.146664.

    Abstract

    We argue, based on a study of brain responses to speech sound differences in Japanese, that memory encoding of functional speech sounds (phonemes) is highly abstract. As an example, we provide evidence for a theory where the consonants /p t k b d g/ are not only made up of symbolic features but are underspecified with respect to voicing or laryngeal features, and that languages differ with respect to which feature value is underspecified. In a previous study we showed that voiced stops are underspecified in English [Hestvik, A., & Durvasula, K. (2016). Neurobiological evidence for voicing underspecification in English. Brain and Language], as shown by asymmetries in Mismatch Negativity responses to /t/ and /d/. In the current study, we test the prediction that the opposite asymmetry should be observed in Japanese, if voiceless stops are underspecified in that language. Our results confirm this prediction. This matches a linguistic architecture where phonemes are highly abstract and do not encode actual physical characteristics of the corresponding speech sounds, but rather different subsets of abstract distinctive features.
  • Hildebrand, M. S., Jackson, V. E., Scerri, T. S., Van Reyk, O., Coleman, M., Braden, R., Turner, S., Rigbye, K. A., Boys, A., Barton, S., Webster, R., Fahey, M., Saunders, K., Parry-Fielder, B., Paxton, G., Hayman, M., Coman, D., Goel, H., Baxter, A., Ma, A., Davis, N., Reilly, S., Delatycki, M., Liégeois, F. J., Connelly, A., Gecz, J., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2020). Severe childhood speech disorder: Gene discovery highlights transcriptional dysregulation. Neurology, 94(20), e2148-e2167. doi:10.1212/WNL.0000000000009441.

    Abstract

    Objective: Determining the genetic basis of speech disorders provides insight into the neurobiology of human communication. Despite intensive investigation over the past two decades, the etiology of most speech disorders in children remains unexplained. To test the hypothesis that speech disorders have a genetic etiology, we performed genetic analysis of children with severe speech disorder, specifically childhood apraxia of speech (CAS).

    Methods: Precise phenotyping together with research genome or exome analysis was performed on children referred with a primary diagnosis of CAS. Gene coexpression and gene set enrichment analyses were conducted on high-confidence gene candidates.

    Results: Thirty-four probands ascertained for CAS were studied. In 11/34 (32%) probands, we identified highly plausible pathogenic single nucleotide (n = 10; CDK13, EBF3, GNAO1, GNB1, DDX3X, MEIS2, POGZ, SETBP1, UPF2, ZNF142) or copy number (n = 1; 5q14.3q21.1 locus) variants in novel genes or loci for CAS. Testing of parental DNA was available for 9 probands and confirmed that the variants had arisen de novo. Eight genes encode proteins critical for regulation of gene transcription, and analyses of transcriptomic data found that CAS-implicated genes were highly coexpressed in the developing human brain.

    Conclusion: We identify the likely genetic etiology in 11 patients with CAS and implicate 9 genes for the first time. We find that CAS is often a sporadic, monogenic and highly genetically heterogeneous disorder. Highly penetrant variants implicate shared pathways in broad transcriptional regulation, highlighting the key role of transcriptional regulation in normal speech development. CAS is a distinctive, socially debilitating clinical disorder, and understanding its molecular basis is the first step towards identifying precision-medicine approaches.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.

    Abstract

    Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented either in a long preview version, in which participants saw the display for approximately 1.78 seconds before the verb was heard (pre-verb condition), or in a short preview version, in which participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.

    Additional information

    Supplemental Material
  • Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.

    Abstract

    Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations.
  • Hintz*, F., Jongman*, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). Shared lexical access processes in speaking and listening? An individual differences study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1048-1063. doi:10.1037/xlm0000768.

    Abstract

    (* indicates joint first authorship.) Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to which extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access. Both of them are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture naming and auditory lexical decision tasks as well as a battery of tests assessing non-verbal processing speed, vocabulary, and non-verbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes).
  • Hintz, F., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2020). A behavioural dataset for studying individual differences in language skills. Scientific Data, 7: 429. doi:10.1038/s41597-020-00758-x.

    Abstract

    This resource contains data from 112 Dutch adults (18–29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher-level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
  • Hoeks, J. C. J., Vonk, W., & Schriefers, H. (2002). Processing coordinated structures in context: The effect of topic-structure on ambiguity resolution. Journal of Memory and Language, 46(1), 99-119. doi:10.1006/jmla.2001.2800.

    Abstract

    When a sentence such as The model embraced the designer and the photographer laughed is read, the noun phrase the photographer is temporarily ambiguous: It can be either one of the objects of embraced (NP-coordination) or the subject of a new, conjoined sentence (S-coordination). It has been shown for a number of languages, including Dutch (the language used in this study), that readers prefer NP-coordination over S-coordination, at least in isolated sentences. In the present paper, it will be suggested that NP-coordination is preferred because it is the simpler of the two options in terms of topic-structure; in NP-coordinations there is only one topic, whereas S-coordinations contain two. Results from off-line (sentence completion) and online studies (a self-paced reading and an eye tracking experiment) support this topic-structure explanation. The processing difficulty associated with S-coordinated sentences disappeared when these sentences followed contexts favoring a two-topic continuation. This finding establishes topic-structure as an important factor in online sentence processing.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hofer, E., Roshchupkin, G. V., Adams, H. H. H., Knol, M. J., Lin, H., Li, S., Zare, H., Ahmad, S., Armstrong, N. J., Satizabal, C. L., Bernard, M., Bis, J. C., Gillespie, N. A., Luciano, M., Mishra, A., Scholz, M., Teumer, A., Xia, R., Jian, X., Mosley, T. H., Saba, Y., Pirpamer, L., Seiler, S., Becker, J. T., Carmichael, O., Rotter, J. I., Psaty, B. M., Lopez, O. L., Amin, N., Van der Lee, S. J., Yang, Q., Himali, J. J., Maillard, P., Beiser, A. S., DeCarli, C., Karama, S., Lewis, L., Harris, M., Bastin, M. E., Deary, I. J., Witte, A. V., Beyer, F., Loeffler, M., Mather, K. A., Schofield, P. R., Thalamuthu, A., Kwok, J. B., Wright, M. J., Ames, D., Trollor, J., Jiang, J., Brodaty, H., Wen, W., Vernooij, M. W., Hofman, A., Uitterlinden, A. G., Niessen, W. J., Wittfeld, K., Bülow, R., Völker, U., Pausova, Z., Pike, G. B., Maingault, S., Crivello, F., Tzourio, C., Amouyel, P., Mazoyer, B., Neale, M. C., Franz, C. E., Lyons, M. J., Panizzon, M. S., Andreassen, O. A., Dale, A. M., Logue, M., Grasby, K. L., Jahanshad, N., Painter, J. N., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Pizzagalli, F., Stein, J. L., Thompson, P. M., Medland, S. E., ENIGMA-consortium, Sachdev, P. S., Kremen, W. S., Wardlaw, J. M., Villringer, A., Van Duijn, C. M., Grabe, H. J., Longstreth, W. T., Fornage, M., Paus, T., Debette, S., Ikram, M. A., Schmidt, H., Schmidt, R., & Seshadri, S. (2020). Genetic correlations and genome-wide associations of cortical structure in general population samples of 22,824 adults. Nature Communications, 11: 4796. doi:10.1038/s41467-020-18367-y.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15–33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Holler, J., & Beattie, G. (2002). A micro-analytic investigation of how iconic gestures and speech represent core semantic features in talk. Semiotica, 142, 31-69.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample that is more than ten times that used for prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hörpel, S. G., & Firzlaff, U. (2020). Post-natal development of the envelope following response to amplitude modulated sounds in the bat Phyllostomus discolor. Hearing Research, 388: 107904. doi:10.1016/j.heares.2020.107904.

    Abstract

    Bats use a large repertoire of calls for social communication, which are often characterized by temporal amplitude and frequency modulations. As bats are considered to be among the few mammalian species capable of vocal learning, the perception of temporal sound modulations should be crucial for juvenile bats to develop social communication abilities. However, the post-natal development of auditory processing of temporal modulations has not been investigated in bats, so far. Here we use the minimally invasive technique of recording auditory brainstem responses to measure the envelope following response (EFR) to sinusoidally amplitude modulated noise (range of modulation frequencies: 11–130 Hz) in three juveniles (p8–p72) of the bat, Phyllostomus discolor. In two out of three animals, we show that although amplitude modulation processing is basically developed at p8, EFRs matured further over a period of about two weeks until p33. Maturation of the EFR generally took longer for higher modulation frequencies (87–130 Hz) than for lower modulation frequencies (11–58 Hz).
  • Hostetter, A. B., Pouw, W., & Wakefield, E. M. (2020). Learning from gesture and action: An investigation of memory for where objects went and how they got there. Cognitive Science, 44(9): e12889. doi:10.1111/cogs.12889.

    Abstract

    Speakers often use gesture to demonstrate how to perform actions—for example, they might show how to open the top of a jar by making a twisting motion above the jar. Yet it is unclear whether listeners learn as much from seeing such gestures as they learn from seeing actions that physically change the position of objects (i.e., actually opening the jar). Here, we examined participants' implicit and explicit understanding about a series of movements that demonstrated how to move a set of objects. The movements were either shown with actions that physically relocated each object or with gestures that represented the relocation without touching the objects. Further, the end location that was indicated for each object covaried with whether the object was grasped with one or two hands. We found that memory for the end location of each object was better after seeing the physical relocation of the objects, that is, after seeing action, than after seeing gesture, regardless of whether speech was absent (Experiment 1) or present (Experiment 2). However, gesture and action built similar implicit understanding of how a particular handgrasp corresponded with a particular end location. Although gestures miss the benefit of showing the end state of objects that have been acted upon, the data show that gestures are as good as action in building knowledge of how to perform an action.

  • Houwing, D. J., Schuttel, K., Struik, E. L., Arling, C., Ramsteijn, A. S., Heinla, I., & Olivier, J. D. (2020). Perinatal fluoxetine treatment and dams’ early life stress history alter affective behavior in rat offspring depending on serotonin transporter genotype and sex. Behavioural Brain Research, 392: 112657. doi:10.1016/j.bbr.2020.112657.

    Abstract

    Many women diagnosed with a major depression continue or initiate antidepressant treatment during pregnancy. Both maternal stress and selective serotonin reuptake inhibitor (SSRI) antidepressant treatment during pregnancy have been associated with changes in offspring behavior, including increased anxiety and depressive-like behavior. Our aim was to investigate the effects of the SSRI fluoxetine (FLX), with and without the presence of a maternal depression, on affective behavior in male and female rat offspring. As reduced serotonin transporter (SERT) availability has been associated with altered behavioral outcome, both offspring with normal (SERT+/+) and reduced (SERT+/−) SERT expression were included. For our animal model of maternal depression, SERT+/− dams exposed to early life stress were used. Perinatal FLX treatment and early life stress in dams (ELSD) had sex- and genotype-specific effects on affective behavior in the offspring. In female offspring, perinatal FLX exposure interacted with SERT genotype to increase anxiety and depressive-like behavior in SERT+/+, but not SERT+/−, females. In male offspring, ELSD reduced anxiety and interacted with SERT genotype to decrease depressive-like behavior in SERT+/−, but not SERT+/+, males. Altogether, SERT+/+ female offspring appear to be more sensitive than SERT+/− females to the effects of perinatal FLX exposure, while SERT+/− male offspring appear more sensitive than SERT+/+ males to the effects of ELSD on affective behavior. Our data suggest a role for offspring SERT genotype and sex in FLX and ELSD-induced effects on affective behavior, thereby contributing to our understanding of the effects of perinatal SSRI treatment on offspring behavior later in life.
  • Howe, L. J., Hemani, G., Lesseur, C., Gaborieau, V., Ludwig, K. U., Mangold, E., Brennan, P., Ness, A. R., St Pourcain, B., Smith, G. D., & Lewis, S. J. (2020). Evaluating shared genetic influences on nonsyndromic cleft lip/palate and oropharyngeal neoplasms. Genetic Epidemiology, 44(8), 924-933. doi:10.1002/gepi.22343.

    Abstract

    It has been hypothesised that nonsyndromic cleft lip/palate (nsCL/P) and cancer may share aetiological risk factors. Population studies have found inconsistent evidence for increased incidence of cancer in nsCL/P cases, but several genes (e.g., CDH1, AXIN2) have been implicated in the aetiologies of both phenotypes. We aimed to evaluate shared genetic aetiology between nsCL/P and oral cavity/oropharyngeal cancers (OC/OPC), which affect similar anatomical regions. Using a primary sample of 5,048 OC/OPC cases and 5,450 controls of European ancestry and a replication sample of 750 cases and 336,319 controls from UK Biobank, we estimated genetic overlap using nsCL/P polygenic risk scores (PRS), with Mendelian randomization analyses performed to evaluate potential causal mechanisms. In the primary sample, we found strong evidence for an association between a nsCL/P PRS and increased odds of OC/OPC (per standard deviation increase in score, odds ratio [OR]: 1.09; 95% confidence interval [CI]: 1.04, 1.13; p = .000053). Although confidence intervals overlapped with the primary estimate, we did not find confirmatory evidence of an association between the PRS and OC/OPC in UK Biobank (OR 1.02; 95% CI: 0.95, 1.10; p = .55). Mendelian randomization analyses provided evidence that major nsCL/P risk variants are unlikely to influence OC/OPC. Our findings suggest possible shared genetic influences on nsCL/P and OC/OPC.

  • Howells, H., Puglisi, G., Leonetti, A., Vigano, L., Fornia, L., Simone, L., Forkel, S. J., Rossi, M., Riva, M., Cerri, G., & Bello, L. (2020). The role of left fronto-parietal tracts in hand selection: Evidence from neurosurgery. Cortex, 128, 297-311. doi:10.1016/j.cortex.2020.03.018.

    Abstract

    Strong right-hand preference on the population level is a uniquely human feature, although its neural basis is still not clearly defined. Recent behavioural and neuroimaging literature suggests that hand preference may be related to the orchestrated function and size of fronto-parietal white matter tracts bilaterally. Lesions to these tracts induced during tumour resection may provide an opportunity to test this hypothesis. In the present study, a cohort of seventeen neurosurgical patients with left hemisphere brain tumours were recruited to investigate whether resection of certain white matter tracts affects the choice of hand selected for the execution of a goal-directed task (assembly of jigsaw puzzles). Patients performed the puzzles, but also tests for basic motor ability, selective attention and visuo-constructional ability, preoperatively and one month after surgery. An atlas-based disconnectome analysis was conducted to evaluate whether resection of tracts was significantly associated with changes in hand selection. Diffusion tractography was also used to dissect fronto-parietal tracts (the superior longitudinal fasciculus) and the corticospinal tract. Results showed a shift in hand selection despite the absence of any motor or cognitive deficits, which was significantly associated with frontal and parietal resections rather than other lobes. In particular, the shift in hand selection was significantly associated with the resection of dorsal rather than ventral fronto-parietal white matter connections. Dorsal white matter pathways contribute bilaterally to control of goal-directed hand movements. We show that unilateral lesions, that may unbalance the cooperation of the two hemispheres, can alter the choice of hand selected to accomplish movements.
  • Hoymann, G. (2014). [Review of the book Bridging the language gap, Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African languages and linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hubers, F., Redl, T., De Vos, H., Reinarz, L., & De Hoop, H. (2020). Processing prescriptively incorrect comparative particles: Evidence from sentence-matching and eye-tracking. Frontiers in Psychology, 11: 186. doi:10.3389/fpsyg.2020.00186.

    Abstract

    Speakers of a language sometimes use particular constructions which violate prescriptive grammar rules. Despite their prescriptive ungrammaticality, they can occur rather frequently. One such example is the comparative construction in Dutch and similarly in German, where the equative particle is used in comparative constructions instead of the prescriptively correct comparative particle (Dutch beter als Jan and German besser wie Jan ‘lit. better as John’). From a theoretical linguist’s point of view, these so-called grammatical norm violations are perfectly grammatical, even though they are not part of the language’s prescriptive grammar. In a series of three experiments using sentence-matching and eye-tracking methodology, we investigated whether grammatical norm violations are processed as truly grammatical, as truly ungrammatical, or whether they fall in between these two. We hypothesized that the latter would be the case. We analyzed our data using linear mixed effects models in order to capture possible individual differences. The results of the sentence-matching experiments, which were conducted in both Dutch and German, showed that the grammatical norm violation patterns with ungrammatical sentences in both languages. Our hypothesis was therefore not borne out. However, using the more sensitive eye-tracking method on Dutch speakers only, we found that the ungrammatical alternative leads to higher reading times than the grammatical norm violation. We also found significant individual variation regarding this very effect. We furthermore replicated the processing difference between the grammatical norm violation and the prescriptively correct variant. In summary, we conclude that while the results of the more sensitive eye-tracking experiment suggest that grammatical norm violations are not processed on a par with ungrammatical sentences, the results of all three experiments clearly show that grammatical norm violations cannot be considered grammatical, either.

  • Hubers, F., Trompenaars, T., Collin, S., De Schepper, K., & De Hoop, H. (2020). Hypercorrection as a by-product of education. Applied Linguistics, 41(4), 552-574. doi:10.1093/applin/amz001.

    Abstract

    Prescriptive grammar rules are taught in education, generally to ban the use of certain frequently encountered constructions in everyday language. This may lead to hypercorrection, meaning that the prescribed form in one construction is extended to another one in which it is in fact prohibited by prescriptive grammar. We discuss two such cases in Dutch: the hypercorrect use of the comparative particle dan ‘than’ in equative constructions, and the hypercorrect use of the accusative pronoun hen ‘them’ for a dative object. In two experiments, high school students of three educational levels were tested on their use of these hypercorrect forms (nexp1 = 162, nexp2 = 159). Our results indicate an overall large amount of hypercorrection across all levels of education, including pre-university level students who otherwise perform better in constructions targeted by prescriptive grammar rules. We conclude that while teaching prescriptive grammar rules to high school students seems to increase their use of correct forms in certain constructions, this comes at a cost of hypercorrection in others.
  • Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.

    Abstract

    A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.

  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2020). Age-related changes in attentional refocusing during simulated driving. Brain Sciences, 10(8): 530. doi:10.3390/brainsci10080530.

    Abstract

    We recently reported that refocusing attention between temporal and spatial tasks becomes more difficult with increasing age, which could impair daily activities such as driving (Callaghan et al., 2017). Here, we investigated the extent to which difficulties in refocusing attention extend to naturalistic settings such as simulated driving. A total of 118 participants in five age groups (18–30; 40–49; 50–59; 60–69; 70–91 years) were compared during continuous simulated driving, where they repeatedly switched from braking due to traffic ahead (a spatially focal yet temporally complex task) to reading a motorway road sign (a spatially more distributed task). Sequential-Task (switching) performance was compared to Single-Task performance (road sign only) to calculate age-related switch-costs. Electroencephalography was recorded in 34 participants (17 in the 18–30 and 17 in the 60+ years groups) to explore age-related changes in the neural oscillatory signatures of refocusing attention while driving. We indeed observed age-related impairments in attentional refocusing, evidenced by increased switch-costs in response times and by deficient modulation of theta and alpha frequencies. Our findings highlight virtual reality (VR) and Neuro-VR as important methodologies for future psychological and gerontological research.

  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.

    Abstract

    An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P., & Levelt, W. J. M. (1999). A meta-analysis of neuroimaging experiments on word production. Neuroimage, 7, 1028.
  • Indefrey, P., Kleinschmidt, A., Merboldt, K.-D., Krüger, G., Brown, C. M., Hagoort, P., & Frahm, J. (1997). Equivalent responses to lexical and nonlexical visual stimuli in occipital cortex: a functional magnetic resonance imaging study. Neuroimage, 5, 78-81. doi:10.1006/nimg.1996.0232.

    Abstract

    Stimulus-related changes in cerebral blood oxygenation were measured using high-resolution functional magnetic resonance imaging sequentially covering visual occipital areas in contiguous sections. During dynamic imaging, healthy subjects silently viewed pseudowords, single false fonts, or length-matched strings of the same false fonts. The paradigm consisted of a sixfold alternation of an activation and a control task. With pseudowords as activation vs single false fonts as control, responses were seen mainly in medial occipital cortex. These responses disappeared when pseudowords were alternated with false font strings as the control and reappeared when false font strings instead of pseudowords served as activation and were alternated with single false fonts. The string-length contrast alone, therefore, is sufficient to account for the activation pattern observed in medial visual cortex when word-like stimuli are contrasted with single characters.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P. (1999). Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6), 1025. doi:10.1017/S0140525X99342229.

    Abstract

    Clahsen's characterization of nondefault inflection as based exclusively on lexical entries does not capture the full range of empirical data on German inflection. In the verb system, differential effects of lexical frequency seem to be input-related rather than affecting morphological production. In the noun system, the generalization properties of -n and -e plurals exceed mere analogy-based productivity.
  • Indefrey, P. (2014). Time course of word production does not support a parallel input architecture. Language, Cognition and Neuroscience, 29(1), 33-34. doi:10.1080/01690965.2013.847191.

    Abstract

    Hickok's enterprise to unify psycholinguistic and motor control models is highly stimulating. Nonetheless, the model faces problems with respect to the time course of neural activation in word production, the flexibility for continuous speech, and the need for non-motor feedback.

  • Isbilen, E. S., McCauley, S. M., Kidd, E., & Christiansen, M. H. (2020). Statistically induced chunking recall: A memory‐based approach to statistical learning. Cognitive Science, 44(7): e12848. doi:10.1111/cogs.12848.

    Abstract

    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word‐like units, then the statistically structured items would be more accurately recalled relative to the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities, with participants displaying significantly improved recall of the statistically structured items, and even recalling specific trigram chunks from the input. SICR also exhibits greater test–retest reliability in the auditory modality, and greater sensitivity to individual differences in both modalities, than the standard two‐alternative forced‐choice task. These results thereby provide key empirical support to the chunking account of statistical learning and contribute a valuable new tool to the literature.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jacoby, N., Margulis, E. H., Clayton, M., Hannon, E., Honing, H., Iversen, J., Klein, T. R., Mehr, S. A., Pearson, L., Peretz, I., Perlman, M., Polak, R., Ravignani, A., Savage, P. E., Steingo, G., Stevens, C. J., Trainor, L., Trehub, S., Veal, M., & Wald-Fuhrmann, M. (2020). Cross-cultural work in music cognition: Challenges, insights, and recommendations. Music Perception, 37(3), 185-195. doi:10.1525/mp.2020.37.3.185.

    Abstract

    Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of “music” and “culture.”
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate, however, older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2002). Inflectional frames in language production. Language and Cognitive Processes, 17(3), 209-236. doi:10.1006/jmla.2001.2800.

    Abstract

    The authors report six implicit priming experiments that examined the production of inflected forms. Participants produced words out of small sets in response to prompts. The words differed in form or shared word-initial segments, which allowed for preparation. In constant inflectional sets, the words had the same number of inflectional suffixes, whereas in variable sets the number of suffixes differed. In the experiments, preparation effects were obtained, which were larger in the constant than in the variable sets. Control experiments showed that this difference in effect was not due to syntactic class or phonological form per se. The results are interpreted in terms of a slot-and-filler model of word production, in which inflectional frames, on the one hand, and stems and affixes, on the other hand, are independently spelled out on the basis of an abstract morpho-syntactic specification of the word, which is followed by morpheme-to-frame association.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Jebb, D., Huang, Z., Pippel, M., Hughes, G. M., Lavrichenko, K., Devanna, P., Winkler, S., Jermiin, L. S., Skirmuntt, E. C., Katzourakis, A., Burkitt-Gray, L., Ray, D. A., Sullivan, K. A. M., Roscito, J. G., Kirilenko, B. M., Dávalos, L. M., Corthals, A. P., Power, M. L., Jones, G., Ransome, R. D., Dechmann, D., Locatelli, A. G., Puechmaille, S. J., Fedrigo, O., Jarvis, E. D., Hiller, M., Vernes, S. C., Myers, E. W., & Teeling, E. C. (2020). Six reference-quality genomes reveal evolution of bat adaptations. Nature, 583, 578-584. doi:10.1038/s41586-020-2486-3.

    Abstract

    Bats possess extraordinary adaptations, including flight, echolocation, extreme longevity and unique immunity. High-quality genomes are crucial for understanding the molecular basis and evolution of these traits. Here we incorporated long-read sequencing and state-of-the-art scaffolding protocols to generate, to our knowledge, the first reference-quality genomes of six bat species (Rhinolophus ferrumequinum, Rousettus aegyptiacus, Phyllostomus discolor, Myotis myotis, Pipistrellus kuhlii and Molossus molossus). We integrated gene projections from our ‘Tool to infer Orthologs from Genome Alignments’ (TOGA) software with de novo and homology gene predictions as well as short- and long-read transcriptomics to generate highly complete gene annotations. To resolve the phylogenetic position of bats within Laurasiatheria, we applied several phylogenetic methods to comprehensive sets of orthologous protein-coding and noncoding regions of the genome, and identified a basal origin for bats within Scrotifera. Our genome-wide screens revealed positive selection on hearing-related genes in the ancestral branch of bats, which is indicative of laryngeal echolocation being an ancestral trait in this clade. We found selection and loss of immunity-related genes (including pro-inflammatory NF-κB regulators) and expansions of anti-viral APOBEC3 genes, which highlights molecular mechanisms that may contribute to the exceptional immunity of bats. Genomic integrations of diverse viruses provide a genomic record of historical tolerance to viral infection in bats. Finally, we found and experimentally validated bat-specific variation in microRNAs, which may regulate bat-specific gene-expression programs. Our reference-quality bat genomes provide the resources required to uncover and validate the genomic basis of adaptations of bats, and stimulate new avenues of research that are directly relevant to human health and disease.

  • Jescheniak, J. D., & Levelt, W. J. M. (1994). Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(4), 824-843.

    Abstract

    In 7 experiments the authors investigated the locus of word frequency effects in speech production. Experiment 1 demonstrated a frequency effect in picture naming that was robust over repetitions. Experiments 2, 3, and 7 excluded contributions from object identification and initiation of articulation. Experiments 4 and 5 investigated whether the effect arises in accessing the syntactic word (lemma) by using a grammatical gender decision task. Although a frequency effect was found, it dissipated under repeated access to the word's gender. Experiment 6 tested whether the robust frequency effect arises in accessing the phonological form (lexeme) by having Ss translate words that produced homophones. Low-frequent homophones behaved like high-frequent controls, inheriting the accessing speed of their high-frequent homophone twins. Because homophones share the lexeme, not the lemma, this suggests a lexeme-level origin of the robust effect.
  • Jesse, A., & McQueen, J. M. (2014). Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 793-808. doi:10.1080/17470218.2013.834371.

    Abstract

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
  • Jessop, A., & Chang, F. (2020). Thematic role information is maintained in the visual object-tracking system. Quarterly Journal of Experimental Psychology, 73(1), 146-163. doi:10.1177/1747021819882842.

    Abstract

    Thematic roles characterise the functions of participants in events, but there is no agreement on how these roles are identified in the real world. In three experiments, we examined how role identification in push events is supported by the visual object-tracking system. Participants saw one to three push events in visual scenes with nine identical randomly moving circles. After a period of random movement, two circles from one of the push events and a foil object were given different colours and the participants had to identify their roles in the push with an active sentence, such as red pushed blue. It was found that the participants could track the agent and patient targets and generate descriptions that identified their roles at above chance levels, even under difficult conditions, such as when tracking multiple push events (Experiments 1–3), fixating their gaze (Experiment 1), performing a concurrent speeded-response task (Experiment 2), and when tracking objects that were temporarily invisible (Experiment 3). The results were consistent with previous findings of an average tracking capacity limit of four objects, individual differences in this capacity, and the use of attentional strategies. The studies demonstrated that thematic role information can be maintained when tracking the identity of visually identical objects, then used to map role fillers (e.g., the agent of a push event) into their appropriate sentence positions. This suggests that thematic role features are stored temporarily in the visual object-tracking system.
  • De Jong, N. H., Feldman, L. B., Schreuder, R., Pastizzo, M., & Baayen, R. H. (2002). The processing and representation of Dutch and English compounds: Peripheral morphological and central orthographic effects. Brain and Language, 81(1-3), 555-567. doi:10.1006/brln.2001.2547.

    Abstract

    In this study, we use the association between various measures of the morphological family and decision latencies to reveal the way in which the components of Dutch and English compounds are processed. The results show that for constituents of concatenated compounds in both languages, a position-related token count of the morphological family plays a role, whereas English open compounds show an effect of a type count, similar to the effect of family size for simplex words. When Dutch compounds are written with an artificial space, they reveal no effect of type count, which shows that the differential effect for the English open compounds is not superficial. The final experiment provides converging evidence for the lexical consequences of the space in English compounds. Decision latencies for English simplex words are better predicted from counts of the morphological family that include concatenated and hyphenated but not open family members.
  • Jongman, S. R., Roelofs, A., & Lewis, A. G. (2020). Attention for speaking: Prestimulus motor-cortical alpha power predicts picture naming latencies. Journal of Cognitive Neuroscience, 32(5), 747-761. doi:10.1162/jocn_a_01513.

    Abstract

    There is a range of variability in the speed with which a single speaker will produce the same word from one instance to another. Individual differences studies have shown that the speed of production and the ability to maintain attention are related. This study investigated whether fluctuations in production latencies can be explained by spontaneous fluctuations in speakers' attention just prior to initiating speech planning. A relationship between individuals' incidental attentional state and response performance is well attested in visual perception, with lower prestimulus alpha power associated with faster manual responses. Alpha is thought to have an inhibitory function: Low alpha power suggests less inhibition of a specific brain region, whereas high alpha power suggests more inhibition. Does the same relationship hold for cognitively demanding tasks such as word production? In this study, participants named pictures while EEG was recorded, with alpha power taken to index an individual's momentary attentional state. Participants' level of alpha power just prior to picture presentation and just prior to speech onset predicted subsequent naming latencies. Specifically, higher alpha power in the motor system resulted in faster speech initiation. Our results suggest that one index of a lapse of attention during speaking is reduced inhibition of motor-cortical regions: Decreased motor-cortical alpha power indicates reduced inhibition of this area while early stages of production planning unfold, which leads to increased interference from motor-cortical signals and longer naming latencies. This study shows that the language production system is not impermeable to the influence of attention.
  • Jongman, S. R., Piai, V., & Meyer, A. S. (2020). Planning for language production: The electrophysiological signature of attention to the cue to speak. Language, Cognition and Neuroscience, 35(7), 915-932. doi:10.1080/23273798.2019.1690153.

    Abstract

    In conversation, speech planning can overlap with listening to the interlocutor. It has been postulated that once there is enough information to formulate a response, planning is initiated and the response is maintained in working memory. Concurrently, the auditory input is monitored for the turn end such that responses can be launched promptly. In three EEG experiments, we aimed to identify the neural signature of phonological planning and monitoring by comparing delayed responding to not responding (reading aloud, repetition and lexical decision). These comparisons consistently resulted in a sustained positivity and beta power reduction over posterior regions. We argue that these effects reflect attention to the sequence end. Phonological planning and maintenance were not detected in the neural signature even though it is highly likely these were taking place. This suggests that EEG must be used cautiously to identify response planning when the neural signal is overridden by attention effects.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Jordens, P. (2002). Finiteness in early child Dutch. Linguistics, 40(4), 687-765. doi:10.1515/ling.2002.029.
  • Jordens, P. (1997). Introducing the basic variety. Second Language Research, 13(4), 289-300. doi:10.1191/026765897672176425.
  • Junge, C., & Cutler, A. (2014). Early word recognition and later language skills. Brain sciences, 4(4), 532-559. doi:10.3390/brainsci4040532.

    Abstract

    Recent behavioral and electrophysiological evidence has highlighted the long-term importance for language skills of an early ability to recognize words in continuous speech. We here present further tests of this long-term link in the form of follow-up studies conducted with two (separate) groups of infants who had earlier participated in speech segmentation tasks. Each study extends prior follow-up tests: Study 1 by using a novel follow-up measure that taps into online processing, Study 2 by assessing language performance relationships over a longer time span than previously tested. Results of Study 1 show that brain correlates of speech segmentation ability at 10 months are positively related to 16-month-olds’ target fixations in a looking-while-listening task. Results of Study 2 show that infant speech segmentation ability no longer directly predicts language profiles at the age of five. However, a meta-analysis across our results and those of similar studies (Study 3) reveals that age at follow-up does not moderate effect size. Together, the results suggest that infants’ ability to recognize words in speech certainly benefits early vocabulary development; further observed relationships of later language skills to early word recognition may be consequent upon this vocabulary size effect.
  • Junge, C., Cutler, A., & Hagoort, P. (2014). Successful word recognition by 10-month-olds given continuous speech both at initial exposure and test. Infancy, 19(2), 179-193. doi:10.1111/infa.12040.

    Abstract

    Most words that infants hear occur within fluent speech. To compile a vocabulary, infants therefore need to segment words from speech contexts. This study is the first to investigate whether infants (here: 10-month-olds) can recognize words when both initial exposure and test presentation are in continuous speech. Electrophysiological evidence attests that this indeed occurs: An increased extended negativity (word recognition effect) appears for familiarized target words relative to control words. This response proved constant at the individual level: Only infants who showed this negativity at test had shown such a response, within six repetitions after first occurrence, during familiarization.
  • Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.

    Abstract

    Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing.
  • Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.

    Abstract

    During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
  • Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.

    Abstract

    Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
