Publications

  • Gialluisi, A., Pippucci, T., Anikster, Y., Ozbek, U., Medlej-Hashim, M., Mégarbané, A., & Romeo, G. (2012). Estimating the allele frequency of autosomal recessive disorders through mutational records and consanguinity: The homozygosity index (HI). Annals of Human Genetics, 76, 159-167. doi:10.1111/j.1469-1809.2011.00693.x.

    Abstract

    In principle, mutational records make it possible to estimate the frequencies of disease alleles (q) for autosomal recessive disorders using a novel approach based on the calculation of the Homozygosity Index (HI), i.e., the proportion of homozygous patients, which is complementary to the proportion of compound heterozygous patients, P(CH). In other words, the rarer the disorder, the higher the HI and the lower the P(CH). To test this hypothesis we used mutational records of individuals affected with Familial Mediterranean Fever (FMF) and Phenylketonuria (PKU), born to either consanguineous or apparently unrelated parents from six population samples of the Mediterranean region. Despite the unavailability of precise values of the inbreeding coefficient for the general population, which are needed in the case of apparently unrelated parents, our estimates of q are very similar to those of previous descriptive epidemiological studies. Finally, we inferred from simulation studies that the minimum sample size needed to use this approach is 25 patients with either unrelated or first-cousin parents. These results show that the HI can be used to produce a ranking order of allele frequencies of autosomal recessive disorders, especially in populations with high rates of consanguineous marriages.
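
    The HI itself is straightforward to compute from mutational records. The sketch below is an illustrative reconstruction only, not the paper's full estimator (which additionally uses inbreeding coefficients to derive q); the mutation labels are hypothetical FMF-style examples.

```python
# Illustrative sketch: computing the Homozygosity Index (HI) from patient
# mutation records. Each record lists the two disease alleles found in one
# patient; HI is the proportion of patients carrying two copies of the same
# mutation, and P(CH) = 1 - HI is the compound-heterozygote share.

def homozygosity_index(records):
    """records: list of (allele1, allele2) pairs, one per patient."""
    if len(records) < 25:  # minimum sample size suggested by the simulations
        raise ValueError("need at least 25 patients for a stable HI")
    homozygous = sum(1 for a, b in records if a == b)
    return homozygous / len(records)

# Hypothetical records: 18 homozygotes, 12 compound heterozygotes
patients = [("M694V", "M694V")] * 18 + [("M694V", "V726A")] * 12
hi = homozygosity_index(patients)    # 0.6
p_ch = 1 - hi                        # 0.4
```

    A rarer disorder would shift this balance toward homozygotes and so push HI toward 1.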
  • Giglio, L., Ostarek, M., Weber, K., & Hagoort, P. (2022). Commonalities and asymmetries in the neurobiological infrastructure for language production and comprehension. Cerebral Cortex, 32(7), 1405-1418. doi:10.1093/cercor/bhab287.

    Abstract

    The neurobiology of sentence production has been largely understudied compared to the neurobiology of sentence comprehension, due to difficulties with experimental control and motion-related artifacts in neuroimaging. We studied the neural response to constituents of increasing size and specifically focused on the similarities and differences in the production and comprehension of the same stimuli. Participants had to either produce or listen to stimuli in a gradient of constituent size based on a visual prompt. Larger constituent sizes engaged the left inferior frontal gyrus (LIFG) and middle temporal gyrus (LMTG) extending to inferior parietal areas in both production and comprehension, confirming that the neural resources for syntactic encoding and decoding are largely overlapping. An ROI analysis in LIFG and LMTG also showed that production elicited larger responses to constituent size than comprehension and that the LMTG was more engaged in comprehension than production, while the LIFG was more engaged in production than comprehension. Finally, increasing constituent size was characterized by later BOLD peaks in comprehension but earlier peaks in production. These results show that syntactic encoding and parsing engage overlapping areas, but there are asymmetries in the engagement of the language network due to the specific requirements of production and comprehension.

    Additional information

    supplementary material
  • Gisselgard, J., Uddén, J., Ingvar, M., & Petersson, K. M. (2007). Disruption of order information by irrelevant items: A serial recognition paradigm. Acta Psychologica, 124(3), 356-369. doi:10.1016/j.actpsy.2006.04.002.

    Abstract

    The irrelevant speech effect (ISE) is defined as a decrement in visually presented digit-list short-term memory performance due to exposure to irrelevant auditory material. Perhaps the most successful theoretical explanation of the effect is the changing state hypothesis. This hypothesis explains the effect in terms of confusion between amodal serial order cues, and represents a view based on the interference caused by the processing of similar order information of the visual and auditory materials. An alternative view suggests that the interference occurs as a consequence of the similarity between the visual and auditory contents of the stimuli. An important argument for the former view is the observation that the ISE is almost exclusively observed in tasks that require memory for serial order. However, most short-term memory tasks require that both item and order information be retained in memory. An ideal task to investigate the sensitivity of maintenance of serial order to irrelevant speech would be one that calls upon order information but not item information. One task that is particularly suited to address this issue is serial recognition. In a typical serial recognition task, a list of items is presented and then probed by the same list in which the order of two adjacent items has been transposed. Due to the re-presentation of the encoding string, serial recognition requires primarily the serial order to be maintained while the content of the presented items is deemphasized. In demonstrating a highly significant ISE of changing versus steady-state auditory items in a serial recognition task, the present finding lends support to and extends previous empirical findings suggesting that irrelevant speech has the potential to interfere with the coding of the order of the items to be memorized.
  • Gisselgard, J., Petersson, K. M., Baddeley, A., & Ingvar, M. (2003). The irrelevant speech effect: A PET study. Neuropsychologia, 41, 1899-1911. doi:10.1016/S0028-3932(03)00122-2.

    Abstract

    Positron emission tomography (PET) was performed in normal volunteers during a serial recall task under the influence of irrelevant speech comprising both single item repetition and multi-item sequences. An interaction approach was used to identify brain areas specifically related to the irrelevant speech effect. We interpreted activations as compensatory recruitment of complementary working memory processing, and decreased activity in terms of suppression of task relevant areas invoked by the irrelevant speech. The interaction between the distractors and working memory revealed a significant effect in the left, and to a lesser extent in the right, superior temporal region, indicating that initial phonological processing was relatively suppressed. Additional areas of decreased activity were observed in an a priori defined cortical network related to verbal working memory, incorporating the bilateral superior temporal and inferior/middle frontal cortices, extending into Broca’s area on the left. We also observed a weak activation in the left inferior parietal cortex, a region suggested to reflect the phonological store, the subcomponent where the interference is assumed to take place. The results suggest that the irrelevant speech effect is correlated with and thus tentatively may be explained in terms of a suppression of components of the verbal working memory network as outlined. The results can be interpreted in terms of inhibitory top–down attentional mechanisms attenuating the influence of the irrelevant speech, although additional studies are clearly necessary to more fully characterize the nature of this phenomenon and its theoretical implications for existing short-term memory models.
  • Glaser, B., Nikolov, I., Chubb, D., Hamshere, M. L., Segurado, R., Moskvina, V., & Holmans, P. (2007). Analyses of single marker and pairwise effects of candidate loci for rheumatoid arthritis using logistic regression and random forests. BMC Proceedings, 1(Suppl 1): 54.

    Abstract

    Using parametric and nonparametric techniques, our study investigated the presence of single locus and pairwise effects between 20 markers of the Genetic Analysis Workshop 15 (GAW15) North American Rheumatoid Arthritis Consortium (NARAC) candidate gene data set (Problem 2), analyzing 463 independent patients and 855 controls. Specifically, our work examined the correspondence between logistic regression (LR) analysis of single-locus and pairwise interaction effects, and random forest (RF) single and joint importance measures. For this comparison, we selected small but stable RFs (500 trees), which showed strong correlations (r~0.98) between their importance measures and those by RFs grown on 5000 trees. Both RF importance measures captured most of the LR single-locus and pairwise interaction effects, while joint importance measures also corresponded to full LR models containing main and interaction effects. We furthermore showed that RF measures were particularly sensitive to data imputation. The most consistent pairwise effect on rheumatoid arthritis was found between two markers within MAP3K7IP2/SUMO4 on 6q25.1, although LR and RFs assigned different significance levels. Within a hypothetical two-stage design, pairwise LR analysis of all markers with significant RF single importance would have reduced the number of possible combinations in our small data set by 61%, whereas joint importance measures would have been less efficient for marker pair reduction. This suggests that RF single importance measures, which are able to detect a wide range of interaction effects and are computationally very efficient, might be exploited as a pre-screening tool for larger association studies. Follow-up analysis, such as by LR, is required, since RFs do not indicate high-risk genotype combinations.
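
    The pair-reduction arithmetic behind the two-stage design can be made concrete. A minimal sketch, assuming (hypothetically) that only pairs among the k markers passing the RF screen are carried forward to pairwise LR; the value of k below is illustrative, not the paper's.

```python
from math import comb

n_markers = 20
all_pairs = comb(n_markers, 2)      # 190 exhaustive pairwise LR tests

k = 12                              # hypothetical count of markers passing the RF screen
screened_pairs = comb(k, 2)         # 66 pairs among retained markers
reduction = 1 - screened_pairs / all_pairs  # fraction of pairwise tests saved
```

    With these toy numbers the screen removes roughly two thirds of the pairwise tests, in the same ballpark as the 61% reduction reported for the actual data set.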
  • Goodhew, S. C., Reynolds, K., Edwards, M., & Kidd, E. (2022). The content of gender stereotypes embedded in language use. Journal of Language and Social Psychology, 41(2), 219-231. doi:10.1177/0261927X211033930.

    Abstract

    Gender stereotypes have endured despite substantial change in gender roles. Previous work has assessed how gender stereotypes affect language production in particular interactional contexts. Here, we assessed communication biases where context was less specified: written texts to diffuse audiences. We used Latent Semantic Analysis (LSA) to computationally quantify the similarity in meaning between gendered names and stereotype-linked terms in these communications. This revealed that female names were more similar in meaning to the proscriptive (undesirable) masculine terms, such as emotional.
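
    LSA-based similarity of the kind used here reduces to cosine similarity between term vectors in a dimensionality-reduced semantic space. The toy sketch below uses hand-made three-dimensional vectors as hypothetical stand-ins for LSA dimensions; it shows only the similarity computation, not the study's corpus or results.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two term vectors (1 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical vectors standing in for positions in an LSA space
vec_female_name = np.array([0.9, 0.1, 0.2])
vec_male_name = np.array([0.1, 0.9, 0.3])
vec_emotional = np.array([0.8, 0.2, 0.1])

sim_female = cosine(vec_female_name, vec_emotional)
sim_male = cosine(vec_male_name, vec_emotional)
```

    A stereotype-consistent pattern of the kind the study reports would surface as sim_female exceeding sim_male for a term like "emotional".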
  • Gordon, J. K., & Clough, S. (2022). How do clinicians judge fluency in aphasia? Journal of Speech, Language, and Hearing Research, 65(4), 1521-1542. doi:10.1044/2021_JSLHR-21-00484.

    Abstract

    Purpose: Aphasia fluency is multiply determined by underlying impairments in lexical retrieval, grammatical formulation, and speech production. This poses challenges for establishing a reliable and feasible tool to measure fluency in the clinic. We examine the reliability and validity of perceptual ratings and clinical perspectives on the utility and relevance of methods used to assess fluency.
    Method: In an online survey, 112 speech-language pathologists rated spontaneous speech samples from 181 people with aphasia (PwA) on eight perceptual rating scales (overall fluency, speech rate, pausing, effort, melody, phrase length, grammaticality, and lexical retrieval) and answered questions about their current practices for assessing fluency in the clinic.
    Results: Interrater reliability for the eight perceptual rating scales ranged from fair to good. The most reliable scales were speech rate, pausing, and phrase length. Similarly, clinicians' perceived fluency ratings were most strongly correlated to objective measures of speech rate and utterance length but were also related to grammatical complexity, lexical diversity, and phonological errors. Clinicians' ratings reflected expected aphasia subtype patterns: Individuals with Broca's and transcortical motor aphasia were rated below average on fluency, whereas those with anomic, conduction, and Wernicke's aphasia were rated above average. Most respondents reported using multiple methods in the clinic to measure fluency but relying most frequently on subjective judgments.
    Conclusions: This study lends support for the use of perceptual rating scales as valid assessments of speech-language production but highlights the need for a more reliable method for clinical use. We describe next steps for developing such a tool that is clinically feasible and helps to identify the underlying deficits disrupting fluency to inform treatment targets.
  • Gretscher, H., Haun, D. B. M., Liebal, K., & Kaminski, J. (2012). Orang-utans rely on orientation cues and egocentric rules when judging others' perspectives in a competitive food task. Animal Behaviour, 84, 323-331. doi:10.1016/j.anbehav.2012.04.021.

    Abstract

    Adopting the paradigm of a study conducted with chimpanzees, Pan troglodytes (Melis et al. 2006, Journal of Comparative Psychology, 120, 154–162), we investigated orang-utans', Pongo pygmaeus, understanding of others' visual perspectives. More specifically, we examined whether orang-utans would adjust their behaviour in a way that prevents a human competitor from seeing them steal a piece of food. In the task, subjects had to reach through one of two opposing Plexiglas tunnels in order to retrieve a food reward. Both rewards were also physically accessible to a human competitor sitting opposite the subject. Subjects always had the possibility of reaching one piece of food that was outside the human's line of sight. This was because either the human was oriented to one, but not the other, reward or because one tunnel was covered by an opaque barrier and the other remained transparent. In the situation in which the human was oriented towards one reward, the orang-utans successfully avoided the tunnel that the competitor was facing. If one tunnel was covered, they marginally preferred to reach through the opaque versus the transparent tunnel. However, they did so frequently after initially inspecting the transparent tunnel (then switching to the opaque one). Considering only the subjects' initial inspections, they chose randomly between the opaque and transparent tunnel, indicating that their final decision to reach was probably driven by a more egocentric behavioural rule. Overall the results suggest that orang-utans have a limited understanding of others' perspectives, relying mainly on cues from facial and bodily orientation and egocentric rules when making such judgements.
  • Guadalupe, T., Kong, X., Akkermans, S. E. A., Fisher, S. E., & Francks, C. (2022). Relations between hemispheric asymmetries of grey matter and auditory processing of spoken syllables in 281 healthy adults. Brain Structure & Function, 227, 561-572. doi:10.1007/s00429-021-02220-z.

    Abstract

    Most people have a right-ear advantage for the perception of spoken syllables, consistent with left hemisphere dominance for speech processing. However, there is considerable variation, with some people showing left-ear advantage. The extent to which this variation is reflected in brain structure remains unclear. We tested for relations between hemispheric asymmetries of auditory processing and of grey matter in 281 adults, using dichotic listening and voxel-based morphometry. This was the largest study of this issue to date. Per-voxel asymmetry indexes were derived for each participant following registration of brain magnetic resonance images to a template that was symmetrized. The asymmetry index derived from dichotic listening was related to grey matter asymmetry in clusters of voxels corresponding to the amygdala and cerebellum lobule VI. There was also a smaller, non-significant cluster in the posterior superior temporal gyrus, a region of auditory cortex. These findings contribute to the mapping of asymmetrical structure–function links in the human brain and suggest that subcortical structures should be investigated in relation to hemispheric dominance for speech processing, in addition to auditory cortex.
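
    Both the dichotic-listening and per-voxel measures in this line of work are typically expressed as a laterality index of the form (L − R)/(L + R); whether the study used exactly this normalization is an assumption here, so treat the sketch as illustrative.

```python
def asymmetry_index(left, right):
    """Standard laterality index: positive = leftward, negative = rightward.
    Assumed form (L - R) / (L + R); illustrative, not necessarily the paper's."""
    return (left - right) / (left + right)

# e.g. 40 correct left-ear vs 60 correct right-ear reports in dichotic
# listening gives a negative index, i.e. a right-ear advantage
ear_index = asymmetry_index(40, 60)   # -0.2
```

    The same formula applied voxel-wise to grey matter values in a symmetrized template yields the per-voxel structural asymmetry maps described above.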

    Additional information

    supplementary information
  • Le Guen, O. (2003). Quand les morts reviennent, réflexion sur l'ancestralité chez les Mayas des Basses Terres. Journal de la Société des Américanistes, 89(2), 171-205.

    Abstract

    When the dead come home… Remarks on ancestor worship among the Lowland Mayas. In the Amerindian ethnographic literature, ancestor worship is often mentioned, but evidence of its existence is lacking. This article tries to demonstrate that some Lowland Maya do worship ancestors; it uses precise criteria taken from ethnological studies of societies where ancestor worship is common, compared to Maya beliefs and practices. All Souls’ Day, or hanal pixan, seems to be the most significant manifestation of this cult. Our approach is comparative through time (using colonial as well as twentieth-century ethnographic data) and through space (considering the practices and beliefs of two Maya groups, the Yucatec and the Lacandon Maya).
  • Guggenheim, J. A., Northstone, K., McMahon, G., Ness, A. R., Deere, K., Mattocks, C., St Pourcain, B., & Williams, C. (2012). Time outdoors and physical activity as predictors of incident myopia in childhood: a prospective cohort study. Investigative Ophthalmology and Visual Science, 53(6), 2856-2865. doi:10.1167/iovs.11-9091.

    Abstract

    PURPOSE: Time spent in "sports/outdoor activity" has shown a negative association with incident myopia during childhood. We investigated the association of incident myopia with time spent outdoors and physical activity separately. METHODS: Participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) were assessed by noncycloplegic autorefraction at ages 7, 10, 11, 12, and 15 years, and classified as myopic (≤-1 diopters) or as emmetropic/hyperopic (≥-0.25 diopters) at each visit (N = 4,837-7,747). Physical activity at age 11 years was measured objectively using an accelerometer, worn for 1 week. Time spent outdoors was assessed via a parental questionnaire administered when children were aged 8-9 years. Variables associated with incident myopia were examined using Cox regression. RESULTS: In analyses using all available data, both time spent outdoors and physical activity were associated with incident myopia, with time outdoors having the larger effect. The results were similar for analyses restricted to children classified as either nonmyopic or emmetropic/hyperopic at age 11 years. Thus, for children nonmyopic at age 11, the hazard ratio (95% confidence interval, CI) for incident myopia was 0.66 (0.47-0.93) for a high versus low amount of time spent outdoors, and 0.87 (0.76-0.99) per unit standard deviation above average increase in moderate/vigorous physical activity. CONCLUSION: Time spent outdoors was predictive of incident myopia independently of physical activity level. The greater association observed for time outdoors suggests that the previously reported link between "sports/outdoor activity" and incident myopia is due mainly to its capture of information relating to time outdoors rather than physical activity.
  • Gullberg, M. (1995). Giving language a hand: gesture as a cue based communicative strategy. Working Papers, Lund University, Dept. of Linguistics, 44, 41-60.

    Abstract

    All accounts of communicative behaviour in general, and communicative strategies in particular, mention gesture in relation to language acquisition (cf. Faerch & Kasper 1983 for an overview). However, few attempts have been made to investigate how spoken language and spontaneous gesture combine to determine discourse referents. Referential gesture and referential discourse will be of particular interest, since communicative strategies in second language discourse often involve labelling problems.

    This paper will focus on two issues:

    1) Within a cognitive account of communicative strategies, gesture will be seen to be part of conceptual or analysis-based strategies, in that relational features in the referents are exploited;

    2) It will be argued that communication strategies can be seen in terms of cue manipulation in the same sense as sentence processing has been analysed in terms of competing cues. Strategic behaviour, and indeed the process of referring in general, are seen in terms of cues, combining or competing to determine discourse referents. Gesture can then be regarded as being such a cue at the discourse level, and as a cue-based communicative strategy, in that gesture functions by exploiting physically based cues which can be recognised as being part of the referent. The question of iconicity and motivation vs. the arbitrary qualities of gesture as a strategic cue will be addressed in connection with this.
  • Gullberg, M., Roberts, L., & Dimroth, C. (2012). What word-level knowledge can adult learners acquire after minimal exposure to a new language? International Review of Applied Linguistics, 50, 239-276.

    Abstract

    Discussions about the adult L2 learning capacity often take as their starting point stages where considerable L2 knowledge has already been accumulated. This paper probes the absolute earliest stages of learning and investigates what lexical knowledge adult learners can extract from complex, continuous speech in an unknown language after minimal exposure and without any help. Dutch participants were exposed to naturalistic but controlled audiovisual input in Mandarin Chinese, in which item frequency and gestural highlighting were manipulated. The results from a word recognition task showed that adults are able to draw on frequency to recognize disyllabic words appearing only eight times in continuous speech. The findings from a sound-to-picture matching task revealed that the mapping of meaning to word form requires a combination of cues: disyllabic words accompanied by a gesture were correctly assigned meaning after eight encounters. Overall, the study suggests that the adult learning mechanism, drawing on frequency, gestural cues, and syllable structure, is considerably more powerful than typically assumed in the SLA literature. Even in the absence of pre-existing knowledge about cognates and the sound system to bootstrap and boost learning, it deals efficiently with very little, very complex input.
  • Gur, C., & Sumer, B. (2022). Learning to introduce referents in narration is resilient to the effects of late sign language exposure. Sign Language & Linguistics, 25(2), 205-234. doi:10.1075/sll.21004.gur.

    Abstract

    The present study investigates the effects of late sign language exposure on narrative development in Turkish Sign Language (TİD) by focusing on the introductions of main characters and the linguistic strategies used in these introductions. We study these domains by comparing narrations produced by native and late signers in TİD. The results of our study reveal that late sign language exposure does not hinder the acquisition of linguistic devices to introduce main characters in narrations. Thus, their acquisition seems to be resilient to the effects of late language exposure. Our study further suggests that a two-year exposure to sign language facilitates the acquisition of these skills in signing children even in the case of late language exposure, thus providing further support for the importance of sign language exposure to develop linguistic skills for signing children.
  • Gussenhoven, C., Lu, Y.-A., Lee-Kim, S.-I., Liu, C., Rahmani, H., Riad, T., & Zora, H. (2022). The sequence recall task and lexicality of tone: Exploring tone “deafness”. Frontiers in Psychology, 13: 902569. doi:10.3389/fpsyg.2022.902569.

    Abstract

    Many perception and processing effects of the lexical status of tone have been found in behavioral, psycholinguistic, and neuroscientific research, often pitting varieties of tonal Chinese against non-tonal Germanic languages. While the linguistic and cognitive evidence for lexical tone is therefore beyond dispute, the word prosodic systems of many languages continue to escape the categorizations of typologists. One controversy concerns the existence of a typological class of “pitch accent languages,” another the underlying phonological nature of surface tone contrasts, which in some cases have been claimed to be metrical rather than tonal. We address the question whether the Sequence Recall Task (SRT), which has been shown to discriminate between languages with and without word stress, can distinguish languages with and without lexical tone. Using participants from non-tonal Indonesian, semi-tonal Swedish, and two varieties of tonal Mandarin, we ran SRTs with monosyllabic tonal contrasts to test the hypothesis that high performance in a tonal SRT indicates the lexical status of tone. An additional question concerned the extent to which accuracy scores depended on phonological and phonetic properties of a language’s tone system, like its complexity, the existence of an experimental contrast in a language’s phonology, and the phonetic salience of a contrast. The results suggest that a tonal SRT is not likely to discriminate between tonal and non-tonal languages within a typologically varied group, because of the effects of specific properties of their tone systems. Future research should therefore address the first hypothesis with participants from otherwise similar tonal and non-tonal varieties of the same language, where results from a tonal SRT may make a useful contribution to the typological debate on word prosody.

    Additional information

    also published as book chapter (2023)
  • Haagen, T., Dona, L., Bosscha, S., Zamith, B., Koetschruyter, R., & Wijnholds, G. (2022). Noun Phrase and Verb Phrase Ellipsis in Dutch: Identifying Subject-Verb Dependencies with BERTje. Computational Linguistics in the Netherlands Journal, 12, 49-63.

    Abstract

    Previous research has set out to quantify the syntactic capacity of BERTje (the Dutch equivalent of BERT) in the context of phenomena such as control verb nesting and verb raising in Dutch. Another complex language phenomenon is ellipsis, where a constituent is omitted from a sentence and can be recovered using context. Like verb raising and control verb nesting, ellipsis is suitable for evaluating BERTje’s linguistic capacity since it requires the processing of syntactic and lexical cues to recover the elided phrases. This work outlines an approach to identifying subject-verb dependencies in Dutch sentences with verb phrase and noun phrase ellipsis using BERTje. The results inform us about BERTje’s capability of capturing syntactic information, and about its ability to capture ellipsis in particular. Understanding more about how computational models process ellipsis, and how that processing can be improved, is crucial for boosting the performance of language models, as natural language contains many instances of ellipsis. Using training data from Lassy, converted to contextualized embeddings using BERTje, a probe model is trained to identify subject-verb dependencies. The model is tested on sentences generated using a Context Free Grammar (CFG), which is designed to generate sentences containing ellipsis. These sentences are also converted to contextualized representations using BERTje. Results show that BERTje’s syntactic abilities are lacking, as shown by accuracy drops compared to baseline measures.
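
    The probing setup described here can be sketched generically: freeze the embeddings, train a small linear classifier on top, and read off accuracy. The sketch below substitutes random vectors with a planted linear signal for BERTje embeddings and Lassy data, so it illustrates only the probe mechanics, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen contextualized embeddings (in the paper: BERTje over
# Lassy sentences); here random vectors with a planted linear signal.
dim, n = 16, 400
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = (X @ w_true > 0).astype(float)   # 1 = pair forms a subject-verb dependency

# Linear probe: logistic regression trained by full-batch gradient descent
w = np.zeros(dim)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n         # gradient step on the logistic loss

accuracy = float((((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == y).mean())
```

    High probe accuracy indicates the embeddings linearly encode the dependency; the accuracy drops the paper reports on elliptical CFG sentences would correspond to this number falling toward the baseline.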

    Additional information

    direct link to journal
  • Habscheid, S., & Klein, W. (2012). Einleitung: Dinge und Maschinen in der Kommunikation. Zeitschrift für Literaturwissenschaft und Linguistik, 42(168), 8-12. Retrieved from http://www.uni-siegen.de/lili/ausgaben/2012/lili168.html?lang=de#einleitung.

    Abstract

    “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” (Weiser 1991, p. 94). This claim comes from a much-cited text by Mark Weiser, former Chief Technology Officer at the famous Xerox Palo Alto Research Center (PARC), where not only several major innovations in computing originated, but where fundamental anthropological insights into our dealings with technical artefacts were also gained. In a popular-science article entitled “The Computer for the 21st Century”, Weiser sketched in 1991 the vision of a future in which we no longer interact with a single PC at our desk; rather, in every room we are surrounded by hundreds of electronic devices that are inseparably embedded in everyday objects and have thus, as it were, “disappeared” into our material environment. Weiser was not concerned solely with the ubiquitous phenomenon known in media theory as the “transparency of media”, or, in more general theories of everyday experience, as the self-evident entanglement of human beings with things that are familiar to us in their meaning and practically “ready to hand”. Beyond this, Weiser’s vision aimed at augmenting our existing environment with computer-readable data and at integrating everyday practices all but seamlessly into the operations of such a ubiquitous network: in the world Weiser envisions, doors open for whoever wears a particular electronic badge, rooms greet by name the people who enter them, computer terminals adapt to the preferences of individual users, and so on (Weiser 1991, p. 99).
  • Habscheid, S., & Klein, W. (Eds.). (2012). Dinge und Maschinen in der Kommunikation [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 42(168).

  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies subjects were required to read Dutch sentences that in some cases contained a syntactic violation, in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to influence the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (2002). De koninklijke verloving tussen psychologie en neurowetenschap. De Psycholoog, 37, 107-113.
  • Hagoort, P., & Van Berkum, J. J. A. (2007). Beyond the sentence given. Philosophical Transactions of the Royal Society, Series B: Biological Sciences, 362, 801-811.

    Abstract

    A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which states that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. Since the domain of syntactic rules is the sentence, the implication of this idea is that language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, world knowledge, information about the speaker and semantic information from extra-linguistic domains such as co-speech gestures or the visual world. Here, we present results from recordings of event-related brain potentials that are inconsistent with this classical two-step model of language interpretation. Our data support a one-step model in which knowledge about the context and the world, concomitant information from other modalities, and the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. Underlying the one-step model is the immediacy assumption, according to which all available information will immediately be used to co-determine the interpretation of the speaker's message. Functional magnetic resonance imaging data that we collected indicate that Broca's area plays an important role in semantic unification. Language comprehension involves the rapid incorporation of information in a 'single unification space', coming from a broader range of cognitive domains than presupposed in the standard two-step model of interpretation.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2012). Het muzikale brein. Speling: Tijdschrift voor bezinning. Muziek als bron van bezieling, 64(1), 44-48.
  • Hagoort, P. (2012). Het sprekende brein. MemoRad, 17(1), 27-30.

    Abstract

    No species other than Homo sapiens has, over the course of its evolutionary history, developed a communication system in which a finite number of symbols, together with a set of rules for combining them, makes an infinite number of expressions possible. This natural language system enables members of our species to give thoughts an outward form and to exchange them with the social group and, through the invention of writing systems, with society as a whole. Speech and language are effective means of maintaining social cohesion in societies whose group size and complex social organization are such that this can no longer be achieved through grooming, the way in which our genetic neighbors, the Old World primates, foster social cohesion [1,2].
  • Hagoort, P., Brown, C. M., & Swaab, T. Y. (1995). Semantic deficits in right hemisphere patients. Brain and Language, 51, 161-163. doi:10.1006/brln.1995.1058.
  • Hald, L. A., Steenbeek-Planting, E. G., & Hagoort, P. (2007). The interaction of discourse context and world knowledge in online sentence comprehension: Evidence from the N400. Brain Research, 1146, 210-218. doi:10.1016/j.brainres.2007.02.054.

    Abstract

    In an ERP experiment we investigated how the recruitment and integration of world knowledge information relate to the integration of information within a current discourse context. Participants were presented with short discourse contexts which were followed by a sentence that contained a critical word that was correct or incorrect based on general world knowledge and the supporting discourse context, or was more or less acceptable based on the combination of general world knowledge and the specific local discourse context. Relative to the critical word in the correct world knowledge sentences following a neutral discourse, all other critical words elicited an N400 effect that began at about 300 ms after word onset. However, the magnitude of the N400 effect varied in a way that suggests an interaction between world knowledge and discourse context. The results indicate that both world knowledge and discourse context have an effect on sentence interpretation, but neither overrides the other.
  • Haller, S., Klarhoefer, M., Schwarzbach, J., Radue, E. W., & Indefrey, P. (2007). Spatial and temporal analysis of fMRI data on word and sentence reading. European Journal of Neuroscience, 26(7), 2074-2084. doi:10.1111/j.1460-9568.2007.05816.x.

    Abstract

    Written language comprehension at the word and the sentence level was analysed by the combination of spatial and temporal analysis of functional magnetic resonance imaging (fMRI). Spatial analysis was performed via general linear modelling (GLM). Concerning the temporal analysis, local differences in neurovascular coupling may confound a direct comparison of blood oxygenation level-dependent (BOLD) response estimates between regions. To avoid this problem, we parametrically varied linguistic task demands and compared only task-induced within-region BOLD response differences across areas. We reasoned that, in a hierarchical processing system, increasing task demands at lower processing levels induce delayed onset of higher-level processes in corresponding areas. The flow of activation is thus reflected in the size of task-induced delay increases. We estimated BOLD response delay and duration for each voxel and each participant by fitting a model function to the event-related average BOLD response. The GLM showed increasing activations with increasing linguistic demands dominantly in the left inferior frontal gyrus (IFG) and the left superior temporal gyrus (STG). The combination of spatial and temporal analysis allowed a functional differentiation of IFG subregions involved in written language comprehension. Ventral IFG region (BA 47) and STG subserve earlier processing stages than two dorsal IFG regions (BA 44 and 45). This is in accordance with the assumed early lexical semantic and late syntactic processing of these regions and illustrates the complementary information provided by spatial and temporal fMRI data analysis of the same data set.
  • Hammarström, H. (2012). [Review of Ferdinand von Mengden, Cardinal numerals: Old English from a cross-linguistic perspective]. Linguistic Typology, 16, 321-324. doi:10.1515/lity-2012-0010.
  • Hammarström, H., & van den Heuvel, W. (2012). Introduction to the LLM Special Issue 2012 on the History, contact and classification of Papuan languages. Language & Linguistics in Melanesia, 2012(Special Issue, Part 1), i-v.
  • Hammarström, H., & van den Heuvel, W. (Eds.). (2012). On the history, contact & classification of Papuan languages [Special Issue]. Language & Linguistics in Melanesia, 2012. Retrieved from http://www.langlxmelanesia.com/specialissues.htm.
  • Hammarström, H. (2012). Pronouns and the (Preliminary) Classification of Papuan languages. Language & Linguistics in Melanesia, 2012(Special Issue, Part 2), 428-539. Retrieved from http://www.langlxmelanesia.com/hammarstrom428-539.pdf.

    Abstract

    A series of articles by Ross (1995, 2001, 2005) uses pronoun similarities to gauge relatedness between various Papuan microgroups, arguing that the similarities could not be the result of chance or borrowing. I argue that a more appropriate manner of calculating chance gives a significantly different result: when cross-comparing a pool of languages, the prospects for chance matches of first and second person pronouns are very good. Using pronoun form data from over 3000 languages and over 300 language families inside and outside New Guinea, I show that there is, nevertheless, a tendency for Papuan pronouns to use certain consonants more often in 1P and 2P SG forms than in the rest of the world. This could reflect an underlying family. An alternative explanation is the established Papuan areal feature of having a small consonant inventory, which results in a higher functional load on the remaining consonants, which is, in turn, reflected in the enhanced popularity of certain consonants in pronouns of those languages. A test of surface forms (i.e., non-reconstructed forms) favours the latter explanation.
  • Hamshere, M. L., Segurado, R., Moskvina, V., Nikolov, I., Glaser, B., & Holmans, P. A. (2007). Large-scale linkage analysis of 1302 affected relative pairs with rheumatoid arthritis. BMC Proceedings, 1 (Suppl 1), S100.

    Abstract

    Rheumatoid arthritis is the most common systemic autoimmune disease and its etiology is believed to have both strong genetic and environmental components. We demonstrate the utility of including genetic and clinical phenotypes as covariates within a linkage analysis framework to search for rheumatoid arthritis susceptibility loci. The raw genotypes of 1302 affected relative pairs were combined from four large family-based samples (North American Rheumatoid Arthritis Consortium, United Kingdom, European Consortium on Rheumatoid Arthritis Families, and Canada). The familiality of the clinical phenotypes was assessed. The affected relative pairs were subjected to autosomal multipoint affected relative-pair linkage analysis. Covariates were included in the linkage analysis to take account of heterogeneity within the sample. Evidence of familiality was observed with age at onset (p < 0.001) and rheumatoid factor (RF) IgM (p < 0.001), but not definite erosions (p = 0.21). Genome-wide significant evidence for linkage was observed on chromosome 6. Genome-wide suggestive evidence for linkage was observed on chromosomes 13 and 20 when conditioning on age at onset, chromosome 15 conditional on gender, and chromosome 19 conditional on RF IgM after allowing for multiple testing of covariates.
  • Hanique, I., & Ernestus, M. (2012). The role of morphology in acoustic reduction. Lingue e linguaggio, 2012(2), 147-164. doi:10.1418/38783.

    Abstract

    This paper examines the role of morphological structure in the reduced pronunciation of morphologically complex words by discussing and re-analyzing data from the literature. Acoustic reduction refers to the phenomenon that, in spontaneous speech, phonemes may be shorter or absent. We review studies investigating effects of the repetition of a morpheme, of whether a segment plays a crucial role in the identification of its morpheme, and of a word's morphological decomposability. We conclude that these studies report either no effects of morphological structure or effects that are open to alternative interpretations. Our analysis also reveals the need for a uniform definition of morphological decomposability. Furthermore, we examine whether the reduction of segments in morphologically complex words correlates with these segments' contribution to the identification of the whole word, and discuss previous studies and new analyses supporting this hypothesis. We conclude that the data show no convincing evidence that morphological structure conditions reduction, which contrasts with the expectations of several models of speech production and of morphological processing (e.g., WEAVER++ and dual-route models). The data collected so far support psycholinguistic models which assume that all morphologically complex words are processed as complete units.
  • Hanulikova, A., Dediu, D., Fang, Z., Basnakova, J., & Huettig, F. (2012). Individual differences in the acquisition of a complex L2 phonology: A training study. Language Learning, 62(Supplement S2), 79-109. doi:10.1111/j.1467-9922.2012.00707.x.

    Abstract

    Many learners of a foreign language (L2) struggle to correctly pronounce newly-learned speech sounds, yet many others achieve this with apparent ease. Here we explored how a training study of learning complex consonant clusters at the very onset of L2 acquisition can inform us about L2 learning in general and individual differences in particular. To this end, adult Dutch native speakers were trained on Slovak words with complex consonant clusters (e.g., pstruh /pstrux/ ‘trout’, štvrť /ʃtvrc/ ‘quarter’) using auditory and orthographic input. In the same session following training, participants were tested on a battery of L2 perception and production tasks. The battery of L2 tests was repeated twice more with one week between each session. In the first session, an additional battery of control tests was used to test participants’ native language (L1) skills. Overall, in line with some previous research, participants showed only weak learning effects across the L2 perception tasks. However, there were considerable individual differences across all L2 tasks, which remained stable across sessions. Only two participants showed overall high L2 production performance that fell within 2 standard deviations of the mean ratings obtained for an L1 speaker. The mispronunciation detection task was the only perception task which significantly predicted production performance in the final session. We conclude by discussing several recommendations for future L2 learning studies.
  • Hanulikova, A., & Weber, A. (2012). Sink positive: Linguistic experience with th substitutions influences nonnative word recognition. Attention, Perception & Psychophysics, 74(3), 613-629. doi:10.3758/s13414-011-0259-7.

    Abstract

    We used eyetracking, perceptual discrimination, and production tasks to examine the influences of perceptual similarity and linguistic experience on word recognition in nonnative (L2) speech. Eye movements to printed words were tracked while German and Dutch learners of English heard words containing one of three pronunciation variants (/t/, /s/, or /f/) of the interdental fricative /θ/. Irrespective of whether the speaker was Dutch or German, looking preferences for target words with /θ/ matched the preferences for producing /s/ variants in German speakers and /t/ variants in Dutch speakers (as determined via the production task), while a control group of English participants showed no such preferences. The perceptually most similar and most confusable /f/ variant (as determined via the discrimination task) was never preferred as a match for /θ/. These results suggest that linguistic experience with L2 pronunciations facilitates recognition of variants in an L2, with effects of frequency outweighing effects of perceptual similarity.
  • Hanulikova, A., Van Alphen, P. M., Van Goch, M. M., & Weber, A. (2012). When one person’s mistake is another’s standard usage: The effect of foreign accent on syntactic processing. Journal of Cognitive Neuroscience, 24(4), 878-887. doi:10.1162/jocn_a_00103.

    Abstract

    How do native listeners process grammatical errors that are frequent in non-native speech? We investigated whether the neural correlates of syntactic processing are modulated by speaker identity. ERPs to gender agreement errors in sentences spoken by a native speaker were compared with the same errors spoken by a non-native speaker. In line with previous research, gender violations in native speech resulted in a P600 effect (larger P600 for violations in comparison with correct sentences), but when the same violations were produced by the non-native speaker with a foreign accent, no P600 effect was observed. Control sentences with semantic violations elicited comparable N400 effects for both the native and the non-native speaker, confirming no general integration problem in foreign-accented speech. The results demonstrate that the P600 is modulated by speaker identity, extending our knowledge about the role of speaker's characteristics on neural correlates of speech processing.
  • Härle, M., Dobel, C., Cohen, R., & Rockstroh, B. (2002). Brain activity during syntactic and semantic processing - a magnetoencephalographic study. Brain Topography, 15(1), 3-11. doi:10.1023/A:1020070521429.

    Abstract

    Drawings of objects were presented in series of 54 each to 14 German speaking subjects with the tasks to indicate by button presses a) whether the grammatical gender of an object name was masculine ("der") or feminine ("die") and b) whether the depicted object was man-made or nature-made. The magnetoencephalogram (MEG) was recorded with a whole-head neuromagnetometer and task-specific patterns of brain activity were determined in the source space (Minimum Norm Estimates, MNE). A left-temporal focus of activity 150-275 ms after stimulus onset in the gender decision compared to the semantic classification task was discussed as indicating the retrieval of syntactic information, while a more expanded left hemispheric activity in the gender relative to the semantic task 300-625 ms after stimulus onset was discussed as indicating phonological encoding. A predominance of activity in the semantic task was observed over right fronto-central region 150-225 ms after stimulus-onset, suggesting that semantic and syntactic processes are prominent in this stage of lexical selection.
  • Hartz, S. M., Short, S. E., Saccone, N. L., Culverhouse, R., Chen, L., Schwantes-An, T.-H., Coon, H., Han, Y., Stephens, S. H., Sun, J., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Geller, F., Gubjartsson, D., Hansel, N. N., Jiang, C., Keskitalo-Vuokko, K., Liu, Z., Lyytikainen, L.-P., Michel, M., Rawal, R., Rosenberger, A., Scheet, P., Shaffer, J. R., Teumer, A., Thompson, J. R., Vink, J. M., Vogelzangs, N., Wenzlaff, A. S., Wheeler, W., Xiao, X., Yang, B.-Z., Aggen, S. H., Balmforth, A. J., Baumeister, S. E., Beaty, T., Bennett, S., Bergen, A. W., Boyd, H. A., Broms, U., Campbell, H., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D. M., Foroud, T., Furberg, H., Giegling, I., Gu, F., Hall, A. S., Hallfors, J., Han, S., Hartmann, A. M., Hayward, C., Heikkila, K., Hewitt, J. K., Hottenga, J. J., Jensen, M. K., Jousilahti, P., Kaakinen, M., Kittner, S. J., Konte, B., Korhonen, T., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S. M., Mathias, R. A., McNeil, D. W., Medland, S. E., Montgomery, G. W., Muley, T., Murray, T., Nauck, M., North, K., Pergadia, M., Polasek, O., Ramos, E. M., Ripatti, S., Risch, A., Ruczinski, I., Rudan, I., Salomaa, V., Schlessinger, D., Styrkarsdottir, U., Terracciano, A., Uda, M., Willemsen, G., Wu, X., Abecasis, G., Barnes, K., Bickeboller, H., Boerwinkle, E., Boomsma, D. I., Caporaso, N., Duan, J., Edenberg, H. J., Francks, C., Gejman, P. V., Gelernter, J., Grabe, H. J., Hops, H., Jarvelin, M.-R., Viikari, J., Kahonen, M., Kendler, K. S., Lehtimaki, T., Levinson, D. F., Marazita, M. L., Marchini, J., Melbye, M., Mitchell, B., Murray, J. C., Nothen, M. M., Penninx, B. W., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N. J., Sanders, A. R., Schwartz, A. G., Shete, S., Shi, J., Spitz, M., Stefansson, K., Swan, G. E., Thorgeirsson, T., Volzke, H., Wei, Q., Wichmann, H.-E., Amos, C. I., Breslau, N., Cannon, D. S., Ehringer, M., Grucza, R., Hatsukami, D., Heath, A., Johnson, E. O., Kaprio, J., Madden, P., Martin, N. G., Stevens, V. L., Stitzel, J. A., Weiss, R. B., Kraft, P., & Bierut, L. J. (2012). Increased genetic vulnerability to smoking at CHRNA5 in early-onset smokers. Archives of General Psychiatry, 69, 854-860. doi:10.1001/archgenpsychiatry.2012.124.

    Abstract

    CONTEXT Recent studies have shown an association between cigarettes per day (CPD) and a nonsynonymous single-nucleotide polymorphism in CHRNA5, rs16969968. OBJECTIVE To determine whether the association between rs16969968 and smoking is modified by age at onset of regular smoking. DATA SOURCES Primary data. STUDY SELECTION Available genetic studies containing measures of CPD and the genotype of rs16969968 or its proxy. DATA EXTRACTION Uniform statistical analysis scripts were run locally. Starting with 94 050 ever-smokers from 43 studies, we extracted the heavy smokers (CPD >20) and light smokers (CPD ≤10) with age-at-onset information, reducing the sample size to 33 348. Each study was stratified into early-onset smokers (age at onset ≤16 years) and late-onset smokers (age at onset >16 years), and a logistic regression of heavy vs light smoking with the rs16969968 genotype was computed for each stratum. Meta-analysis was performed within each age-at-onset stratum. DATA SYNTHESIS Individuals with 1 risk allele at rs16969968 who were early-onset smokers were significantly more likely to be heavy smokers in adulthood (odds ratio [OR] = 1.45; 95% CI, 1.36-1.55; n = 13 843) than were carriers of the risk allele who were late-onset smokers (OR = 1.27; 95% CI, 1.21-1.33, n = 19 505) (P = .01). CONCLUSION These results highlight an increased genetic vulnerability to smoking in early-onset smokers.

  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2012). Majority-biased transmission in chimpanzees and human children, but not orangutans. Current Biology, 22, 727-731. doi:10.1016/j.cub.2012.03.006.

    Abstract

    Cultural transmission is a key component of human evolution. Two of humans' closest living relatives, chimpanzees and orangutans, have also been argued to transmit behavioral traditions across generations culturally [1, 2, 3], but how much the process might resemble the human process is still in large part unknown. One key phenomenon of human cultural transmission is majority-biased transmission: the increased likelihood for learners to end up not with the most frequent behavior but rather with the behavior demonstrated by most individuals. Here we show that chimpanzees and human children as young as 2 years of age, but not orangutans, are more likely to copy an action performed by three individuals, once each, than an action performed by one individual three times. The tendency to acquire the behaviors of the majority has been posited as key to the transmission of relatively safe, reliable, and productive behavioral strategies [4, 5, 6, 7] but has not previously been demonstrated in primates.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Heesen, R., Fröhlich, M., Sievers, C., Woensdregt, M., & Dingemanse, M. (2022). Coordinating social action: A primer for the cross-species investigation of communicative repair. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210110. doi:10.1098/rstb.2021.0110.

    Abstract

    Human joint action is inherently cooperative, manifested in the collaborative efforts of participants to minimize communicative trouble through interactive repair. Although interactive repair requires sophisticated cognitive abilities, it can be dissected into basic building blocks shared with non-human animal species. A review of the primate literature shows that interactionally contingent signal sequences are at least common among species of non-human great apes, suggesting a gradual evolution of repair. To pioneer a cross-species assessment of repair, this paper aims at (i) identifying necessary precursors of human interactive repair; (ii) proposing a coding framework for its comparative study in humans and non-human species; and (iii) using this framework to analyse examples of interactions of humans (adults/children) and non-human great apes. We hope this paper will serve as a primer for cross-species comparisons of communicative breakdowns and how they are repaired.
  • Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & De Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences of the United States of America, 119(32): e2201968119. doi:10.1073/pnas.2201968119.

    Abstract

    Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.

  • Hersh, T. A., Gero, S., Rendell, L., Cantor, M., Weilgart, L., Amano, M., Dawson, S. M., Slooten, E., Johnson, C. M., Kerr, I., Payne, R., Rogan, A., Antunes, R., Andrews, O., Ferguson, E. L., Hom-Weaver, C. A., Norris, T. F., Barkley, Y. M., Merkens, K. P., Oleson, E. M., Doniol-Valcroze, T., Pilkington, J. F., Gordon, J., Fernandes, M., Guerra, M., Hickmott, L., & Whitehead, H. (2022). Evidence from sperm whale clans of symbolic marking in non-human cultures. Proceedings of the National Academy of Sciences of the United States of America, 119(37): e2201692119. doi:10.1073/pnas.2201692119.

    Abstract

    Culture, a pillar of the remarkable ecological success of humans, is increasingly recognized as a powerful force structuring nonhuman animal populations. A key gap between these two types of culture is quantitative evidence of symbolic markers—seemingly arbitrary traits that function as reliable indicators of cultural group membership to conspecifics. Using acoustic data collected from 23 Pacific Ocean locations, we provide quantitative evidence that certain sperm whale acoustic signals exhibit spatial patterns consistent with a symbolic marker function. Culture segments sperm whale populations into behaviorally distinct clans, which are defined based on dialects of stereotyped click patterns (codas). We classified 23,429 codas into types using contaminated mixture models and hierarchically clustered coda repertoires into seven clans based on similarities in coda usage; then we evaluated whether coda usage varied with geographic distance within clans or with spatial overlap between clans. Similarities in within-clan usage of both “identity codas” (coda types diagnostic of clan identity) and “nonidentity codas” (coda types used by multiple clans) decrease as space between repertoire recording locations increases. However, between-clan similarity in identity, but not nonidentity, coda usage decreases as clan spatial overlap increases. This matches expectations if sympatry is related to a measurable pressure to diversify to make cultural divisions sharper, thereby providing evidence that identity codas function as symbolic markers of clan identity. Our study provides quantitative evidence of arbitrary traits, resembling human ethnic markers, conveying cultural identity outside of humans, and highlights remarkable similarities in the distributions of human ethnolinguistic groups and sperm whale clans.
  • Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.

    Abstract

    Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing, and enhances brain responses, as indexed by the blood-oxygenation level dependent signal (BOLD), to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.
  • Hervais-Adelman, A., Carlyon, R. P., Johnsrude, I. S., & Davis, M. H. (2012). Brain regions recruited for the effortful comprehension of noise-vocoded words. Language and Cognitive Processes, 27(7-8), 1145-1166. doi:10.1080/01690965.2012.662280.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to investigate the neural basis of comprehension and perceptual learning of artificially degraded [noise vocoded (NV)] speech. Fifteen participants were scanned while listening to 6-channel vocoded words, which are difficult for naive listeners to comprehend, but can be readily learned with appropriate feedback presentations. During three test blocks, we compared responses to potentially intelligible NV words, incomprehensible distorted words and clear speech. Training sessions were interleaved with the test sessions and included paired presentation of clear then noise-vocoded words: a type of feedback that enhances perceptual learning. Listeners' comprehension of NV words improved significantly as a consequence of training. Listening to NV compared to clear speech activated left insula, and prefrontal and motor cortices. These areas, which are implicated in speech production, may play an active role in supporting the comprehension of degraded speech. Elevated activation in the precentral gyrus during paired clear-then-distorted presentations that enhance learning further suggests a role for articulatory representations of speech in perceptual learning of degraded speech.
  • Hickman, L. J., Keating, C. T., Ferrari, A., & Cook, J. L. (2022). Skin conductance as an index of alexithymic traits in the general population. Psychological Reports, 125(3), 1363-1379. doi:10.1177/00332941211005118.

    Abstract

    Alexithymia concerns a difficulty identifying and communicating one’s own emotions, and a tendency towards externally-oriented thinking. Recent work argues that such alexithymic traits are due to altered arousal response and poor subjective awareness of “objective” arousal responses. Although there are individual differences within the general population in identifying and describing emotions, extant research has focused on highly alexithymic individuals. Here we investigated whether mean arousal and concordance between subjective and objective arousal underpin individual differences in alexithymic traits in a general population sample. Participants rated subjective arousal responses to 60 images from the International Affective Picture System whilst their skin conductance was recorded. The Autism Quotient was employed to control for autistic traits in the general population. Analysis using linear models demonstrated that mean arousal significantly predicted Toronto Alexithymia Scale scores above and beyond autistic traits, but concordance scores did not. This indicates that, whilst objective arousal is a useful predictor in populations that are both above and below the cut-off values for alexithymia, concordance scores between objective and subjective arousal do not predict variation in alexithymic traits in the general population.
  • Hoeks, J. C. J., Vonk, W., & Schriefers, H. (2002). Processing coordinated structures in context: The effect of topic-structure on ambiguity resolution. Journal of Memory and Language, 46(1), 99-119. doi:10.1006/jmla.2001.2800.

    Abstract

    When a sentence such as The model embraced the designer and the photographer laughed is read, the noun phrase the photographer is temporarily ambiguous: It can be either one of the objects of embraced (NP-coordination) or the subject of a new, conjoined sentence (S-coordination). It has been shown for a number of languages, including Dutch (the language used in this study), that readers prefer NP-coordination over S-coordination, at least in isolated sentences. In the present paper, it will be suggested that NP-coordination is preferred because it is the simpler of the two options in terms of topic-structure; in NP-coordinations there is only one topic, whereas S-coordinations contain two. Results from off-line (sentence completion) and online studies (a self-paced reading and an eye tracking experiment) support this topic-structure explanation. The processing difficulty associated with S-coordinated sentences disappeared when these sentences followed contexts favoring a two-topic continuation. This finding establishes topic-structure as an important factor in online sentence processing.
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

    Additional information

    supporting information
  • Holler, J. (2022). Visual bodily signals as core devices for coordinating minds in interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210094. doi:10.1098/rstb.2021.0094.

    Abstract

    The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed—and survived—owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or its precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine.
  • Holler, J., Bavelas, J., Woods, J., Geiger, M., & Simons, L. (2022). Given-new effects on the duration of gestures and of words in face-to-face dialogue. Discourse Processes, 59(8), 619-645. doi:10.1080/0163853X.2022.2107859.

    Abstract

    The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of co-speech gestures. Here, two face-to-face dialogue experiments tested whether gesture duration also follows the given-new contract. In Experiment 1, four experimental sequences confirmed that when speakers repeated their gestures, they shortened the duration significantly. Experiment 2 replicated the effect with spontaneous gestures in a different task. This experiment also extended earlier results with words, confirming that speakers shortened their repeated words significantly in a multimodal dialogue setting, the basic form of language use. Because words and gestures were not necessarily redundant, these results offer another instance in which gestures and words independently serve pragmatic requirements of dialogue.
  • Holler, J., & Beattie, G. (2002). A micro-analytic investigation of how iconic gestures and speech represent core semantic features in talk. Semiotica, 142, 31-69.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Stevens, R. (2007). The effect of common ground on how speakers use gesture and speech to represent size information. Journal of Language and Social Psychology, 26, 4-27.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Hoogman, M., Van Rooij, D., Klein, M., Boedhoe, P., Ilioska, I., Li, T., Patel, Y., Postema, M., Zhang-James, Y., Anagnostou, E., Arango, C., Auzias, G., Banaschewski, T., Bau, C. H. D., Behrmann, M., Bellgrove, M. A., Brandeis, D., Brem, S., Busatto, G. F., Calderoni, S., Calvo, R., Castellanos, F. X., Coghill, D., Conzelmann, A., Daly, E., Deruelle, C., Dinstein, I., Durston, S., Ecker, C., Ehrlich, S., Epstein, J. N., Fair, D. A., Fitzgerald, J., Freitag, C. M., Frodl, T., Gallagher, L., Grevet, E. H., Haavik, J., Hoekstra, P. J., Janssen, J., Karkashadze, G., King, J. A., Konrad, K., Kuntsi, J., Lazaro, L., Lerch, J. P., Lesch, K.-P., Louza, M. R., Luna, B., Mattos, P., McGrath, J., Muratori, F., Murphy, C., Nigg, J. T., Oberwelland-Weiss, E., O'Gorman Tuura, R. L., O'Hearn, K., Oosterlaan, J., Parellada, M., Pauli, P., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Retico, A., Rosa, P. G. P., Rubia, K., Shaw, P., Silk, T. J., Tamm, L., Vilarroya, O., Walitza, S., Jahanshad, N., Faraone, S. V., Francks, C., Van den Heuvel, O. A., Paus, T., Thompson, P. M., Buitelaar, J. K., & Franke, B. (2022). Consortium neuroscience of attention deficit/hyperactivity disorder and autism spectrum disorder: The ENIGMA adventure. Human Brain Mapping, 43(1), 37-55. doi:10.1002/hbm.25029.

    Abstract

    Neuroimaging has been extensively used to study brain structure and function in individuals with attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) over the past decades. Two of the main shortcomings of the neuroimaging literature of these disorders are the small sample sizes employed and the heterogeneity of methods used. In 2013 and 2014, the ENIGMA-ADHD and ENIGMA-ASD working groups were founded, respectively, with a common goal to address these limitations. Here, we provide a narrative review of the thus far completed and still ongoing projects of these working groups. Due to an implicitly hierarchical psychiatric diagnostic classification system, the fields of ADHD and ASD have developed largely in isolation, despite the considerable overlap in the occurrence of the disorders. The collaboration between the ENIGMA-ADHD and -ASD working groups seeks to bring the neuroimaging efforts of the two disorders closer together. The outcomes of case–control studies of subcortical and cortical structures showed that subcortical volumes are similarly affected in ASD and ADHD, albeit with small effect sizes. Cortical analyses identified unique differences in each disorder, but also considerable overlap between the two, specifically in cortical thickness. Ongoing work is examining alternative research questions, such as brain laterality, prediction of case–control status, and anatomical heterogeneity. In brief, great strides have been made toward fulfilling the aims of the ENIGMA collaborations, while new ideas and follow-up analyses continue that include more imaging modalities (diffusion MRI and resting-state functional MRI), collaborations with other large databases, and samples with dual diagnoses.
  • Hoogman, M., Rijpkema, M., Janss, L., Brunner, H., Fernandez, G., Buitelaar, J., Franke, B., & Arias-Vásquez, A. (2012). Current self-reported symptoms of attention deficit/hyperactivity disorder are associated with total brain volume in healthy adults. PLoS One, 7(2), e31273. doi:10.1371/journal.pone.0031273.

    Abstract

    Background: Reduced total brain volume is a consistent finding in children with Attention Deficit/Hyperactivity Disorder (ADHD). In order to get a better understanding of the neurobiology of ADHD, we take the first step in studying the dimensionality of current self-reported adult ADHD symptoms, by looking at its relation with total brain volume. Methodology/Principal Findings: In a sample of 652 highly educated adults, the association between total brain volume, assessed with magnetic resonance imaging, and current number of self-reported ADHD symptoms was studied. The results showed an association between these self-reported ADHD symptoms and total brain volume. Post-hoc analysis revealed that the symptom domain of inattention had the strongest association with total brain volume. In addition, the threshold for impairment coincides with the threshold for brain volume reduction. Conclusions/Significance: This finding improves our understanding of the biological substrates of self-reported ADHD symptoms, and suggests total brain volume as a target intermediate phenotype for future gene-finding in ADHD.
  • Hoogman, M., Weisfelt, M., van de Beek, D., de Gans, J., & Schmand, B. (2007). Cognitive outcome in adults after bacterial meningitis. Journal of Neurology, Neurosurgery & Psychiatry, 78, 1092-1096. doi:10.1136/jnnp.2006.110023.

    Abstract

    Objective: To evaluate cognitive outcome in adult survivors of bacterial meningitis. Methods: Data from three prospective multicentre studies were pooled and reanalysed, involving 155 adults surviving bacterial meningitis (79 after pneumococcal and 76 after meningococcal meningitis) and 72 healthy controls. Results: Cognitive impairment was found in 32% of patients and this proportion was similar for survivors of pneumococcal and meningococcal meningitis. Survivors of pneumococcal meningitis performed worse on memory tasks (p<0.001) and tended to be cognitively slower than survivors of meningococcal meningitis (p = 0.08). We found a diffuse pattern of cognitive impairment in which cognitive speed played the most important role. Cognitive performance was not related to time since meningitis; however, there was a positive association between time since meningitis and self-reported physical impairment (p<0.01). The frequency of cognitive impairment and the numbers of abnormal test results for patients with and without adjunctive dexamethasone were similar. Conclusions: Adult survivors of bacterial meningitis are at risk of cognitive impairment, which consists mainly of cognitive slowness. The loss of cognitive speed is stable over time after bacterial meningitis; however, there is a significant improvement in subjective physical impairment in the years after bacterial meningitis. The use of dexamethasone was not associated with cognitive impairment.
  • Hribar, A., Haun, D. B. M., & Call, J. (2012). Children’s reasoning about spatial relational similarity: The effect of alignment and relational complexity. Journal of Experimental Child Psychology, 111, 490-500. doi:10.1016/j.jecp.2011.11.004.

    Abstract

    We investigated 4- and 5-year-old children’s mapping strategies in a spatial task. Children were required to find a picture in an array of three identical cups after observing another picture being hidden in another array of three cups. The arrays were either aligned one behind the other in two rows or placed side by side forming one line. Moreover, children were rewarded for two different mapping strategies. Half of the children needed to choose a cup that held the same relative position as the rewarded cup in the other array; they needed to map left–left, middle–middle, and right–right cups together (aligned mapping), which required encoding and mapping of two relations (e.g., the cup left of the middle cup and left of the right cup). The other half needed to map together the cups that held the same relation to the table’s spatial features—the cups at the edges, the middle cups, and the cups in the middle of the table (landmark mapping)—which required encoding and mapping of one relation (e.g., the cup at the table’s edge). Results showed that children’s success was constellation dependent; performance was higher when the arrays were aligned one behind the other in two rows than when they were placed side by side. Furthermore, children showed a preference for landmark mapping over aligned mapping.
  • Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi:10.1016/j.jml.2007.02.001.

    Abstract

    Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, `beaker', for example, the display contained phonological (a beaver, bever), shape (a bobbin, klos), and semantic (a fork, vork) competitors. When the display appeared at sentence onset, fixations to phonological competitors preceded fixations to shape and semantic competitors. When display onset was 200 ms before (e.g.) beker, fixations were directed to shape and then semantic competitors, but not phonological competitors. In Experiments 3 and 4, displays contained the printed names of the previously-pictured entities; only phonological competitors were fixated preferentially. These findings suggest that retrieval of phonological, shape and semantic knowledge in the spoken-word and picture-recognition systems is cascaded, and that visual attention shifts are co-determined by the time-course of retrieval of all three knowledge types and by the nature of the information in the visual environment.
  • Huettig, F., & Altmann, G. T. M. (2007). Visual-shape competition during language-mediated attention is based on lexical input and not modulated by contextual appropriateness. Visual Cognition, 15(8), 985-1018. doi:10.1080/13506280601130875.

    Abstract

    Visual attention can be directed immediately, as a spoken word unfolds, towards conceptually related but nonassociated objects, even if they mismatch on other dimensions that would normally determine which objects in the scene were appropriate referents for the unfolding word (Huettig & Altmann, 2005). Here we demonstrate that the mapping between language and concurrent visual objects can also be mediated by visual-shape relations. On hearing "snake", participants directed overt attention immediately, within a visual display depicting four objects, to a picture of an electric cable, although participants had viewed the visual display with four objects for approximately 5 s before hearing the target word - sufficient time to recognize the objects for what they were. The time spent fixating the cable correlated significantly with ratings of the visual similarity between snakes in general and this particular cable. Importantly, with sentences contextually biased towards the concept snake, participants looked at the snake well before the onset of "snake", but they did not look at the visually similar cable until hearing "snake". Finally, we demonstrate that such activation can, under certain circumstances (e.g., during the processing of dominant meanings of homonyms), constrain the direction of visual attention even when it is clearly contextually inappropriate. We conclude that language-mediated attention can be guided by a visual match between spoken words and visual objects, but that such a match is based on lexical input and may not be modulated by contextual appropriateness.
  • Huettig, F., Audring, J., & Jackendoff, R. (2022). A parallel architecture perspective on pre-activation and prediction in language processing. Cognition, 224: 105050. doi:10.1016/j.cognition.2022.105050.

    Abstract

    A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on viewing prediction in terms of pre-activation. We describe what predictions are and how they are produced. Our basic premises are that (a) no prediction can be made without knowledge to support it; and (b) it is therefore necessary to characterize the precise form of that knowledge, as revealed by a suitable theory of linguistic representations. We describe the Parallel Architecture (PA: Jackendoff, 2002; Jackendoff and Audring, 2020), which makes explicit our commitments about linguistic representations, and we develop an account of processing based on these representations. Crucial to our account is that what have been traditionally treated as derivational rules of grammar are formalized by the PA as lexical items, encoded in the same format as words. We then present a theory of prediction in these terms: linguistic input activates lexical items whose beginning (or incipit) corresponds to the input encountered so far; and prediction amounts to pre-activation of the as yet unheard parts of those lexical items (the remainder). Thus the generation of predictions is a natural byproduct of processing linguistic representations. We conclude that the PA perspective on pre-activation provides a plausible account of prediction in language processing that bridges linguistic and psycholinguistic theorizing.
  • Huettig, F., Mishra, R. K., & Olivers, C. N. (2012). Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology, 2, 394. doi:10.3389/fpsyg.2011.00394.

    Abstract

    The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry and the testing of distinct, non-student, participant populations will prove to be critical.
  • Huizeling, E., Arana, S., Hagoort, P., & Schoffelen, J.-M. (2022). Lexical frequency and sentence context influence the brain’s response to single words. Neurobiology of Language, 3(1), 149-179. doi:10.1162/nol_a_00054.

    Abstract

    Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated as to whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left front-temporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index*frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150ms) and late stages of word processing, but interact during later stages of word processing (>150-250ms), thus helping to converge previous contradictory eye-tracking and electrophysiological literature. Current neuro-cognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
  • Huizeling, E., Peeters, D., & Hagoort, P. (2022). Prediction of upcoming speech under fluent and disfluent conditions: Eye tracking evidence from immersive virtual reality. Language, Cognition and Neuroscience, 37(4), 481-508. doi:10.1080/23273798.2021.1994621.

    Abstract

    Traditional experiments indicate that prediction is important for efficient speech processing. In three virtual reality visual world paradigm experiments, we tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2–3). Experiment 1 supports that listeners predict upcoming speech in naturalistic environments, with higher proportions of anticipatory target fixations in predictable compared to unpredictable trials. In Experiments 2–3, disfluencies reduced anticipatory fixations towards predicted referents, compared to conjunction (Experiment 2) and fluent (Experiment 3) sentences. Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb. Experiment 3 provided novel findings that fixations towards the speaker increase upon hearing a hesitation, supporting current theories of how hesitations influence sentence processing. Together, these findings unpack listeners’ use of visual (objects/speaker) and auditory (speech/disfluencies) information when predicting upcoming words.
  • Huttar, G. L., Essegbey, J., & Ameka, F. K. (2007). Gbe and other West African sources of Suriname creole semantic structures: Implications for creole genesis. Journal of Pidgin and Creole Languages, 22(1), 57-72. doi:10.1075/jpcl.22.1.05hut.

    Abstract

    This paper reports on ongoing research on the role of various kinds of potential substrate languages in the development of the semantic structures of Ndyuka (Eastern Suriname Creole). A set of 100 senses of noun, verb, and other lexemes in Ndyuka were compared with senses of corresponding lexemes in three kinds of languages of the former Slave Coast and Gold Coast areas, and immediately adjoining hinterland: (a) Gbe languages; (b) other Kwa languages, specifically Akan and Ga; (c) non-Kwa Niger-Congo languages. The results of this process provide some evidence for the importance of the Gbe languages in the formation of the Suriname creoles, but also for the importance of other languages, and for the areal nature of some of the collocations studied, rendering specific identification of a single substrate source impossible and inappropriate. These results not only provide information about the role of Gbe and other languages in the formation of Ndyuka, but also give evidence for effects of substrate languages spoken by late arrivals some time after the "founders" of a given creole-speaking society. The conclusions are extrapolated beyond Suriname to creole genesis generally.
  • IJzerman, H., Gallucci, M., Pouw, W., Weißgerber, S. C., Van Doesum, N. J., & Williams, K. D. (2012). Cold-blooded loneliness: Social exclusion leads to lower skin temperatures. Acta Psychologica, 140(3), 283-288. doi:10.1016/j.actpsy.2012.05.002.

    Abstract

    Being ostracized or excluded, even briefly and by strangers, is painful and threatens fundamental needs. Recent work by Zhong and Leonardelli (2008) found that excluded individuals perceive the room as cooler and that they desire warmer drinks. A perspective that many rely on in embodiment is the theoretical idea that people use metaphorical associations to understand social exclusion (see Landau, Meier, & Keefer, 2010). We suggest that people feel colder because they are colder. The results strongly support the idea that more complex metaphorical understandings of social relations are scaffolded onto literal changes in bodily temperature: Being excluded in an online ball tossing game leads to lower finger temperatures (Study 1), while the negative affect typically experienced after such social exclusion is alleviated after holding a cup of warm tea (Study 2). The authors discuss further implications for the interaction between body and social relations specifically, and for basic and cognitive systems in general.
  • Ikram, M. A., Fornage, M., Smith, A. V., Seshadri, S., Schmidt, R., Debette, S., Vrooman, H. A., Sigurdsson, S., Ropele, S., Taal, H. R., Mook-Kanamori, D. O., Coker, L. H., Longstreth, W. T., Niessen, W. J., DeStefano, A. L., Beiser, A., Zijdenbos, A. P., Struchalin, M., Jack, C. R., Rivadeneira, F., Uitterlinden, A. G., Knopman, D. S., Hartikainen, A.-L., Pennell, C. E., Thiering, E., Steegers, E. A. P., Hakonarson, H., Heinrich, J., Palmer, L. J., Jarvelin, M.-R., McCarthy, M. I., Grant, S. F. A., St Pourcain, B., Timpson, N. J., Smith, G. D., Sovio, U., Nalls, M. A., Au, R., Hofman, A., Gudnason, H., van der Lugt, A., Harris, T. B., Meeks, W. M., Vernooij, M. W., van Buchem, M. A., Catellier, D., Jaddoe, V. W. V., Gudnason, V., Windham, B. G., Wolf, P. A., van Duijn, C. M., Mosley, T. H., Schmidt, H., Launer, L. J., Breteler, M. M. B., DeCarli, C., the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, & Early Growth Genetics (EGG) Consortium (2012). Common variants at 6q22 and 17q21 are associated with intracranial volume. Nature Genetics, 44(5), 539-544. doi:10.1038/ng.2245.

    Abstract

    During aging, intracranial volume remains unchanged and represents maximally attained brain size, while various interacting biological phenomena lead to brain volume loss. Consequently, intracranial volume and brain volume in late life reflect different genetic influences. Our genome-wide association study (GWAS) in 8,175 community-dwelling elderly persons did not reveal any associations at genome-wide significance (P < 5 × 10⁻⁸) for brain volume. In contrast, intracranial volume was significantly associated with two loci: rs4273712 (P = 3.4 × 10⁻¹¹), a known height-associated locus on chromosome 6q22, and rs9915547 (P = 1.5 × 10⁻¹²), localized to the inversion on chromosome 17q21. We replicated the associations of these loci with intracranial volume in a separate sample of 1,752 elderly persons (P = 1.1 × 10⁻³ for 6q22 and 1.2 × 10⁻³ for 17q21). Furthermore, we also found suggestive associations of the 17q21 locus with head circumference in 10,768 children (mean age of 14.5 months). Our data identify two loci associated with head size, with the inversion at 17q21 also likely to be involved in attaining maximal brain size.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Ioana, M., Ferwerda, B., Farjadian, S., Ioana, L., Ghaderi, A., Oosting, M., Joosten, L. A., Van der Meer, J. W., Romeo, G., Luiselli, D., Dediu, D., & Netea, M. G. (2012). High variability of TLR4 gene in different ethnic groups of Iran. Innate Immunity, 18, 492-502. doi:10.1177/1753425911423043.

    Abstract

    Infectious diseases exert a constant evolutionary pressure on the innate immunity genes. TLR4, an important member of the Toll-like receptors family, specifically recognizes conserved structures of various infectious pathogens. Two functional TLR4 polymorphisms, Asp299Gly and Thr399Ile, modulate innate host defense against infections, and their prevalence between various populations has been proposed to be influenced by local infectious pressures. If this assumption is true, strong local infectious pressures would lead to a homogeneous pattern of these ancient TLR4 polymorphisms in geographically close populations, while a weak selection or genetic drift may result in a diverse pattern. We evaluated TLR4 polymorphisms in 15 ethnic groups of Iran, to assess whether infections exerted selective pressures on different haplotypes containing these variants. The Iranian subpopulations displayed a heterogeneous pattern of TLR4 polymorphisms, comprising various percentages of Asp299Gly and Thr399Ile alone or in combination. The Iranian sample as a whole showed an intermediate mixed pattern when compared with commonly found patterns in Africa, Europe, Eastern Asia and the Americas. These findings suggest a weak or absent selection pressure on TLR4 polymorphisms in the Middle East, which does not support the assumption of an important role of these polymorphisms in the host defence against local pathogens.
  • Isbilen, E. S., Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2022). Statistically based chunking of nonadjacent dependencies. Journal of Experimental Psychology: General, 151(11), 2623-2640. doi:10.1037/xge0001207.

    Abstract

    How individuals learn complex regularities in the environment and generalize them to new instances is a key question in cognitive science. Although previous investigations have advocated the idea that learning and generalizing depend upon separate processes, the same basic learning mechanisms may account for both. In language learning experiments, these mechanisms have typically been studied in isolation of broader cognitive phenomena such as memory, perception, and attention. Here, we show how learning and generalization in language is embedded in these broader theories by testing learners on their ability to chunk nonadjacent dependencies—a key structure in language but a challenge to theories that posit learning through the memorization of structure. In two studies, adult participants were trained and tested on an artificial language containing nonadjacent syllable dependencies, using a novel chunking-based serial recall task involving verbal repetition of target sequences (formed from learned strings) and scrambled foils. Participants recalled significantly more syllables, bigrams, trigrams, and nonadjacent dependencies from sequences conforming to the language’s statistics (both learned and generalized sequences). They also encoded and generalized specific nonadjacent chunk information. These results suggest that participants chunk remote dependencies and rapidly generalize this information to novel structures. The results thus provide further support for learning-based approaches to language acquisition, and link statistical learning to broader cognitive mechanisms of memory.
  • Jaeger, E., Leedham, S., Lewis, A., Segditsas, S., Becker, M., Rodenas-Cuadrado, P., Davis, H., Kaur, K., Heinimann, K., Howarth, K., East, J., Taylor, J., Thomas, H., & Tomlinson, I. (2012). Hereditary mixed polyposis syndrome is caused by a 40-kb upstream duplication that leads to increased and ectopic expression of the BMP antagonist GREM1. Nature Genetics, 44, 699-703. doi:10.1038/ng.2263.

    Abstract

    Hereditary mixed polyposis syndrome (HMPS) is characterized by apparent autosomal dominant inheritance of multiple types of colorectal polyp, with colorectal carcinoma occurring in a high proportion of affected individuals. Here, we use genetic mapping, copy-number analysis, exclusion of mutations by high-throughput sequencing, gene expression analysis and functional assays to show that HMPS is caused by a duplication spanning the 3' end of the SCG5 gene and a region upstream of the GREM1 locus. This unusual mutation is associated with increased allele-specific GREM1 expression. Whereas GREM1 is expressed in intestinal subepithelial myofibroblasts in controls, GREM1 is predominantly expressed in the epithelium of the large bowel in individuals with HMPS. The HMPS duplication contains predicted enhancer elements; some of these interact with the GREM1 promoter and can drive gene expression in vitro. Increased GREM1 expression is predicted to cause reduced bone morphogenetic protein (BMP) pathway activity, a mechanism that also underlies tumorigenesis in juvenile polyposis of the large bowel.
  • Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.

    Abstract

    In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2007). Coping with gradient forms of /t/-deletion and lexical ambiguity in spoken word recognition. Language and Cognitive Processes, 22(2), 161-200. doi:10.1080/01690960500371024.

    Abstract

    This study investigates how listeners cope with gradient forms of deletion of word-final /t/ when recognising words in a phonological context that makes /t/-deletion viable. A corpus study confirmed a high incidence of /t/-deletion in an /st#b/ context in Dutch. A discrimination study showed that differences between released /t/, unreleased /t/ and fully deleted /t/ in this specific /st#b/ context were salient. Two on-line experiments were carried out to investigate whether lexical activation might be affected by this form variation. Even though unreleased and released variants were processed equally fast by listeners, a detailed analysis of the unreleased condition provided evidence for gradient activation. Activating a target ending in /t/ is slowest for the most reduced variant because phonological context has to be taken into account. Importantly, activation for a target with /t/ in the absence of cues for /t/ is reduced if there is a surface-matching lexical competitor.
  • Janse, I., Bok, J., Hamidjaja, R. A., Hodemaekers, H. M., & van Rotterdam, B. J. (2012). Development and comparison of two assay formats for parallel detection of four biothreat pathogens by using suspension microarrays. PLoS One, 7(2), e31958. doi:10.1371/journal.pone.0031958.

    Abstract

    Microarrays provide a powerful analytical tool for the simultaneous detection of multiple pathogens. We developed diagnostic suspension microarrays for sensitive and specific detection of the biothreat pathogens Bacillus anthracis, Yersinia pestis, Francisella tularensis and Coxiella burnetii. Two assay chemistries for amplification and labeling were developed, one method using direct hybridization and the other using target-specific primer extension, combined with hybridization to universal arrays. Asymmetric PCR products for both assay chemistries were produced by using a multiplex asymmetric PCR amplifying 16 DNA signatures (16-plex). The performances of both assay chemistries were compared and their advantages and disadvantages are discussed. The developed microarrays detected multiple signature sequences and an internal control which made it possible to confidently identify the targeted pathogens and assess their virulence potential. The microarrays were highly specific and detected various strains of the targeted pathogens. Detection limits for the different pathogen signatures were similar or slightly higher compared to real-time PCR. Probit analysis showed that even a few genomic copies could be detected with 95% confidence. The microarrays detected DNA from different pathogens mixed in different ratios and from spiked or naturally contaminated samples. The assays that were developed have a potential for application in surveillance and diagnostics.
  • Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.

    Abstract

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2002). Inflectional frames in language production. Language and Cognitive Processes, 17(3), 209-236. doi:10.1006/jmla.2001.2800.

    Abstract

    The authors report six implicit priming experiments that examined the production of inflected forms. Participants produced words out of small sets in response to prompts. The words differed in form or shared word-initial segments, which allowed for preparation. In constant inflectional sets, the words had the same number of inflectional suffixes, whereas in variable sets the number of suffixes differed. In the experiments, preparation effects were obtained, which were larger in the constant than in the variable sets. Control experiments showed that this difference in effect was not due to syntactic class or phonological form per se. The results are interpreted in terms of a slot-and-filler model of word production, in which inflectional frames, on the one hand, and stems and affixes, on the other hand, are independently spelled out on the basis of an abstract morpho-syntactic specification of the word, which is followed by morpheme-to-frame association.
  • Janssens, S. E. W., Sack, A. T., Ten Oever, S., & de Graaf, T. A. (2022). Calibrating rhythmic stimulation parameters to individual electroencephalography markers: The consistency of individual alpha frequency in practical lab settings. European Journal of Neuroscience, 55(11/12), 3418-3437. doi:10.1111/ejn.15418.

    Abstract

    Rhythmic stimulation can be applied to modulate neuronal oscillations. Such ‘entrainment’ is optimized when stimulation frequency is individually calibrated based on magneto-/electroencephalography (M/EEG) markers. It remains unknown how consistent such individual markers are across days/sessions, within a session, or across cognitive states, hemispheres and estimation methods, especially in a realistic, practical, lab setting. We here estimated individual alpha frequency (IAF) repeatedly from short electroencephalography (EEG) measurements at rest or during an attention task (cognitive state), using single parieto-occipital electrodes in 24 participants on 4 days (between-sessions), with multiple measurements over an hour on 1 day (within-session). First, we introduce an algorithm to automatically reject power spectra without a sufficiently clear peak to ensure unbiased IAF estimations. Then we estimated IAF via the traditional ‘maximum’ method and a ‘Gaussian fit’ method. IAF was reliable within- and between-sessions for both cognitive states and hemispheres, though task-IAF estimates tended to be more variable. Overall, the ‘Gaussian fit’ method was more reliable than the ‘maximum’ method. Furthermore, we evaluated how far from an approximated ‘true’ task-related IAF the selected ‘stimulation frequency’ was, when calibrating this frequency based on a short rest-EEG, a short task-EEG, or simply selecting 10 Hz for all participants. For the ‘maximum’ method, rest-EEG calibration was best, followed by task-EEG, and then 10 Hz. For the ‘Gaussian fit’ method, rest-EEG and task-EEG-based calibration were similarly accurate, and better than 10 Hz. These results lead to concrete recommendations about valid, and automated, estimation of individual oscillation markers in experimental and clinical settings.
  • Janssens, S. E., Ten Oever, S., Sack, A. T., & de Graaf, T. A. (2022). “Broadband Alpha Transcranial Alternating Current Stimulation”: Exploring a new biologically calibrated brain stimulation protocol. NeuroImage, 253: 119109. doi:10.1016/j.neuroimage.2022.119109.

    Abstract

    Transcranial alternating current stimulation (tACS) can be used to study causal contributions of oscillatory brain mechanisms to cognition and behavior. For instance, individual alpha frequency (IAF) tACS was reported to enhance alpha power and impact visuospatial attention performance. Unfortunately, such results have been inconsistent and difficult to replicate. In tACS, stimulation generally involves one frequency, sometimes individually calibrated to a peak value observed in an M/EEG power spectrum. Yet, the ‘peak’ actually observed in such power spectra often contains a broader range of frequencies, raising the question whether a biologically calibrated tACS protocol containing this fuller range of alpha-band frequencies might be more effective. Here, we introduce ‘Broadband-alpha-tACS’, a complex individually calibrated electrical stimulation protocol. We band-pass filtered left posterior resting-state EEG data around the IAF (+/- 2 Hz), and converted that time series into an electrical waveform for tACS stimulation of that same left posterior parietal cortex location. In other words, we stimulated a brain region with a ‘replay’ of its own alpha-band frequency content, based on spontaneous activity. Within-subjects (N=24), we compared to a sham tACS session the effects of broadband-alpha tACS, power-matched spectral inverse (‘alpha-removed’) control tACS, and individual alpha frequency tACS, on EEG alpha power and performance in an endogenous attention task previously reported to be affected by alpha tACS. Broadband-alpha-tACS significantly modulated attention task performance (i.e., reduced the rightward visuospatial attention bias in trials without distractors, and reduced attention benefits). Alpha-removed tACS also reduced the rightward visuospatial attention bias. IAF-tACS did not significantly modulate attention task performance compared to sham tACS, but also did not statistically significantly differ from broadband-alpha-tACS. This new broadband-alpha tACS approach seems promising, but should be further explored and validated in future studies.

  • Janzen, G., Wagensveld, B., & Van Turennout, M. (2007). Neural representation of navigational relevance is rapidly induced and long lasting. Cerebral Cortex, 17(4), 975-981. doi:10.1093/cercor/bhl008.

    Abstract

    Successful navigation is facilitated by the presence of landmarks. Previous functional magnetic resonance imaging (fMRI) evidence indicated that the human parahippocampal gyrus automatically distinguishes between landmarks placed at navigationally relevant (decision points) and irrelevant locations (nondecision points). This storage of navigational relevance can provide a neural mechanism underlying successful navigation. However, an efficient wayfinding mechanism requires that important spatial information is learned quickly and maintained over time. The present study investigates whether the representation of navigational relevance is modulated by time and practice. Participants learned 2 film sequences through virtual mazes containing objects at decision and at nondecision points. One maze was shown one time, and the other maze was shown 3 times. Twenty-four hours after study, event-related fMRI data were acquired during recognition of the objects. The results showed that activity in the parahippocampal gyrus was increased for objects previously placed at decision points as compared with objects placed at nondecision points. The decision point effect was not modulated by the number of exposures to the mazes and independent of explicit memory functions. These findings suggest a persistent representation of navigationally relevant information, which is stable after only one exposure to an environment. These rapidly induced and long-lasting changes in object representation provide a basis for successful wayfinding.
  • Janzen, G., & Weststeijn, C. G. (2007). Neural representation of object location and route direction: An event-related fMRI study. Brain Research, 1165, 116-125. doi:10.1016/j.brainres.2007.05.074.

    Abstract

    The human brain distinguishes between landmarks placed at navigationally relevant and irrelevant locations. However, to provide a successful wayfinding mechanism not only landmarks but also the routes between them need to be stored. We examined the neural representation of a memory for route direction and a memory for relevant landmarks. Healthy human adults viewed objects along a route through a virtual maze. Event-related functional magnetic resonance imaging (fMRI) data were acquired during a subsequent subliminal priming recognition task. Prime-objects either preceded or succeeded a target-object on a previously learned route. Our results provide evidence that the parahippocampal gyri distinguish between relevant and irrelevant landmarks whereas the inferior parietal gyrus, the anterior cingulate gyrus as well as the right caudate nucleus are involved in the coding of route direction. These data show that separate memory systems store different spatial information. A memory for navigationally relevant object information and a memory for route direction exist.
  • Janzen, G., Haun, D. B. M., & Levinson, S. C. (2012). Tracking down abstract linguistic meaning: Neural correlates of spatial frame of reference ambiguities in language. PLoS One, 7(2), e30657. doi:10.1371/journal.pone.0030657.

    Abstract

    This functional magnetic resonance imaging (fMRI) study investigates a crucial parameter in spatial description, namely variants in the frame of reference chosen. Two frames of reference are available in European languages for the description of small-scale assemblages, namely the intrinsic (or object-oriented) frame and the relative (or egocentric) frame. We showed participants a sentence such as “the ball is in front of the man”, ambiguous between the two frames, and then a picture of a scene with a ball and a man – participants had to respond by indicating whether the picture did or did not match the sentence. There were two blocks, in which we induced each frame of reference by feedback. Thus for the crucial test items, participants saw exactly the same sentence and the same picture but now from one perspective, now the other. Using this method, we were able to precisely pinpoint the pattern of neural activation associated with each linguistic interpretation of the ambiguity, while holding the perceptual stimuli constant. Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference. The study is among the few to show a distinctive pattern of neural activation for an abstract yet specific semantic parameter in language. It shows with special clarity the nature of the neural substrate supporting each frame of spatial reference.
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2022). The social basis of referential communication: Speakers construct physical reference based on listeners’ expected visual search. Psychological Review, 129, 1394-1413. doi:10.1037/rev0000345.

    Abstract

    A foundational assumption of human communication is that speakers should say as much as necessary, but no more. Yet, people routinely produce redundant adjectives and their propensity to do so varies cross-linguistically. Here, we propose a computational theory, whereby speakers create referential expressions designed to facilitate listeners’ reference resolution, as they process words in real time. We present a computational model of our account, the Incremental Collaborative Efficiency (ICE) model, which generates referential expressions by considering listeners’ real-time incremental processing and reference identification. We apply the ICE framework to physical reference, showing that speakers construct expressions designed to minimize listeners’ expected visual search effort during online language processing. Our model captures a number of known effects in the literature, including cross-linguistic differences in speakers’ propensity to over-specify. Moreover, the ICE model predicts graded acceptability judgments with quantitative accuracy, systematically outperforming an alternative, brevity-based model. Our findings suggest that physical reference production is best understood as driven by a collaborative goal to help the listener identify the intended referent, rather than by an egocentric effort to minimize utterance length.
  • Jasmin, K., & Casasanto, D. (2012). The QWERTY Effect: How typing shapes the meanings of words. Psychonomic Bulletin & Review, 19, 499-504. doi:10.3758/s13423-012-0229-7.

    Abstract

    The QWERTY keyboard mediates communication for millions of language users. Here, we investigated whether differences in the way words are typed correspond to differences in their meanings. Some words are spelled with more letters on the right side of the keyboard and others with more letters on the left. In three experiments, we tested whether asymmetries in the way people interact with keys on the right and left of the keyboard influence their evaluations of the emotional valence of the words. We found the predicted relationship between emotional valence and QWERTY key position across three languages (English, Spanish, and Dutch). Words with more right-side letters were rated as more positive in valence, on average, than words with more left-side letters: the QWERTY effect. This effect was strongest in new words coined after QWERTY was invented and was also found in pseudowords. Although these data are correlational, the discovery of a similar pattern across languages, which was strongest in neologisms, suggests that the QWERTY keyboard is shaping the meanings of words as people filter language through their fingers. Widespread typing introduces a new mechanism by which semantic changes in language can arise.
  • Jepma, M., Verdonschot, R. G., Van Steenbergen, H., Rombouts, S. A. R. B., & Nieuwenhuis, S. (2012). Neural mechanisms underlying the induction and relief of perceptual curiosity. Frontiers in Behavioral Neuroscience, 6: 5. doi:10.3389/fnbeh.2012.00005.

    Abstract

    Curiosity is one of the most basic biological drives in both animals and humans, and has been identified as a key motive for learning and discovery. Despite the importance of curiosity and related behaviors, the topic has been largely neglected in human neuroscience; hence little is known about the neurobiological mechanisms underlying curiosity. We used functional magnetic resonance imaging (fMRI) to investigate what happens in our brain during the induction and subsequent relief of perceptual curiosity. Our core findings were that (1) the induction of perceptual curiosity, through the presentation of ambiguous visual input, activated the anterior insula and anterior cingulate cortex (ACC), brain regions sensitive to conflict and arousal; (2) the relief of perceptual curiosity, through visual disambiguation, activated regions of the striatum that have been related to reward processing; and (3) the relief of perceptual curiosity was associated with hippocampal activation and enhanced incidental memory. These findings provide the first demonstration of the neural basis of human perceptual curiosity. Our results provide neurobiological support for a classic psychological theory of curiosity, which holds that curiosity is an aversive condition of increased arousal whose termination is rewarding and facilitates memory.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.

    Abstract

    Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single-competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation.
  • Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.

    Abstract

    Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
  • Jessop, A., & Chang, F. (2022). Thematic role tracking difficulties across multiple visual events influences role use in language production. Visual Cognition, 30(3), 151-173. doi:10.1080/13506285.2021.2013374.

    Abstract

    Language sometimes requires tracking the same participant in different thematic roles across multiple visual events (e.g., The girl that another girl pushed chased a third girl). To better understand how vision and language interact in role tracking, participants described videos of multiple randomly moving circles in which two push events were presented. A circle might have the same role in both push events (e.g., agent) or different roles (e.g., agent of one push and patient of the other push). The first three studies found higher production accuracy in the same-role conditions than in the different-role conditions across different linguistic structure manipulations. The last three studies compared a featural account, where role information was associated with particular circles, with a relational account, where role information was encoded with particular push events. These studies found no interference between different roles, contrary to the predictions of the featural account. The foil was manipulated in these studies to increase the saliency of the second push, and this was found to change the accuracy of descriptions of the first push. The results suggest that language-related thematic role processing uses a relational representation that can encode multiple events.

    Additional information

    https://doi.org/10.17605/OSF.IO/PKXZH
  • Jiang, J., Dai, B., Peng, D., Zhu, C., Liu, L., & Lu, C. (2012). Neural synchronization during face-to-face communication. Journal of Neuroscience, 32(45), 16064-16069. doi:10.1523/JNEUROSCI.2926-12.2012.

    Abstract

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
  • Joergens, S., Kleiser, R., & Indefrey, P. (2007). Handedness and fMRI-activation patterns in sentence processing. NeuroReport, 18(13), 1339-1343.

    Abstract

    We investigated differences in cerebral activation in 12 right-handed and 12 left-handed participants using a sentence-processing task. Functional MRI showed activation of left frontal and inferior parietal speech areas (BA 44, BA 9, BA 40) in both groups, but stronger bilateral activation in left-handers. Direct group comparison revealed stronger activation in the right frontal cortex (BA 47, BA 6) and left cerebellum in left-handers. Laterality indices for the inferior frontal cortex were less asymmetric in left-handers and were not related to the degree of handedness. Thus, our results show that sentence processing induced enhanced activation involving a bilateral network in left-handed participants.
  • Johns, T. G., Perera, R. M., Vernes, S. C., Vitali, A. A., Cao, D. X., Cavenee, W. K., Scott, A. M., & Furnari, F. B. (2007). The efficacy of epidermal growth factor receptor-specific antibodies against glioma xenografts is influenced by receptor levels, activation status, and heterodimerization. Clinical Cancer Research, 13, 1911-1925. doi:10.1158/1078-0432.CCR-06-1453.

    Abstract

    Purpose: Factors affecting the efficacy of therapeutic monoclonal antibodies (mAb) directed to the epidermal growth factor receptor (EGFR) remain relatively unknown, especially in glioma. Experimental Design: We examined the efficacy of two EGFR-specific mAbs (mAbs 806 and 528) against U87MG-derived glioma xenografts expressing EGFR variants. Using this approach allowed us to change the form of the EGFR while keeping the genetic background constant. These variants included the de2-7 EGFR (or EGFRvIII), a constitutively active mutation of the EGFR expressed in glioma. Results: The efficacy of the mAbs correlated with EGFR number; however, the most important factor was receptor activation. Whereas U87MG xenografts expressing the de2-7 EGFR responded to therapy, those exhibiting a dead kinase de2-7 EGFR were refractory. A modified de2-7 EGFR that was kinase active but autophosphorylation deficient also responded, suggesting that these mAbs function in de2-7 EGFR–expressing xenografts by blocking transphosphorylation. Because de2-7 EGFR–expressing U87MG xenografts coexpress the wild-type EGFR, efficacy of the mAbs was also tested against NR6 xenografts that expressed the de2-7 EGFR in isolation. Whereas mAb 806 displayed antitumor activity against NR6 xenografts, mAb 528 therapy was ineffective, suggesting that mAb 528 mediates its antitumor activity by disrupting interactions between the de2-7 and wild-type EGFR. Finally, genetic disruption of Src in U87MG xenografts expressing the de2-7 EGFR dramatically enhanced mAb 806 efficacy. Conclusions: The effective use of EGFR-specific antibodies in glioma will depend on identifying tumors with activated EGFR. The combination of EGFR and Src inhibitors may be an effective strategy for the treatment of glioma.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.