Publications

  • Goudbeek, M., Swingley, D., & Smits, R. (2009). Supervised and unsupervised learning of multidimensional acoustic categories. Journal of Experimental Psychology: Human Perception and Performance, 35, 1913-1933. doi:10.1037/a0015781.

    Abstract

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is more difficult but tractable under specific task conditions. In 2 experiments, adult participants learned either a unidimensional or a multidimensional category distinction with or without supervision (feedback) during learning. The unidimensional distinctions were readily learned and supervision proved beneficial, especially in maintaining category learning beyond the learning phase. Learning the multidimensional category distinction proved to be much more difficult and supervision was not nearly as beneficial as with unidimensionally defined categories. Maintaining a learned multidimensional category distinction was only possible when the distributional information that identified the categories remained present throughout the testing phase. We conclude that listeners are sensitive to both trial-by-trial feedback and the distributional information in the stimuli. Even given limited exposure, listeners learned to use 2 relevant dimensions, albeit with considerable difficulty.
  • Graham, S. A., Antonopoulos, A., Hitchen, P. G., Haslam, S. M., Dell, A., Drickamer, K., & Taylor, M. E. (2011). Identification of neutrophil granule glycoproteins as Lewisx-containing ligands cleared by the scavenger receptor C-type lectin. Journal of Biological Chemistry, 286, 24336-24349. doi:10.1074/jbc.M111.244772.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is a glycan-binding receptor that has the capacity to mediate endocytosis of glycoproteins carrying terminal Lewis(x) groups (Galβ1-4(Fucα1-3)GlcNAc). A screen for glycoprotein ligands for SRCL using affinity chromatography on immobilized SRCL followed by mass spectrometry-based proteomic analysis revealed that soluble glycoproteins from secondary granules of neutrophils, including lactoferrin and matrix metalloproteinases 8 and 9, are major ligands. Binding competition and surface plasmon resonance analysis showed affinities in the low micromolar range. Comparison of SRCL binding to neutrophil and milk lactoferrin indicates that the binding is dependent on cell-specific glycosylation in the neutrophils, as the milk form of the glycoprotein is a much poorer ligand. Binding to neutrophil glycoproteins is fucose dependent and mass spectrometry-based glycomic analysis of neutrophil and milk lactoferrin was used to establish a correlation between high affinity binding to SRCL and the presence of multiple, clustered terminal Lewis(x) groups on a heterogeneous mixture of branched glycans, some with poly N-acetyllactosamine extensions. The ability of SRCL to mediate uptake of neutrophil lactoferrin was confirmed using fibroblasts transfected with SRCL. The common presence of Lewis(x) groups in granule protein glycans can thus target granule proteins for clearance by SRCL. PCR and immunohistochemical analysis confirms that SRCL is widely expressed on endothelial cells and thus represents a distributed system which could scavenge released neutrophil glycoproteins both locally at sites of inflammation and systemically when they are released in the circulation.

  • Graham, S. A., Deriziotis, P., & Fisher, S. E. (2015). Insights into the genetic foundations of human communication. Neuropsychology Review, 25(1), 3-26. doi:10.1007/s11065-014-9277-2.

    Abstract

    The human capacity to acquire sophisticated language is unmatched in the animal kingdom. Despite the discontinuity in communicative abilities between humans and other primates, language is built on ancient genetic foundations, which are being illuminated by comparative genomics. The genetic architecture of the language faculty is also being uncovered by research into neurodevelopmental disorders that disrupt the normally effortless process of language acquisition. In this article, we discuss the strategies that researchers are using to reveal genetic factors contributing to communicative abilities, and review progress in identifying the relevant genes and genetic variants. The first gene directly implicated in a speech and language disorder was FOXP2. Using this gene as a case study, we illustrate how evidence from genetics, molecular cell biology, animal models and human neuroimaging has converged to build a picture of the role of FOXP2 in neurodevelopment, providing a framework for future endeavors to bridge the gaps between genes, brains and behavior.
  • Graham, S. A., Jégouzo, S. A. F., Yan, S., Powlesland, A. S., Brady, J. P., Taylor, M. E., & Drickamer, K. (2009). Prolectin, a glycan-binding receptor on dividing B cells in germinal centers. The Journal of Biological Chemistry, 284, 18537-18544. doi:10.1074/jbc.M109.012807.

    Abstract

    Prolectin, a previously undescribed glycan-binding receptor, has been identified by re-screening of the human genome for genes encoding proteins containing potential C-type carbohydrate-recognition domains. Glycan array analysis revealed that the carbohydrate-recognition domain in the extracellular domain of the receptor binds glycans with terminal α-linked mannose or fucose residues. Prolectin expressed in fibroblasts is found at the cell surface, but unlike many glycan-binding receptors it does not mediate endocytosis of a neoglycoprotein ligand. However, compared with other known glycan-binding receptors, the receptor contains an unusually large intracellular domain that consists of multiple sequence motifs, including phosphorylated tyrosine residues, that allow it to interact with signaling molecules such as Grb2. Immunohistochemistry has been used to demonstrate that prolectin is expressed on a specialized population of proliferating B cells in germinal centers. Thus, this novel receptor has the potential to function in carbohydrate-mediated communication between cells in the germinal center.
  • Graham, S. A., & Fisher, S. E. (2015). Understanding language from a genomic perspective. Annual Review of Genetics, 49, 131-160. doi:10.1146/annurev-genet-120213-092236.

    Abstract

    Language is a defining characteristic of the human species, but its foundations remain mysterious. Heritable disorders offer a gateway into biological underpinnings, as illustrated by the discovery that FOXP2 disruptions cause a rare form of speech and language impairment. The genetic architecture underlying language-related disorders is complex, and although some progress has been made, it has proved challenging to pinpoint additional relevant genes with confidence. Next-generation sequencing and genome-wide association studies are revolutionizing understanding of the genetic bases of other neurodevelopmental disorders, like autism and schizophrenia, and providing fundamental insights into the molecular networks crucial for typical brain development. We discuss how a similar genomic perspective, brought to the investigation of language-related phenotypes, promises to yield equally informative discoveries. Moreover, we outline how follow-up studies of genetic findings using cellular systems and animal models can help to elucidate the biological mechanisms involved in the development of brain circuits supporting language.

  • Gretsch, P. (2004). What does finiteness mean to children? A cross-linguistic perspective on root infinitives. Linguistics, 42(2), 419-468. doi:10.1515/ling.2004.014.

    Abstract

    The discussion on root infinitives has mainly centered around their supposed modal usage. This article aims at modelling the form-function relation of the root infinitive phenomenon by taking into account the full range of interpretational facets encountered cross-linguistically and interindividually. Following the idea of a subsequent "cell partitioning" in the emergence of form-function correlations, I claim that it is the major fission between [±finite] which is central to express temporal reference different from the default here&now in tense-oriented languages. In aspectual-oriented languages, a similar opposition is mastered with the marking of early aspectual forms. It is observed that in tense-oriented languages like Dutch and German, the progression of functions associated with the infinitival form proceeds from nonmodal to modal, whereas the reverse progression holds for the Russian infinitive. Based on this crucial observation, a model of acquisition is proposed which allows for a flexible and systematic relationship between morphological forms and their respective interpretational biases dependent on their developmental context. As for early child language, I argue that children entertain only two temporal parameters: one parameter is fixed to the here&now point in time, and a second parameter relates to the time talked about, the topic time; this latter time overlaps the situation time as long as no empirical evidence exists to support the emergence of a proper distinction between tense and aspect.

  • Grünloh, T., & Liszkowski, U. (2015). Prelinguistic vocalizations distinguish pointing acts. Journal of Child Language, 42(6), 1312-1336. doi:10.1017/S0305000914000816.

    Abstract

    The current study investigated whether point-accompanying characteristics, like vocalizations and hand shape, differentiate infants' underlying motives of prelinguistic pointing. We elicited imperative (requestive) and declarative (expressive and informative) pointing acts in experimentally controlled situations, and analyzed accompanying characteristics. Experiment 1 revealed that prosodic characteristics of point-accompanying vocalizations distinguished requestive from both expressive and informative pointing acts, with little difference between the latter two. In addition, requestive points were more often realized with the whole hand than the index finger, while this was the opposite for expressive and informative acts. Experiment 2 replicated Experiment 1, revealing distinct prosodic characteristics for requestive pointing also when the referent was distal and when it had an index-finger shape. Findings reveal that beyond the social context, point-accompanying vocalizations give clues to infants' underlying intentions when pointing.
  • Guadalupe, T., Zwiers, M. P., Wittfeld, K., Teumer, A., Vasquez, A. A., Hoogman, M., Hagoort, P., Fernandez, G., Buitelaar, J., van Bokhoven, H., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2015). Asymmetry within and around the human planum temporale is sexually dimorphic and influenced by genes involved in steroid hormone receptor activity. Cortex, 62, 41-55. doi:10.1016/j.cortex.2014.07.015.

    Abstract

    The genetic determinants of cerebral asymmetries are unknown. Sex differences in asymmetry of the planum temporale, that overlaps Wernicke’s classical language area, have been inconsistently reported. Meta-analysis of previous studies has suggested that publication bias established this sex difference in the literature. Using probabilistic definitions of cortical regions we screened over the cerebral cortex for sexual dimorphisms of asymmetry in 2337 healthy subjects, and found the planum temporale to show the strongest sex-linked asymmetry of all regions, which was supported by two further datasets, and also by analysis with the Freesurfer package that performs automated parcellation of cerebral cortical regions. We performed a genome-wide association scan meta-analysis of planum temporale asymmetry in a pooled sample of 3095 subjects, followed by a candidate-driven approach which measured a significant enrichment of association in genes of the 'steroid hormone receptor activity' and 'steroid metabolic process' pathways. Variants in the genes and pathways identified may affect the role of the planum temporale in language cognition.
  • Gubian, M., Torreira, F., & Boves, L. (2015). Using functional data analysis for investigating multidimensional dynamic phonetic contrasts. Journal of Phonetics, 49, 16-40. doi:10.1016/j.wocn.2014.10.001.

    Abstract

    The study of phonetic contrasts and related phenomena, e.g. inter- and intra-speaker variability, often requires analysing data in the form of measured time series, like f0 contours and formant trajectories. As a consequence, the investigator has to find suitable ways to reduce the raw and abundant numerical information contained in a bundle of time series into a small but sufficient set of numerical descriptors of their shape. This approach requires one to decide in advance which dynamic traits to include in the analysis and which not. For example, a rising pitch gesture may be represented by its duration and slope, hence reducing it to a straight segment, or by a richer coding specifying also whether (and how much) the rising contour is concave or convex, the latter being irrelevant in some contexts but crucial in others. Decisions become even more complex when a phenomenon is described by a multidimensional time series, e.g. by the first two formants. In this paper we introduce a methodology based on Functional Data Analysis (FDA) that allows the investigator to delegate most of the decisions involved in the quantitative description of multidimensional time series to the data themselves. FDA produces a data-driven parametrisation of the main shape traits present in the data that is visually interpretable, in the same way as slopes or peak heights are. These output parameters are numbers that are amenable to ordinary statistical analysis, e.g. linear (mixed effects) models. FDA is also able to capture correlations among different dimensions of a time series, e.g. between formants F1 and F2. We present FDA by means of an extended case study on the diphthong–hiatus distinction in Spanish, a contrast that involves duration, formant trajectories and pitch contours.
  • Le Guen, O., Samland, J., Friedrich, T., Hanus, D., & Brown, P. (2015). Making sense of (exceptional) causal relations. A cross-cultural and cross-linguistic study. Frontiers in Psychology, 6: 1645. doi:10.3389/fpsyg.2015.01645.

    Abstract

    In order to make sense of the world, humans tend to see causation almost everywhere. Although most causal relations may seem straightforward, they are not always construed in the same way cross-culturally. In this study, we investigate concepts of ‘chance’, ‘coincidence’ or ‘randomness’ that refer to assumed relations between intention, action, and outcome in situations, and we ask how people from different cultures make sense of such non-law-like connections. Based on a framework proposed by Alicke (2000), we administered a task that aims to be a neutral tool for investigating causal construals cross-culturally and cross-linguistically. Members of four different cultural groups, rural Mayan Yucatec and Tseltal speakers from Mexico and urban students from Mexico and Germany, were presented with a set of scenarios involving various types of causal and non-causal relations and were asked to explain the described events. Three links varied as to whether they were present or not in the scenarios: Intention to Action, Action to Outcome, and Intention to Outcome. Our results show that causality is recognized in all four cultural groups. However, how causality and especially non-law-like causality are interpreted depends on the type of links, the cultural background and the language used. In all three groups, Action to Outcome is the decisive link for recognizing causality. Despite the fact that the two Mayan groups share similar cultural backgrounds, they display different ideologies regarding concepts of non-law causality. The data suggest that the concept of ‘chance’ is not universal, but seems to be an explanation that only some cultural groups draw on to make sense of specific situations. Of particular importance is the existence of linguistic concepts in each language that trigger ideas of causality in the responses from each cultural group.

  • Le Guen, O. (2011). Materiality vs. expressivity: The use of sensory vocabulary in Yucatec Maya. The Senses & Society, 6(1), 117-126. doi:10.2752/174589311X12893982233993.

    Abstract

    In this article, sensory vocabulary relating to color, texture, and other sensory experiences in Yucatec Maya (a language spoken in Mexico) is examined, and its possible relation to material culture practices explored. In Yucatec Maya, some perceptual experience can be expressed in a fine-grained way through a compact one-word adjective. Complex notions can be succinctly expressed by combining roots with a general meaning and applying templates or compounds to those sensory roots. For instance, the root tak’, which means ‘adhere/adherence,’ can be derived to express the notion of ‘dirty red’ chak-tak’-e’en or ‘sticky with an unbounded pattern’ tak’aknak, or the root ts’ap ‘piled-up’ can express ‘several tones of green (e.g. in the forest)’ ya’axts’ape’en or ‘piled-up, known through a tactile experience’ ts’aplemak. The productive nature of this linguistic system seems at first glance to be very well fitted to orient practices relating to the production of local material culture. In examining several hours of video-recorded natural data contrasting work and non-work directed interactions, it emerges that sensory vocabulary is not used for calibrating knowledge but is instead recruited by speakers to achieve vividness in an effort to verbally reproduce the way speakers experience percepts.
  • Le Guen, O. (2011). Modes of pointing to existing spaces and the use of frames of reference. Gesture, 11, 271-307. doi:10.1075/gest.11.3.02leg.

    Abstract

    This paper aims at providing a systematic framework for investigating differences in how people point to existing spaces. Pointing is considered according to two conditions: (1) A non-transposed condition where the body of the speaker always constitutes the origo and where the various types of pointing are differentiated by the status of the target and (2) a transposed condition where both the distant figure and the distant ground are identified and their relation specified according to two frames of reference (FoRs): the egocentric FoR (where spatial relationships are coded with respect to the speaker's point of view) and the geocentric FoR (where spatial relationships are coded in relation to external cues in the environment). The preference for one or the other frame of reference not only has consequences for pointing to real spaces but has some resonance in other domains, constraining the production of gesture in these related domains.
  • Le Guen, O. (2011). Speech and gesture in spatial language and cognition among the Yucatec Mayas. Cognitive Science, 35, 905-938. doi:10.1111/j.1551-6709.2011.01183.x.

    Abstract

    In previous analyses of the influence of language on cognition, speech has been the main channel examined. In studies conducted among Yucatec Mayas, efforts to determine the preferred frame of reference in use in this community have failed to reach an agreement (Bohnemeyer & Stolz, 2006; Levinson, 2003 vs. Le Guen, 2006, 2009). This paper argues for a multimodal analysis of language that encompasses gesture as well as speech, and shows that the preferred frame of reference in Yucatec Maya is only detectable through the analysis of co-speech gesture and not through speech alone. A series of experiments compares knowledge of the semantics of spatial terms, performance on nonlinguistic tasks and gestures produced by men and women. The results show a striking gender difference in the knowledge of the semantics of spatial terms, but an equal preference for a geocentric frame of reference in nonverbal tasks. In a localization task, participants used a variety of strategies in their speech, but they all exhibited a systematic preference for a geocentric frame of reference in their gestures.
  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    The central topic of this study is to investigate three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the Primary/Secondary Object pattern (Dryer 1986). Actually, Yaqui presents three patterns: verbs like nenka ‘sell’ follow the direct–indirect object pattern, verbs like miika ‘give’ follow the primary object pattern, and verbs like chijakta ‘sprinkle’ follow the locative alternation pattern; the primary object pattern is the exclusive one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two “object” selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Guggenheim, J. A., St Pourcain, B., McMahon, G., Timpson, N. J., Evans, D. M., & Williams, C. (2015). Assumption-free estimation of the genetic contribution to refractive error across childhood. Molecular Vision, 21, 621-632. Retrieved from http://www.molvis.org/molvis/v21/621.

    Abstract

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias.
    Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404).
    The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old.
    Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects are expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.
  • Gullberg, M., & Kita, S. (2009). Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior, 33(4), 251-277. doi:10.1007/s10919-009-0073-2.

    Abstract

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Acquisition et Interaction en Langue Etrangère / Languages, Interaction, and Acquisition (former AILE), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that targetlike forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gupta, C. N., Calhoun, V. D., Rachkonda, S., Chen, J., Patel, V., Liu, J., Segall, J., Franke, B., Zwiers, M. P., Arias-Vasquez, A., Buitelaar, J., Fisher, S. E., Fernández, G., van Erp, T. G. M., Potkin, S., Ford, J., Matalon, D., McEwen, S., Lee, H. J., Mueller, B. A., Greve, D. N., Andreassen, O., Agartz, I., Gollub, R. L., Sponheim, S. R., Ehrlich, S., Wang, L., Pearlson, G., Glahn, D. S., Sprooten, E., Mayer, A. R., Stephen, J., Jung, R. E., Canive, J., Bustillo, J., & Turner, J. A. (2015). Patterns of gray matter abnormalities in schizophrenia based on an international mega-analysis. Schizophrenia Bulletin, 41(5), 1133-1142. doi:10.1093/schbul/sbu177.

    Abstract

    Analyses of gray matter concentration (GMC) deficits in patients with schizophrenia (Sz) have identified robust changes throughout the cortex. We assessed the relationships between diagnosis, overall symptom severity, and patterns of gray matter in the largest aggregated structural imaging dataset to date. We performed both source-based morphometry (SBM) and voxel-based morphometry (VBM) analyses on GMC images from 784 Sz and 936 controls (Ct) across 23 scanning sites in Europe and the United States. After correcting for age, gender, site, and diagnosis by site interactions, SBM analyses showed 9 patterns of diagnostic differences. They comprised separate cortical, subcortical, and cerebellar regions. Seven patterns showed greater GMC in Ct than Sz, while 2 (brainstem and cerebellum) showed greater GMC for Sz. The greatest GMC deficit was in a single pattern comprising regions in the superior temporal gyrus, inferior frontal gyrus, and medial frontal cortex, which replicated over analyses of data subsets. VBM analyses identified overall cortical GMC loss and one small cluster of increased GMC in Sz, which overlapped with the SBM brainstem component. We found no significant association between the component loadings and symptom severity in either analysis. This mega-analysis confirms that the commonly found GMC loss in Sz in the anterior temporal lobe, insula, and medial frontal lobe form a single, consistent spatial pattern even in such a diverse dataset. The separation of GMC loss into robust, repeatable spatial patterns across multiple datasets paves the way for the application of these methods to identify subtle genetic and clinical cohort effects.
  • Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.

    Abstract

    During face-to-face communication, one does not only hear speech but also see a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication where the two modalities influence each other's interpretation. A gesture typically temporally overlaps with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony in the speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, gesture and speech were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time locked to speech onset showed a significant difference between semantically congruent versus incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span because of the fact that iconic gestures need speech to be disambiguated in a way relevant to the speech context.
  • Hagoort, P. (1994). Afasie als een tekort aan tijd voor spreken en verstaan [Aphasia as a lack of time for speaking and understanding]. De Psycholoog, 4, 153-154.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen [The electrophysiology of language: What brain potentials reveal about the human language faculty]. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter [The speaker as sprinter]. Psychologie, 17, 48-49.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk [Brain and language in research and practice]. Neuropraxis, 6, 204-205.
  • Hagoort, P. (1994). Het brein op een kier: Over hersenen gesproken [The brain ajar: Speaking of the brain]. Psychologie, 13, 42-46.
  • Hagoort, P., & Levelt, W. J. M. (2009). The speaking brain. Science, 326(5951), 372-373. doi:10.1126/science.1181675.

    Abstract

    How does intention to speak become the action of speaking? It involves the generation of a preverbal message that is tailored to the requirements of a particular language, and through a series of steps, the message is transformed into a linear sequence of speech sounds (1, 2). These steps include retrieving different kinds of information from memory (semantic, syntactic, and phonological), and combining them into larger structures, a process called unification. Despite general agreement about the steps that connect intention to articulation, there is no consensus about their temporal profile or the role of feedback from later steps (3, 4). In addition, since the discovery by the French physician Pierre Paul Broca (in 1865) of the role of the left inferior frontal cortex in speaking, relatively little progress has been made in understanding the neural infrastructure that supports speech production (5). One reason is that the characteristics of natural language are uniquely human, and thus the neurobiology of language lacks an adequate animal model. But on page 445 of this issue, Sahin et al. (6) demonstrate, by recording neuronal activity in the human brain, that different kinds of linguistic information are indeed sequentially processed within Broca's area.
  • Hall, M. L., Ahn, D., Mayberry, R. I., & Ferreira, V. S. (2015). Production and comprehension show divergent constituent order preferences: Evidence from elicited pantomime. Journal of Memory and Language, 81, 16-33. doi:10.1016/j.jml.2014.12.003.

    Abstract

    All natural languages develop devices to communicate who did what to whom. Elicited pantomime provides one model for studying this process, by providing a window into how humans (hearing non-signers) behave in a natural communicative modality (silent gesture) without established conventions from a grammar. Most studies in this paradigm focus on production, although they sometimes make assumptions about how comprehenders would likely behave. Here, we directly assess how naïve speakers of English (Experiments 1 & 2), Korean (Experiment 1), and Turkish (Experiment 2) comprehend pantomimed descriptions of transitive events, which are either semantically reversible (Experiments 1 & 2) or not (Experiment 2). Contrary to previous assumptions, we find no evidence that Person-Person-Action sequences are ambiguous to comprehenders, who simply adopt an agent-first parsing heuristic for all constituent orders. We do find that Person-Action-Person sequences yield the most consistent interpretations, even in native speakers of SOV languages. The full range of behavior in both production and comprehension provides counter-evidence to the notion that producers’ utterances are motivated by the needs of comprehenders. Instead, we argue that production and comprehension are subject to different sets of cognitive pressures, and that the dynamic interaction between these competing pressures can help explain synchronic and diachronic constituent order phenomena in natural human languages, both signed and spoken.
  • Hammarström, H. (2011). A note on the Maco (Piaroan) language of the lower Ventuari, Venezuela. Cadernos de Etnolingüística, 3(1), 1-11. Retrieved from http://www.etnolinguistica.org/issue:vol3n1.

    Abstract

    The present paper seeks to clarify the position of the Maco [wpc] language of the lower Ventuari, Venezuela, since there has been some uncertainty in the literature on this matter. Maco-Ventuari, not to be confused with other languages with a similar name, is so far poorly documented, but the present paper shows that it is nevertheless possible to establish that it is a dialect of Piaroa or a language closely related to Piaroa.
  • Hammarström, H., & Nordhoff, S. (2011). LangDoc: Bibliographic infrastructure for linguistic typology. Oslo Studies in Language, 3(2), 31-43. Retrieved from https://www.journals.uio.no/index.php/osla/article/view/75.

    Abstract

    The present paper describes the ongoing project LangDoc to make a bibliography website for linguistic typology, with a near-complete database of references to documents that contain descriptive data on the languages of the world. This is intended to provide typologists with a more precise and comprehensive way to search for information on languages, and for the specific kind of information that they are interested in. The annotation scheme devised is a trade-off between annotation effort and search desiderata. The end goal is a website with browse, search, update, new items subscription and download facilities, which can hopefully be enriched by spontaneous collaborative efforts.
  • Hammarström, H. (2015). Ethnologue 16/17/18th editions: A comprehensive review. Language, 91, 723-737. doi:10.1353/lan.2015.0038.

    Abstract

    Ethnologue (http://www.ethnologue.com) is the most widely consulted inventory of the world’s languages used today. The present review article looks carefully at the goals and description of the content of the Ethnologue’s 16th, 17th, and 18th editions, and reports on a comprehensive survey of the accuracy of the inventory itself. While hundreds of spurious and missing languages can be documented for Ethnologue, it is at present still better than any other nonderivative work of the same scope, in all aspects but one. Ethnologue fails to disclose the sources for the information presented, at odds with well-established scientific principles. The classification of languages into families in Ethnologue is also evaluated, and found to be far off from that argued in the specialist literature on the classification of individual languages. Ethnologue is frequently held to be splitting: that is, it tends to recognize more languages than an application of the criterion of mutual intelligibility would yield. By means of a random sample, we find that, indeed, with confidence intervals, the number of mutually unintelligible languages is on average 85% of the number found in Ethnologue.
  • Hammarström, H. (2015). Ethnologue 16/17/18th editions: A comprehensive review: Online appendices. Language, 91(3), s1-s188. doi:10.1353/lan.2015.0049.
  • Hammarström, H., & Borin, L. (2011). Unsupervised learning of morphology. Computational Linguistics, 37(2), 309-350. doi:10.1162/COLI_a_00050.

    Abstract

    This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only morpheme segmentation) of how orthographic words are built up given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
  • Hammond, J. (2011). JVC GY-HM100U HD video camera and FFmpeg libraries [Technology review]. Language Documentation and Conservation, 5, 69-80.
  • Hanique, I., Ernestus, M., & Boves, L. (2015). Choice and pronunciation of words: Individual differences within a homogeneous group of speakers. Corpus Linguistics and Linguistic Theory, 11, 161-185. doi:10.1515/cllt-2014-0025.

    Abstract

    This paper investigates whether individual speakers forming a homogeneous group differ in their choice and pronunciation of words when engaged in casual conversation, and if so, how they differ. More specifically, it examines whether the Balanced Winnow classifier is able to distinguish between the twenty speakers of the Ernestus Corpus of Spontaneous Dutch, who all have the same social background. To examine differences in choice and pronunciation of words, instead of characteristics of the speech signal itself, classification was based on lexical and pronunciation features extracted from hand-made orthographic and automatically generated broad phonetic transcriptions. The lexical features consisted of words and two-word combinations. The pronunciation features represented pronunciation variations at the word and phone level that are typical for casual speech. The best classifier achieved a performance of 79.9% and was based on the lexical features and on the pronunciation features representing single phones and triphones. The speakers must thus differ from each other in these features. Inspection of the relevant features indicated that, among other things, the words relevant for classification generally do not contain much semantic content, and that speakers differ not only from each other in the use of these words but also in their pronunciation.
  • Hannerfors, A.-K., Hellgren, C., Schijven, D., Iliadis, S. I., Comasco, E., Skalkidou, A., Olivier, J. D., & Sundström-Poromaa, I. (2015). Treatment with serotonin reuptake inhibitors during pregnancy is associated with elevated corticotropin-releasing hormone levels. Psychoneuroendocrinology, 58, 104-113. doi:10.1016/j.psyneuen.2015.04.009.

    Abstract

    Treatment with serotonin reuptake inhibitors (SSRI) has been associated with an increased risk of preterm birth, but causality remains unclear. While placental CRH production is correlated with gestational length and preterm birth, it has been difficult to establish if psychological stress or mental health problems are associated with increased CRH levels. This study compared second trimester CRH serum concentrations in pregnant women on SSRI treatment (n=207) with untreated depressed women (n=56) and controls (n=609). A secondary aim was to investigate the combined effect of SSRI treatment and CRH levels on gestational length and risk for preterm birth. Women on SSRI treatment had significantly higher second trimester CRH levels than controls, and untreated depressed women. CRH levels and SSRI treatment were independently associated with shorter gestational length. The combined effect of SSRI treatment and high CRH levels yielded the highest risk estimate for preterm birth. SSRI treatment during pregnancy is associated with increased CRH levels. However, the elevated risk for preterm birth in SSRI users appears not to be mediated by increased placental CRH production; instead CRH appears to be an independent risk factor for shorter gestational length and preterm birth.
  • Hanulikova, A., Mitterer, H., & McQueen, J. M. (2011). Effects of first and second language on segmentation of non-native speech. Bilingualism: Language and Cognition, 14, 506-521. doi:10.1017/S1366728910000428.

    Abstract

    We examined whether Slovak-German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech. When Slovaks listen to their native language (Hanulíková, McQueen, & Mitterer, 2010), segmentation is impaired when fixed-stress cues are absent, and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose "rose") faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized Rose, for example, faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
  • Hanulová, J., Davidson, D. J., & Indefrey, P. (2011). Where does the delay in L2 picture naming come from? Psycholinguistic and neurocognitive evidence on second language word production. Language and Cognitive Processes, 26, 902-934. doi:10.1080/01690965.2010.509946.

    Abstract

    Bilinguals are slower when naming a picture in their second language than when naming it in their first language. Although the phenomenon has been frequently replicated, it is not known what causes the delay in the second language. In this article we discuss at what processing stages a delay might arise according to current models of bilingual processing and how the available behavioural and neurocognitive evidence relates to these proposals. Suggested plausible mechanisms, such as frequency or interference effects, are compatible with a naming delay arising at different processing stages. Haemodynamic and electrophysiological data seem to point to a postlexical stage but are still too scarce to support a definite conclusion.
  • Hardies, K., De Kovel, C. G. F., Weckhuysen, S., Asselbergh, B., Geuens, T., Deconinck, T., Azmi, A., May, P., Brilstra, E., Becker, F., Barisic, N., Craiu, D., Braun, K. P. J., Lal, D., Thiele, H., Schubert, J., Weber, Y., van't Slot, R., Nurnberg, P., Balling, R., Timmerman, V., Lerche, H., Maudsley, S., Helbig, I., Suls, A., Koeleman, B. P. C., De Jonghe, P., & Euro Res Consortium, E. (2015). Recessive mutations in SLC13A5 result in a loss of citrate transport and cause neonatal epilepsy, developmental delay and teeth hypoplasia. Brain, 138(11), 3238-3250. doi:10.1093/brain/awv263.

    Abstract

    The epileptic encephalopathies are a clinically and aetiologically heterogeneous subgroup of epilepsy syndromes. Most epileptic encephalopathies have a genetic cause and patients are often found to carry a heterozygous de novo mutation in one of the genes associated with the disease entity. Occasionally recessive mutations are identified: a recent publication described a distinct neonatal epileptic encephalopathy (MIM 615905) caused by autosomal recessive mutations in the SLC13A5 gene. Here, we report eight additional patients belonging to four different families with autosomal recessive mutations in SLC13A5. SLC13A5 encodes a high affinity sodium-dependent citrate transporter, which is expressed in the brain. Neurons are considered incapable of de novo synthesis of tricarboxylic acid cycle intermediates; therefore they rely on the uptake of intermediates, such as citrate, to maintain their energy status and neurotransmitter production. The effect of all seven identified mutations (two premature stops and five amino acid substitutions) was studied in vitro, using immunocytochemistry, selective western blot and mass spectrometry. We hereby demonstrate that cells expressing mutant sodium-dependent citrate transporter have a complete loss of citrate uptake due to various cellular loss-of-function mechanisms. In addition, we provide independent proof of the involvement of autosomal recessive SLC13A5 mutations in the development of neonatal epileptic encephalopathies, and highlight teeth hypoplasia as a possible indicator for SLC13A5 screening. All three patients who tried the ketogenic diet responded well to this treatment, and future studies will allow us to ascertain whether this is a recurrent feature in this severe disorder.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (Eds.). (2011). Visual search and visual world: Interactions among visual attention, language, and working memory [Special Issue]. Acta Psychologica, 137(2). doi:10.1016/j.actpsy.2011.01.005.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (2011). Visual search and visual world: Interactions among visual attention, language, and working memory (introduction to the special issue). Acta Psychologica, 137(2), 135-137. doi:10.1016/j.actpsy.2011.01.005.
  • Haun, D. B. M., & Tomasello, M. (2011). Conformity to peer pressure in preschool children. Child Development, 82, 1759-1767. doi:10.1111/j.1467-8624.2011.01666.x.

    Abstract

    Both adults and adolescents often conform their behavior and opinions to peer groups, even when they themselves know better. The current study investigated this phenomenon in 24 groups of 4 children between 4;2 and 4;9 years of age. Children often made their judgments conform to those of 3 peers, who had made obviously erroneous but unanimous public judgments right before them. A follow-up study with 18 groups of 4 children between 4;0 and 4;6 years of age revealed that children did not change their “real” judgment of the situation, but only their public expression of it. Preschool children are subject to peer pressure, indicating sensitivity to peers as a primary social reference group already during the preschool years.
  • Haun, D. B. M. (2011). Memory for body movements in Namibian hunter-gatherer children. Journal of Cognitive Education and Psychology, 10, 56-62.

    Abstract

    Despite the global universality of physical space, different cultural groups vary substantially as to how they memorize it. Although European participants mostly prefer egocentric strategies (“left, right, front, back”) to memorize spatial relations, others use mostly allocentric strategies (“north, south, east, west”). Prior research has shown that some cultures show a general preference to memorize object locations and even also body movements in relation to the larger environment rather than in relation to their own body. Here, we investigate whether this cultural bias also applies to movements specifically directed at the participants' own body, emphasizing the role of ego. We show that even participants with generally allocentric biases preferentially memorize self-directed movements using egocentric spatial strategies. These results demonstrate an intricate system of interacting cultural biases and momentary situational characteristics.
  • Haun, D. B. M., & Call, J. (2009). Great apes’ capacities to recognize relational similarity. Cognition, 110, 147-159. doi:10.1016/j.cognition.2008.10.012.

    Abstract

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various objects to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization and understanding analogy and metaphor. In the current study, we investigated the ability to recognize relational similarities by testing five species of great apes, including human children in a spatial task. We found that all species performed better if related elements are connected by logico-causal as opposed to non-causal relations. Further, we find that only children above 4 years of age, bonobos and chimpanzees, unlike younger children, gorillas and orangutans display some mastery of reasoning by non-causal relational similarity. We conclude that recognizing relational similarity is not in its entirety unique to the human species. The lack of a capability for language does not prohibit recognition of simple relational similarities. The data are discussed in the light of the phylogenetic tree of relatedness of the great apes.
  • Haun, D. B. M., Nawroth, C., & Call, J. (2011). Great apes’ risk-taking strategies in a decision making task. PLoS One, 6(12), e28801. doi:10.1371/journal.pone.0028801.

    Abstract

    We investigate decision-making behaviour in all four non-human great ape species. Apes chose between a safe and a risky option across trials of varying expected values. All species chose the safe option more often with decreasing probability of success. While all species were risk-seeking, orangutans and chimpanzees chose the risky option more often than gorillas and bonobos. Hence all four species' preferences were ordered in a manner consistent with normative dictates of expected value, but varied predictably in their willingness to take risks.
  • Haun, D. B. M., Rapold, C. J., Janzen, G., & Levinson, S. C. (2011). Plasticity of human spatial memory: Spatial language and cognition covary across cultures. Cognition, 119, 70-80. doi:10.1016/j.cognition.2010.12.009.

    Abstract

    The present paper explores cross-cultural variation in spatial cognition by comparing spatial reconstruction tasks by Dutch and Namibian elementary school children. These two communities differ in the way they predominantly express spatial relations in language. Four experiments investigate cognitive strategy preferences across different levels of task-complexity and instruction. Data show a correlation between dominant linguistic spatial frames of reference and performance patterns in non-linguistic spatial memory tasks. This correlation is shown to be stable across an increase of complexity in the spatial array. When instructed to use their respective non-habitual cognitive strategy, participants were not easily able to switch between strategies and their attempts to do so impaired their performance. These results indicate a difference not only in preference but also in competence and suggest that spatial language and non-linguistic preferences and competences in spatial cognition are systematically aligned across human populations.

  • Haun, D. B. M., & Rapold, C. J. (2009). Variation in memory for body movements across cultures. Current Biology, 19(23), R1068-R1069. doi:10.1016/j.cub.2009.10.041.

    Abstract

    There has been considerable controversy over the existence of cognitive differences across human cultures: some claim that human cognition is essentially universal [1,2], others that it reflects cultural specificities [3,4]. One domain of interest has been spatial cognition [5,6]. Despite the global universality of physical space, cultures vary as to how space is coded in their language. Some, for example, do not use egocentric ‘left, right, front, back’ constructions to code spatial relations, instead using allocentric notions like ‘north, south, east, west’ [4,6]: “The spoon is north of the bowl!” Whether or not spatial cognition also varies across cultures remains a contested question [7,8]. Here we investigate whether memory for movements of one's own body differs between cultures with contrastive strategies for coding spatial relations. Our results show that the ways in which we memorize movements of our own body differ in line with culture-specific preferences for how to conceive of spatial relations.
  • Havik, E., Roberts, L., Van Hout, R., Schreuder, R., & Haverkort, M. (2009). Processing subject-object ambiguities in L2 Dutch: A self-paced reading study with German L2 learners of Dutch. Language Learning, 59(1), 73-112. doi:10.1111/j.1467-9922.2009.00501.x.

    Abstract

    The results of two self-paced reading experiments are reported, which investigated the on-line processing of subject-object ambiguities in Dutch relative clause constructions like Dat is de vrouw die de meisjes heeft/hebben gezien by German advanced second language (L2) learners of Dutch. Native speakers of both Dutch and German have been shown to have a preference for a subject versus an object reading of such temporarily ambiguous sentences, and so these constructions provided an ideal opportunity for the transfer of first language (L1) processing preferences to take place. We also investigated whether the participants' working memory span would affect their processing of the experimental items. The results suggest that processing decisions may be affected by working memory when task demands are high; in this case, the high working memory span learners patterned like the native speakers with lower working memory spans. However, when reading for comprehension alone, and when only structural information was available to guide parsing decisions, working memory span had no effect on the L2 learners' on-line processing, which differed from the native speakers' even though the L1 and the L2 are highly comparable.
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Heidlmayr, K., Hemforth, B., Moutier, S., & Isel, F. (2015). Neurodynamics of executive control processes in bilinguals: Evidence from ERP and source reconstruction analyses. Frontiers in Psychology, 6: 821. doi:10.3389/fpsyg.2015.00821.

    Abstract

    The present study was designed to examine the impact of bilingualism on the neuronal activity in different executive control processes, namely conflict monitoring, control implementation (i.e., interference suppression and conflict resolution) and overcoming of inhibition. Twenty-two highly proficient but non-balanced successive French–German bilingual adults and 22 monolingual adults performed a combined Stroop/Negative priming task while event-related potentials (ERPs) were recorded online. The data revealed that the ERP effects were reduced in bilinguals in comparison to monolinguals but only in the Stroop task and limited to the N400 and the sustained fronto-central negative-going potential time windows. This result suggests that bilingualism may impact the process of control implementation rather than the process of conflict monitoring (N200). Critically, our study revealed a differential time course of the involvement of the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC) in conflict processing. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and the late sustained negative-going potential time windows. Taken together, the present electroencephalography data lend support to a cascading neurophysiological model of executive control processes, in which ACC and PFC may play a determining role.
  • Hendriks, L., Witteman, M. J., Frietman, L. C. G., Westerhof, G., Van Baaren, R. B., Engels, R. C. M. E., & Dijksterhuis, A. J. (2009). Imitation can reduce malnutrition in residents in assisted living facilities [Letter to the editor]. Journal of the American Geriatrics Society, 57(1), 187-188. doi:10.1111/j.1532-5415.2009.02074.x.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2015). Brain functional plasticity associated with the emergence of expertise in extreme language control. NeuroImage, 114, 264-274. doi:10.1016/j.neuroimage.2015.03.072.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to longitudinally examine brain plasticity arising from long-term, intensive simultaneous interpretation training. Simultaneous interpretation is a bilingual task with heavy executive control demands. We compared brain responses observed during simultaneous interpretation with those observed during simultaneous speech repetition (shadowing) in a group of trainee simultaneous interpreters, at the beginning and at the end of their professional training program. Age, sex and language-proficiency matched controls were scanned at similar intervals. Using multivariate pattern classification, we found distributed patterns of changes in functional responses from the first to second scan that distinguished the interpreters from the controls. We also found reduced recruitment of the right caudate nucleus during simultaneous interpretation as a result of training. Such practice-related change is consistent with decreased demands on multilingual language control as the task becomes more automatized with practice. These results demonstrate the impact of simultaneous interpretation training on the brain functional response in a cerebral structure that is not specifically linguistic, but that is known to be involved in learning, in motor control, and in a variety of domain-general executive functions. Along with results of recent studies showing functional and structural adaptations in the caudate nuclei of experts in a broad range of domains, our results underline the importance of this structure as a central node in expertise-related networks.
  • Hervais-Adelman, A., Moser-Mercer, B., Michel, C. M., & Golestani, N. (2015). fMRI of simultaneous interpretation reveals the neural basis of extreme language control. Cerebral Cortex, 25(12), 4727-4739. doi:10.1093/cercor/bhu158.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to examine the neural basis of extreme multilingual language control in a group of 50 multilingual participants. Comparing brain responses arising during simultaneous interpretation (SI) with those arising during simultaneous repetition revealed activation of regions known to be involved in speech perception and production, alongside a network incorporating the caudate nucleus that is known to be implicated in domain-general cognitive control. The similarity between the networks underlying bilingual language control and general executive control supports the notion that the frequently reported bilingual advantage on executive tasks stems from the day-to-day demands of language control in the multilingual brain. We examined neural correlates of the management of simultaneity by correlating brain activity during interpretation with the duration of simultaneous speaking and hearing. This analysis showed significant modulation of the putamen by the duration of simultaneity. Our findings suggest that, during SI, the caudate nucleus is implicated in the overarching selection and control of the lexico-semantic system, while the putamen is implicated in ongoing control of language output. These findings provide the first clear dissociation of specific dorsal striatum structures in polyglot language control, roles that are consistent with previously described involvement of these regions in nonlinguistic executive control.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., Taylor, K. J., & Carlyon, R. P. (2011). Generalization of Perceptual Learning of Vocoded Speech. Journal of Experimental Psychology: Human Perception and Performance, 37(1), 283-295. doi:10.1037/a0020772.

    Abstract

    Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naive intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
  • Hervais-Adelman, A., Legrand, L. B., Zhan, M. Y., Tamietto, M., de Gelder, B., & Pegna, A. J. (2015). Looming sensitive cortical regions without V1 input: Evidence from a patient with bilateral cortical blindness. Frontiers in Integrative Neuroscience, 9: 51. doi:10.3389/fnint.2015.00051.

    Abstract

    Fast and automatic behavioral responses are required to avoid collision with an approaching stimulus. Accordingly, looming stimuli have been found to be highly salient and efficient attractors of attention due to the implication of potential collision and potential threat. Here, we address the question of whether looming motion is processed in the absence of any functional primary visual cortex and consequently without awareness. For this, we investigated a patient (TN) suffering from complete, bilateral damage to his primary visual cortex. Using an fMRI paradigm, we measured TN's brain activation during the presentation of looming, receding, rotating, and static point lights, of which he was unaware. When contrasted with other conditions, looming was found to produce bilateral activation of the middle temporal areas, as well as the superior temporal sulcus and inferior parietal lobe (IPL). The latter are generally thought to be involved in multisensory processing of motion in extrapersonal space, as well as attentional capture and saliency. No activity was found close to the lesioned V1 area. This demonstrates that looming motion is processed in the absence of awareness through direct subcortical projections to areas involved in multisensory processing of motion and saliency that bypass V1.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2011). Executive control of language in the bilingual brain: Integrating the evidence from neuroimaging to neuropsychology. Frontiers in Psychology, 2: 234. doi:10.3389/fpsyg.2011.00234.

    Abstract

    In this review we will focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We will also offer insights from ongoing brain-imaging studies into the development of expertise in multilingual language control. We will concentrate specifically on evidence regarding how the brain selects and controls languages for comprehension and production. This question has been addressed in a number of ways and using various tasks, including language switching during production or perception, translation, and interpretation. We will attempt to synthesize existing evidence in order to bring to light the neural substrates that are crucial to executive control of language.
  • Hibar, D. P., Stein, J. L., Renteria, M. E., Arias-Vasquez, A., Desrivières, S., Jahanshad, N., Toro, R., Wittfeld, K., Abramovic, L., Andersson, M., Aribisala, B. S., Armstrong, N. J., Bernard, M., Bohlken, M. M., Boks, M. P., Bralten, J., Brown, A. A., Chakravarty, M. M., Chen, Q., Ching, C. R. K., and 267 more (2015). Common genetic variants influence human subcortical brain structures. Nature, 520, 224-229. doi:10.1038/nature14101.

    Abstract

    The highly complex structure of the human brain is strongly shaped by genetic influences. Subcortical brain regions form circuits with cortical areas to coordinate movement, learning, memory and motivation, and altered circuits can lead to abnormal behaviour and disease. To investigate how common genetic variants affect the structure of these brain regions, here we conduct genome-wide association studies of the volumes of seven subcortical regions and the intracranial volume derived from magnetic resonance images of 30,717 individuals from 50 cohorts. We identify five novel genetic variants influencing the volumes of the putamen and caudate nucleus. We also find stronger evidence for three loci with previously established influences on hippocampal volume and intracranial volume. These variants show specific volumetric effects on brain structures rather than global effects across structures. The strongest effects were found for the putamen, where a novel intergenic locus with replicable influence on volume (rs945270; P = 1.08 × 10⁻³³; 0.52% variance explained) showed evidence of altering the expression of the KTN1 gene in both brain and blood tissue. Variants influencing putamen volume clustered near developmental genes that regulate apoptosis, axon guidance and vesicle transport. Identification of these genetic variants provides insight into the causes of variability in human brain development, and may help to determine mechanisms of neuropsychiatric dysfunction.

  • Hilbrink, E., Gattis, M., & Levinson, S. C. (2015). Early developmental changes in the timing of turn-taking: A longitudinal study of mother-infant interaction. Frontiers in Psychology, 6: 1492. doi:10.3389/fpsyg.2015.01492.

    Abstract

    To accomplish a smooth transition in conversation from one speaker to the next, a tight coordination of interaction between speakers is required. Recent studies of adult conversation suggest that this close timing of interaction may well be a universal feature of conversation. In the present paper, we set out to assess the development of this close timing of turns in infancy in vocal exchanges between mothers and infants. Previous research has demonstrated an early sensitivity to timing in interactions (e.g. Murray & Trevarthen, 1985). In contrast, less is known about infants’ abilities to produce turns in a timely manner and existing findings are rather patchy. We conducted a longitudinal study of twelve mother-infant dyads in free-play interactions at the ages of 3, 4, 5, 9, 12 and 18 months. Based on existing work and the predictions made by the Interaction Engine Hypothesis (Levinson, 2006), we expected that infants would begin to develop the temporal properties of turn-taking early in infancy but that their timing of turns would slow down at 12 months, which is around the time when infants start to produce their first words. Findings were consistent with our predictions: Infants were relatively fast at timing their turn early in infancy but slowed down towards the end of the first year. Furthermore, the changes observed in infants’ turn-timing skills were not caused by changes in maternal timing, which remained stable across the 3-18 month period. However, the slowing down of turn-timing started somewhat earlier than predicted: at 9 months.
  • Hill, C. (2011). Named and unnamed spaces: Color, kin and the environment in Umpila. The Senses & Society, 6(1), 57-67. doi:10.2752/174589311X12893982233759.

    Abstract

    Imagine describing the particular characteristics of the hue of a flower, or the quality of its scent, or the texture of its petal. Introspection suggests the expression of such sensory experiences in words is something quite different than the task of naming artifacts. The particular challenges in the linguistic encoding of sensorial experiences pose questions regarding how languages manage semantic gaps and “ineffability.” That is, what strategies do speakers have available to manage phenomena or domains of experience that are inexpressible or difficult to express in their language? This article considers this issue with regard to color in Umpila, an Aboriginal Australian language of the Paman family. The investigation of color naming and ineffability in Umpila reveals rich associations and mappings between color and visual perceptual qualities more generally, categorization of the human social world, and the environment. “Gaps” in the color system are filled or supported by associations with two of the most linguistically and culturally salient domains for Umpila: kinship and the environment.
  • Hintz, F., & Meyer, A. S. (2015). Prediction and production of simple mathematical equations: Evidence from anticipatory eye movements. PLoS One, 10(7): e0130766. doi:10.1371/journal.pone.0130766.

    Abstract

    The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.

  • Hoey, E. (2015). Lapses: How people arrive at, and deal with, discontinuities in talk. Research on Language and Social Interaction, 48(4), 430-453. doi:10.1080/08351813.2015.1090116.

    Abstract

    Interaction includes moments of silence. When all participants forgo the option to speak, the silence can be called a “lapse.” This article builds on existing work on lapses and other kinds of silences (gaps, pauses, and so on) to examine how participants reach a point where lapsing is a possibility and how they orient to the lapse that subsequently develops. Drawing from a wide range of activities and settings, I will show that participants may treat lapses as (a) the relevant cessation of talk, (b) the allowable development of silence, or (c) the conspicuous absence of talk. Data are in American and British English.
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (2015). Editorial: Turn-taking in human communicative interaction. Frontiers in Psychology, 6: 1919. doi:10.3389/fpsyg.2015.01919.
  • Holler, J., Shovelton, H., & Beattie, G. (2009). Do iconic gestures really contribute to the semantic information communicated in face-to-face interaction? Journal of Nonverbal Behavior, 33, 73-88.
  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., & Wilkin, K. (2009). Communicating common ground: how mutually shared knowledge influences the representation of semantic information in speech and gesture in a narrative task. Language and Cognitive Processes, 24, 267-289.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
    Highlights

    ► Gesture rate remains constant in response to addressee feedback when the response aims to correct or clarify understanding.
    ► But gesture rate decreases when speakers provide confirmatory responses to feedback signalling correct understanding.
    ► Gestures are more communicative in response to addressee feedback, particularly in terms of precision, size and visual prominence.
    ► Speakers make gestures in response to addressee feedback more salient by using deictic markers in speech and gaze.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Holler, J., & Kendrick, K. H. (2015). Unaddressed participants’ gaze in multi-person interaction: Optimizing recipiency. Frontiers in Psychology, 6: 98. doi:10.3389/fpsyg.2015.00098.

    Abstract

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogues to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation towards online processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question-response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze towards the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion [Behavioral coordination, mimicry and co-speech gesture in interaction]. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.

    Abstract

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.

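    The Holman et al. (2011) entry above rests on an algorithmic core: lexical similarity computed from Levenshtein (edit) distances over word lists, which is then calibrated against known divergence dates. A rough, self-contained sketch of that kind of edit-distance comparison follows; it is not the ASJP consortium's implementation, the length normalization and example word forms are illustrative assumptions, and the paper's date-calculation formula is omitted.

        # Illustrative sketch only: plain Levenshtein (edit) distance and a
        # length-normalized dissimilarity between two word forms. This stands in
        # for the kind of lexical comparison described in the ASJP entry above;
        # it is not the ASJP implementation.
        def levenshtein(a: str, b: str) -> int:
            """Minimum number of insertions, deletions and substitutions turning a into b."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    cost = 0 if ca == cb else 1
                    curr.append(min(prev[j] + 1,          # deletion
                                    curr[j - 1] + 1,      # insertion
                                    prev[j - 1] + cost))  # substitution
                prev = curr
            return prev[len(b)]

        def normalized_distance(a: str, b: str) -> float:
            """Edit distance divided by the length of the longer form (0 = identical)."""
            if not a and not b:
                return 0.0
            return levenshtein(a, b) / max(len(a), len(b))

        # Hypothetical transcriptions of the same concept in two related languages.
        print(normalized_distance("hant", "hand"))  # 0.25

    In the published method, such pairwise similarities are aggregated over a standardized word list for each pair of languages and then calibrated against historical, epigraphic, and archaeological divergence dates; that aggregation and calibration step is not sketched here.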
  • Hoogman, M., Aarts, E., Zwiers, M., Slaats-Willemse, D., Naber, M., Onnink, M., Cools, R., Kan, C., Buitelaar, J., & Franke, B. (2011). Nitric Oxide Synthase genotype modulation of impulsivity and ventral striatal activity in adult ADHD patients and healthy comparison subjects. American Journal of Psychiatry, 168, 1099-1106. doi:10.1176/appi.ajp.2011.10101446.

    Abstract

    Objective: Attention deficit hyperactivity disorder (ADHD) is a highly heritable disorder. The NOS1 gene encoding nitric oxide synthase is a candidate gene for ADHD and has been previously linked with impulsivity. In the present study, the authors investigated the effect of a functional variable number of tandem repeats (VNTR) polymorphism in NOS1 (NOS1 exon 1f-VNTR) on the processing of rewards, one of the cognitive deficits in ADHD. Method: A sample of 136 participants, consisting of 87 adult ADHD patients and 49 healthy comparison subjects, completed a reward-related impulsivity task. A total of 104 participants also underwent functional magnetic resonance imaging during a reward anticipation task. The effect of the NOS1 exon 1f-VNTR genotype on reward-related impulsivity and reward-related ventral striatal activity was examined. Results: ADHD patients had higher impulsivity scores and lower ventral striatal activity than healthy comparison subjects. The association between the short allele and increased impulsivity was confirmed. However, independent of disease status, homozygous carriers of the short allele of NOS1, the ADHD risk genotype, demonstrated higher ventral striatal activity than carriers of the other NOS1 VNTR genotypes. Conclusions: The authors suggest that the NOS1 genotype influences impulsivity and its relation with ADHD is mediated through effects on this behavioral trait. Increased ventral striatal activity related to NOS1 may be compensatory for effects in other brain regions.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Horschig, J. M., Smolders, R., Bonnefond, M., Schoffelen, J.-M., Van den Munckhof, P., Schuurman, P. R., Cools, R., Denys, D., & Jensen, O. (2015). Directed communication between nucleus accumbens and neocortex in humans is differentially supported by synchronization in the theta and alpha band. PLoS One, 10(9): e0138685. doi:10.1371/journal.pone.0138685.

    Abstract

    Here, we report evidence for oscillatory bi-directional interactions between the nucleus accumbens and the neocortex in humans. Six patients performed a demanding covert visual attention task while we simultaneously recorded brain activity from deep-brain electrodes implanted in the nucleus accumbens and the surface electroencephalogram (EEG). Both theta and alpha oscillations were strongly coherent with the frontal and parietal EEG during the task. Theta-band coherence increased during processing of the visual stimuli. Granger causality analysis revealed that the nucleus accumbens was communicating with the neocortex primarily in the theta-band, while the cortex was communicating with the nucleus accumbens in the alpha-band. These data are consistent with a model in which theta- and alpha-band oscillations serve dissociable roles: Prior to stimulus processing, the cortex might suppress ongoing processing in the nucleus accumbens by modulating alpha-band activity. Subsequently, upon stimulus presentation, theta oscillations might facilitate the active exchange of stimulus information from the nucleus accumbens to the cortex.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hribar, A., Haun, D. B. M., & Call, J. (2011). Great apes’ strategies to map spatial relations. Animal Cognition, 14, 511-523. doi:10.1007/s10071-011-0385-6.

    Abstract

    We investigated reasoning about spatial relational similarity in three great ape species: chimpanzees, bonobos, and orangutans. Apes were presented with three spatial mapping tasks in which they were required to find a reward in an array of three cups, after observing a reward being hidden in a different array of three cups. To obtain a food reward, apes needed to choose the cup that was in the same relative position (i.e., on the left) as the baited cup in the other array. The three tasks differed in the constellation of the two arrays. In Experiment 1, the arrays were placed next to each other, forming a line. In Experiment 2, the positioning of the two arrays varied each trial, being placed either one behind the other in two rows, or next to each other, forming a line. Finally, in Experiment 3, the two arrays were always positioned one behind the other in two rows, but misaligned. Results suggested that apes compared the two arrays and recognized that they were similar in some way. However, we believe that instead of mapping the left–left, middle–middle, and right–right cups from each array, they mapped the cups that shared the most similar relations to nearby landmarks (table’s visual boundaries).
  • Li, W., Li, X., Huang, L., Kong, X., Yang, W., Wei, D., Li, J., Cheng, H., Zhang, Q., Qiu, J., & Liu, J. (2015). Brain structure links trait creativity to openness to experience. Social Cognitive and Affective Neuroscience, 10(2), 191-198. doi:10.1093/scan/nsu041.

    Abstract

    Creativity is crucial to the progression of human civilization and has led to important scientific discoveries. Especially, individuals are more likely to have scientific discoveries if they possess certain personality traits of creativity (trait creativity), including imagination, curiosity, challenge and risk-taking. This study used voxel-based morphometry to identify the brain regions underlying individual differences in trait creativity, as measured by the Williams creativity aptitude test, in a large sample (n = 246). We found that creative individuals had higher gray matter volume in the right posterior middle temporal gyrus (pMTG), which might be related to semantic processing during novelty seeking (e.g. novel association, conceptual integration and metaphor understanding). More importantly, although basic personality factors such as openness to experience, extroversion, conscientiousness and agreeableness (as measured by the NEO Personality Inventory) all contributed to trait creativity, only openness to experience mediated the association between the right pMTG volume and trait creativity. Taken together, our results suggest that the basic personality trait of openness might play an important role in shaping an individual’s trait creativity.
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.

    Abstract

    We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
  • Huettig, F., & Brouwer, S. (2015). Delayed anticipatory spoken language processing in adults with dyslexia - Evidence from eye-tracking. Dyslexia, 21(2), 97-122. doi:10.1002/dys.1497.

    Abstract

    It is now well-established that anticipation of up-coming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here we investigated whether anticipatory spoken language processing is related to individuals’ word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target and thus participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.
  • Huettig, F. (2015). Four central questions about prediction in language processing. Brain Research, 1626, 118-135. doi:10.1016/j.brainres.2015.02.014.

    Abstract

    The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing.
  • Huettig, F., & Altmann, G. (2011). Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention. Quarterly Journal of Experimental Psychology, 64(1), 122-145. doi:10.1080/17470218.2010.481474.

    Abstract

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
  • Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137, 138-150. doi:10.1016/j.actpsy.2010.07.013.

    Abstract

    In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
  • Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Psychology, 2: e285. doi:10.3389/fpsyg.2011.00285.

    Abstract

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles every day behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2) but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts.
  • Hulten, A., Vihla, M., Laine, M., & Salmelin, R. (2009). Accessing newly learned names and meanings in the native language. Human Brain Mapping, 30, 979-989. doi:10.1002/hbm.20561.

    Abstract

    Ten healthy adults encountered pictures of unfamiliar archaic tools and successfully learned either their name, verbal definition of their usage, or both. Neural representation of the newly acquired information was probed with magnetoencephalography in an overt picture-naming task before and after learning, and in two categorization tasks after learning. Within 400 ms, activation proceeded from occipital through parietal to left temporal cortex, inferior frontal cortex (naming) and right temporal cortex (categorization). Comparison of naming of newly learned versus familiar pictures indicated that acquisition and maintenance of word forms are supported by the same neural network. Explicit access to newly learned phonology when such information was known strongly enhanced left temporal activation. By contrast, access to newly learned semantics had no comparable, direct neural effects. Both the behavioral learning pattern and neurophysiological results point to fundamentally different implementation of and access to phonological versus semantic features in processing pictured objects.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain areas are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2(255): 255. doi:10.3389/fpsyg.2011.00255.

    Abstract

    In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed, requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear, role of the inferior parietal cortex in word production.
  • Ingason, A., Rujescu, D., Cichon, S., Sigurdsson, E., Sigmundsson, T., Pietilainen, O. P. H., Buizer-Voskamp, J. E., Strengman, E., Francks, C., Muglia, P., Gylfason, A., Gustafsson, O., Olason, P. I., Steinberg, S., Hansen, T., Jakobsen, K. D., Rasmussen, H. B., Giegling, I., Möller, H.-J., Hartmann, A., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Bramon, E., Kiemeney, L. A., Franke, B., Murray, R., Vassos, E., Toulopoulou, T., Mühleisen, T. W., Tosato, S., Ruggeri, M., Djurovic, S., Andreassen, O. A., Zhang, Z., Werge, T., Ophoff, R. A., Rietschel, M., Nöthen, M. M., Petursson, H., Stefansson, H., Peltonen, L., Collier, D., Stefansson, K., & St Clair, D. M. (2011). Copy number variations of chromosome 16p13.1 region associated with schizophrenia. Molecular Psychiatry, 16, 17-25. doi:10.1038/mp.2009.101.

    Abstract

    Deletions and reciprocal duplications of the chromosome 16p13.1 region have recently been reported in several cases of autism and mental retardation (MR). As genomic copy number variants found in these two disorders may also associate with schizophrenia, we examined 4345 schizophrenia patients and 35 079 controls from 8 European populations for duplications and deletions at the 16p13.1 locus, using microarray data. We found a threefold excess of duplications and deletions in schizophrenia cases compared with controls, with duplications present in 0.30% of cases versus 0.09% of controls (P=0.007) and deletions in 0.12% of cases and 0.04% of controls (P>0.05). The region can be divided into three intervals defined by flanking low copy repeats. Duplications spanning intervals I and II showed the most significant (P=0.00010) association with schizophrenia. The age of onset in duplication and deletion carriers among cases ranged from 12 to 35 years, and the majority were males with a family history of psychiatric disorders. In a single Icelandic family, a duplication spanning intervals I and II was present in two cases of schizophrenia, and individual cases of alcoholism, attention deficit hyperactivity disorder and dyslexia. Candidate genes in the region include NTAN1 and NDE1. We conclude that duplications and perhaps also deletions of chromosome 16p13.1, previously reported to be associated with autism and MR, also confer risk of schizophrenia.
  • Isaac, A., Wang, S., Van der Meij, L., Schlobach, S., Zinn, C., & Matthezing, H. (2009). Evaluating thesaurus alignments for semantic interoperability in the library domain. IEEE Intelligent Systems, 24(2), 76-86.

    Abstract

    Thesaurus alignments play an important role in realising efficient access to heterogeneous Cultural Heritage data. Current technology, however, provides only limited value for such access as it fails to bridge the gap between theoretical study and user needs that stem from practical application requirements. In this paper, we explore common real-world problems of a library, and identify solutions that would greatly benefit from a more application embedded study, development, and evaluation of matching technology.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jaeger, T. F., & Norcliffe, E. (2009). The cross-linguistic study of sentence production. Language and Linguistics Compass, 3, 866-887. doi:10.1111/j.1749-818x.2009.00147.x.

    Abstract

    The mechanisms underlying language production are often assumed to be universal, and hence not contingent on a speaker’s language. This assumption is problematic for at least two reasons. Given the typological diversity of the world’s languages, only a small subset of languages has actually been studied psycholinguistically. And, in some cases, these investigations have returned results that at least superficially raise doubt about the assumption of universal production mechanisms. The goal of this paper is to illustrate the need for more psycholinguistic work on a typologically more diverse set of languages. We summarize cross-linguistic work on sentence production (specifically: grammatical encoding), focusing on examples where such work has improved our theoretical understanding beyond what studies on English alone could have achieved. But cross-linguistic research has much to offer beyond the testing of existing hypotheses: it can guide the development of theories by revealing the full extent of the human ability to produce language structures. We discuss the potential for interdisciplinary collaborations, and close with a remark on the impact of language endangerment on psycholinguistic research on understudied languages.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E. (2009). Neighbourhood density effects in auditory nonword processing in aphasic listeners. Clinical Linguistics and Phonetics, 23(3), 196-207. doi:10.1080/02699200802394989.

    Abstract

    This study investigates neighbourhood density effects on lexical decision performance (both accuracy and response times) of aphasic patients. Given earlier results on lexical activation and deactivation in Broca's and Wernicke's aphasia, the prediction was that smaller neighbourhood density effects would be found for Broca's aphasic patients, compared to age-matched non-brain-damaged control participants, whereas enlarged density effects were expected for Wernicke's aphasic patients. The results showed density effects for all three groups of listeners, and overall differences in performance between groups, but no significant interaction between neighbourhood density and listener group. Several factors are discussed to account for the present results.
  • Janse, E. (2009). Processing of fast speech by elderly listeners. Journal of the Acoustical Society of America, 125(4), 2361-2373. doi:10.1121/1.3082117.

    Abstract

    This study investigates the relative contributions of auditory and cognitive factors to the common finding that an increase in speech rate affects elderly listeners more than young listeners. Since a direct relation between non-auditory factors, such as age-related cognitive slowing, and fast speech performance has been difficult to demonstrate, the present study took an on-line, rather than off-line, approach and focused on processing time. Elderly and young listeners were presented with speech at two rates of time compression and were asked to detect pre-assigned target words as quickly as possible. A number of auditory and cognitive measures were entered in a statistical model as predictors of elderly participants’ fast speech performance: hearing acuity, an information processing rate measure, and two measures of reading speed. The results showed that hearing loss played a primary role in explaining elderly listeners’ increased difficulty with fast speech. However, non-auditory factors such as reading speed and the extent to which participants were affected by increased rate of presentation in a visual analog of the listening experiment also predicted fast speech performance differences among the elderly participants. These on-line results confirm that slowed information processing is indeed part of elderly listeners’ problem keeping up with fast language.
  • Janse, E., & Ernestus, M. (2009). Recognition of reduced speech and use of phonetic context in listeners with age-related hearing impairment [Abstract]. Journal of the Acoustical Society of America, 125(4), 2535.
  • Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
