Publications

  • Francks, C., Maegawa, S., Laurén, J., Abrahams, B. S., Velayos-Baeza, A., Medland, S. E., Colella, S., Groszer, M., McAuley, E. Z., Caffrey, T. M., Timmusk, T., Pruunsild, P., Koppel, I., Lind, P. A., Matsumoto-Itaba, N., Nicod, J., Xiong, L., Joober, R., Enard, W., Krinsky, B., Nanba, E., Richardson, A. J., Riley, B. P., Martin, N. G., Strittmatter, S. M., Möller, H.-J., Rujescu, D., St Clair, D., Muglia, P., Roos, J. L., Fisher, S. E., Wade-Martins, R., Rouleau, G. A., Stein, J. F., Karayiorgou, M., Geschwind, D. H., Ragoussis, J., Kendler, K. S., Airaksinen, M. S., Oshimura, M., DeLisi, L. E., & Monaco, A. P. (2007). LRRTM1 on chromosome 2p12 is a maternally suppressed gene that is associated paternally with handedness and schizophrenia. Molecular Psychiatry, 12, 1129-1139. doi:10.1038/sj.mp.4002053.

    Abstract

    Left-right asymmetrical brain function underlies much of human cognition, behavior and emotion. Abnormalities of cerebral asymmetry are associated with schizophrenia and other neuropsychiatric disorders. The molecular, developmental and evolutionary origins of human brain asymmetry are unknown. We found significant association of a haplotype upstream of the gene LRRTM1 (Leucine-rich repeat transmembrane neuronal 1) with a quantitative measure of human handedness in a set of dyslexic siblings, when the haplotype was inherited paternally (P=0.00002). While we were unable to find this effect in an epidemiological set of twin-based sibships, we did find that the same haplotype is overtransmitted paternally to individuals with schizophrenia/schizoaffective disorder in a study of 1002 affected families (P=0.0014). We then found direct confirmatory evidence that LRRTM1 is an imprinted gene in humans that shows a variable pattern of maternal downregulation. We also showed that LRRTM1 is expressed during the development of specific forebrain structures, and thus could influence neuronal differentiation and connectivity. This is the first potential genetic influence on human handedness to be identified, and the first putative genetic effect on variability in human brain asymmetry. LRRTM1 is a candidate gene for involvement in several common neurodevelopmental disorders, and may have played a role in human cognitive and behavioral evolution.
  • Francks, C. (2011). Leucine-rich repeat genes and the fine-tuning of synapses. Biological Psychiatry, 69, 820-821. doi:10.1016/j.biopsych.2010.12.018.
  • Francks, C. (2009). Understanding the genetics of behavioural and psychiatric traits will only be achieved through a realistic assessment of their complexity. Laterality: Asymmetries of Body, Brain and Cognition, 14(1), 11-16. doi:10.1080/13576500802536439.

    Abstract

    In a recent study, Francks et al. (2007) identified the first putative genetic effect on human handedness (the imprinted locus LRRTM1 on human chromosome 2). In this issue of Laterality, Tim Crow and colleagues present a critique of that study. The present paper is a personal response to that critique, arguing that Francks et al. (2007) published a substantial body of evidence implicating LRRTM1 in handedness and schizophrenia. Progress will now be achieved by others trying to validate, refute, or extend those findings, rather than by further armchair discussion.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2007). Coherence-driven resolution of referential ambiguity: A computational model. Memory & Cognition, 35(6), 1307-1322.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
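    A minimal sketch of the attractor dynamics the abstract describes, assuming a toy 2-D space, an invented update rule, and made-up parameters (none of this is the authors' implementation):

```python
import numpy as np

def resolve_ambiguity(centers, context_bias, steps=1000, rate=0.05, tol=0.05):
    """Toy attractor dynamics: an ambiguity vector is pulled toward
    interpretation 'centers of gravity'; context information biases the pull.
    Returns the index of the interpretation reached, or None (unresolved)."""
    v = centers.mean(axis=0)                      # start between interpretations
    for _ in range(steps):
        pulls = centers - v                       # attraction toward each center
        dists = np.linalg.norm(pulls, axis=1)
        weights = context_bias / (dists + 1e-9)   # context-favored, nearer centers pull harder
        v = v + rate * (weights[:, None] * pulls).sum(axis=0) / weights.sum()
        dists_after = np.linalg.norm(centers - v, axis=1)
        if dists_after.min() < tol:
            return int(dists_after.argmin())      # resolved to this interpretation
    return None                                   # insufficient disambiguating information

centers = np.array([[1.0, 0.0], [0.0, 1.0]])      # two candidate interpretations
print(resolve_ambiguity(centers, context_bias=np.array([2.0, 1.0])))
```

With equal context bias from the exact midpoint the pulls cancel and the function returns None, mirroring the model's prediction that an ambiguity can remain unresolved when disambiguating information is insufficient.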
  • French, C. A., Groszer, M., Preece, C., Coupe, A.-M., Rajewsky, K., & Fisher, S. E. (2007). Generation of mice with a conditional Foxp2 null allele. Genesis, 45(7), 440-446. doi:10.1002/dvg.20305.

    Abstract

    Disruptions of the human FOXP2 gene cause problems with articulation of complex speech sounds, accompanied by impairment in many aspects of language ability. The FOXP2/Foxp2 transcription factor is highly similar in humans and mice, and shows a complex conserved expression pattern, with high levels in neuronal subpopulations of the cortex, striatum, thalamus, and cerebellum. In the present study we generated mice in which loxP sites flank exons 12-14 of Foxp2; these exons encode the DNA-binding motif, a key functional domain. We demonstrate that early global Cre-mediated recombination yields a null allele, as shown by loss of the loxP-flanked exons at the RNA level and an absence of Foxp2 protein. Homozygous null mice display severe motor impairment, cerebellar abnormalities and early postnatal lethality, consistent with other Foxp2 mutants. When crossed to transgenic lines expressing Cre protein in a spatially and/or temporally controlled manner, these conditional mice will provide new insights into the contributions of Foxp2 to distinct neural circuits, and allow dissection of roles during development and in the mature brain.
  • Friederici, A. D., & Levelt, W. J. M. (1986). Cognitive processes of spatial coordinate assignment: On weighting perceptual cues. Naturwissenschaften, 73, 455-458.
  • Friederici, A., & Levelt, W. J. M. (1990). Spatial reference in weightlessness: Perceptual factors and mental representations. Perception and Psychophysics, 47, 253-266.

    Abstract

    The role of gravity in spatial coordinate assignment and the mental representation of space were studied in three experiments, systematically varying different perceptual cues: retinal, visual background, vestibular, and proprioceptive information. Verbal descriptions of visually presented arrays were required under different head positions (straight/tilted) and under different gravitational conditions (gravity present/gravity absent). The results of two experiments conducted with 2 subjects who participated in a space flight revealed that subjects are able to adequately assign positions in space in the absence of gravitational information, and that they do this by using their head-retinal coordinates as primary references. This indicates that they cognitively adapted to the perceptually new situation. The findings from a third experiment, conducted with a larger group of subjects under a condition in which gravitational information was present but irrelevant to the task being solved (subjects were in a horizontal supine position), show that subjects, in general, are flexible in using cues other than gravitational ones as references when the latter cannot serve as a referential system. These findings, together with the observation that consistent spatial assignment is possible even immediately after first exposure to the perceptually totally novel situation of weightlessness, suggest that the mental representation of space, onto which given perceptual information is mapped, is independent of a particular percept.
  • Friedlaender, J., Hunley, K., Dunn, M., Terrill, A., Lindström, E., Reesink, G., & Friedlaender, F. (2009). Linguistics more robust than genetics [Letter to the editor]. Science, 324, 464-465. doi:10.1126/science.324_464c.
  • Furman, R., & Ozyurek, A. (2007). Development of interactional discourse markers: Insights from Turkish children's and adults' narratives. Journal of Pragmatics, 39(10), 1742-1757. doi:10.1016/j.pragma.2007.01.008.

    Abstract

    Discourse markers (DMs) are linguistic elements that index different relations and coherence between units of talk (Schiffrin, Deborah, 1987. Discourse Markers. Cambridge University Press, Cambridge). Most research on the development of these forms has focused on conversations rather than narratives, and furthermore has not directly compared children's use of DMs to adult usage. This study examines the development of three DMs (şey ‘uuhh’, yani ‘I mean’, işte ‘y’know’) that mark interactional levels of discourse in oral Turkish narratives in 60 Turkish children (3-, 5- and 9-year-olds) and 20 Turkish-speaking adults. The results show that the frequency and functions of DMs change with age. Children learn şey, which mainly marks exchange-level structures, earliest. However, yani and işte have multiple functions, such as marking both information states and participation frameworks, and are consequently learned later. Children also use DMs with different functions than adults do. Overall, the results show that learning to use interactional DMs in narratives is complex and continues beyond age 9, especially for multi-functional DMs that index an interplay of discourse coherence at different levels.
  • Gaby, A. R. (2004). Extended functions of Thaayorre body part terms. Papers in Linguistics and Applied Linguistics, 4(2), 24-34.
  • Ganushchak, L. Y., Verdonschot, R. G., & Schiller, N. O. (2011). When leaf becomes neuter: Event related potential evidence for grammatical gender transfer in bilingualism. Neuroreport, 22(3), 106-110. doi:10.1097/WNR.0b013e3283427359.

    Abstract

    This study addressed the question of whether grammatical properties of a first language are transferred to a second language. Dutch-English bilinguals classified Dutch words in white print according to their grammatical gender, and colored words (i.e. Dutch common and neuter words, and their English translations) according to their color. Both classifications were made with the same hand (congruent trials) or with different hands (incongruent trials). Performance was more erroneous, and the error-related negativity was enhanced, on incongruent compared with congruent trials. This effect was independent of the language in which words were presented. These results provide evidence that bilinguals may transfer grammatical characteristics of their first language to a second language, even when such characteristics are absent in the grammar of the latter.
  • Ganushchak, L. Y., & Schiller, N. O. (2009). Speaking in one’s second language under time pressure: An ERP study on verbal self-monitoring in German-Dutch bilinguals. Psychophysiology, 46, 410-419. doi:10.1111/j.1469-8986.2008.00774.x.

    Abstract

    This study addresses how verbal self-monitoring and the Error-Related Negativity (ERN) are affected by time pressure when a task is performed in a second language as opposed to the native language. German-Dutch bilinguals were required to perform a phoneme-monitoring task in Dutch with and without a time pressure manipulation. We obtained an ERN following verbal errors that showed an atypical increase in amplitude under time pressure. This finding is taken to suggest that under time pressure participants had more interference from their native language, which in turn led to greater response conflict and thus an enhancement of the amplitude of the ERN. This result demonstrates once more that the ERN is sensitive to psycholinguistic manipulations and suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L. Y., Christoffels, I., & Schiller, N. (2011). The use of electroencephalography (EEG) in language production research: A review. Frontiers in Psychology, 2, 208. doi:10.3389/fpsyg.2011.00208.

    Abstract

    Research on speech production long avoided electrophysiological experiments, owing to the suspicion that artifacts caused by the muscle activity of overt speech may lead to a poor signal-to-noise ratio in the measurements. Therefore, researchers have sought to assess speech production by using indirect speech production tasks, such as tacit or implicit naming, delayed naming, or metalinguistic tasks, such as phoneme monitoring. Covert speech may, however, involve different processes than overt speech production. Recently, overt speech has been investigated using EEG, and the steadily rising number of published papers indicates the increasing interest in and demand for overt speech research within the cognitive neuroscience of language. Our main goal here is to review all currently available results of overt speech production involving EEG measurements, such as picture naming, Stroop naming, and reading aloud. We conclude that overt speech production can be successfully studied using electrophysiological measures, for instance, event-related brain potentials (ERPs). We discuss possible relevant components in the ERP waveform of speech production, aim to address how to interpret the results of ERP research using overt speech, and ask whether the ERP components in language production are comparable to results from other fields.
  • Garrido, L., Eisner, F., McGettigan, C., Stewart, L., Sauter, D., Hanley, J. R., Schweinberger, S. R., Warren, J. D., & Duchaine, B. (2009). Developmental phonagnosia: A selective deficit of vocal identity recognition. Neuropsychologia, 47(1), 123-131. doi:10.1016/j.neuropsychologia.2008.08.003.

    Abstract

    Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker’s vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.
  • Gazendam, L., Wartena, C., Malaise, V., Schreiber, G., De Jong, A., & Brugman, H. (2009). Automatic annotation suggestions for audiovisual archives: Evaluation aspects. Interdisciplinary Science Reviews, 34(2/3), 172-188. doi:10.1179/174327909X441090.

    Abstract

    In the context of large and ever growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time consuming and expensive task of manual annotation and it could help cataloguers attain a higher inter-annotator agreement. However, some questions arise in practice: what is the quality of the automatically produced annotations? How do they compare with manual annotations and with the requirements for annotation that were defined in the archive? If different from the manual annotations, are the automatic annotations wrong? In the CHOICE project, partially hosted at the Netherlands Institute for Sound and Vision, the Dutch public archive for audiovisual broadcasts, we automatically generate annotation suggestions for cataloguers. In this paper, we define three types of evaluation of these annotation suggestions: (1) a classic and strict evaluation measure expressing the overlap between automatically generated keywords and the manual annotations, (2) a loosened evaluation measure for which semantically very similar annotations are also considered as relevant matches, and (3) an in-use evaluation of the usefulness of manual versus automatic annotations in the context of serendipitous browsing. During serendipitous browsing, the annotations (manual or automatic) are used to retrieve and visualize semantically related documents.
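    The first two evaluation measures can be sketched in a few lines; the keywords and the "semantically similar" pairs below are invented for illustration and are not from the CHOICE project:

```python
def strict_overlap(auto, manual):
    """Classic measure: fraction of manual keywords matched exactly."""
    return len(set(auto) & set(manual)) / len(set(manual))

def loose_overlap(auto, manual, similar):
    """Loosened measure: a semantically very similar keyword also counts."""
    matched = sum(
        1 for m in manual
        if m in auto or any((m, a) in similar or (a, m) in similar for a in auto)
    )
    return matched / len(manual)

manual = ["election", "parliament", "prime minister"]   # cataloguer's keywords
auto = ["election", "government"]                       # generated suggestions
similar = {("parliament", "government")}                # thesaurus-style relation
print(strict_overlap(auto, manual))   # only 'election' matches exactly
print(loose_overlap(auto, manual, similar))
```

Here the loosened measure scores higher than the strict one, illustrating why an automatically generated annotation that differs from the manual one is not necessarily wrong.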
  • Gertz, J., Varley, K. E., Reddy, T. E., Bowling, K. M., Pauli, F., Parker, S. L., Kucera, K. S., Willard, H. F., & Myers, R. M. (2011). Analysis of DNA Methylation in a three-generation family reveals widespread genetic influence on epigenetic regulation. PLoS Genetics, 7, e1002228. doi:10.1371/journal.pgen.1002228.

    Abstract

    The methylation of cytosines in CpG dinucleotides is essential for cellular differentiation and the progression of many cancers, and it plays an important role in gametic imprinting. To assess variation and inheritance of genome-wide patterns of DNA methylation simultaneously in humans, we applied reduced representation bisulfite sequencing (RRBS) to somatic DNA from six members of a three-generation family. We observed that 8.1% of heterozygous SNPs are associated with differential methylation in cis, which provides a robust signature for Mendelian transmission and relatedness. The vast majority of differential methylation between homologous chromosomes (>92%) occurs on a particular haplotype as opposed to being associated with the gender of the parent of origin, indicating that genotype affects DNA methylation of far more loci than does gametic imprinting. We found that 75% of genotype-dependent differential methylation events in the family are also seen in unrelated individuals and that overall genotype can explain 80% of the variation in DNA methylation. These events are under-represented in CpG islands, enriched in intergenic regions, and located in regions of low evolutionary conservation. Even though they are generally not in functionally constrained regions, 22% (twice as many as expected by chance) of genes harboring genotype-dependent DNA methylation exhibited allele-specific gene expression as measured by RNA-seq of a lymphoblastoid cell line, indicating that some of these events are associated with gene expression differences. Overall, our results demonstrate that the influence of genotype on patterns of DNA methylation is widespread in the genome and greatly exceeds the influence of imprinting on genome-wide methylation patterns.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies using positron emission tomography have, during different experimental tasks, revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher-order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gisselgard, J., Uddén, J., Ingvar, M., & Petersson, K. M. (2007). Disruption of order information by irrelevant items: A serial recognition paradigm. Acta Psychologica, 124(3), 356-369. doi:10.1016/j.actpsy.2006.04.002.

    Abstract

    Irrelevant speech effect (ISE) is defined as a decrement in visually presented digit-list short-term memory performance due to exposure to irrelevant auditory material. Perhaps the most successful theoretical explanation of the effect is the changing state hypothesis. This hypothesis explains the effect in terms of confusion between amodal serial order cues, and represents a view based on the interference caused by the processing of similar order information of the visual and auditory materials. An alternative view suggests that the interference occurs as a consequence of the similarity between the visual and auditory contents of the stimuli. An important argument for the former view is the observation that ISE is almost exclusively observed in tasks that require memory for serial order. However, most short-term memory tasks require that both item and order information be retained in memory. An ideal task to investigate the sensitivity of maintenance of serial order to irrelevant speech would be one that calls upon order information but not item information. One task that is particularly suited to address this issue is serial recognition. In a typical serial recognition task, a list of items is presented and then probed by the same list in which the order of two adjacent items has been transposed. Due to the re-presentation of the encoding string, serial recognition requires primarily the serial order to be maintained while the content of the presented items is deemphasized. In demonstrating a highly significant ISE of changing versus steady-state auditory items in a serial recognition task, the present finding lends support for and extends previous empirical findings suggesting that irrelevant speech has the potential to interfere with the coding of the order of the items to be memorized.
  • Gisselgard, J., Petersson, K. M., & Ingvar, M. (2004). The irrelevant speech effect and working memory load. NeuroImage, 22, 1107-1116. doi:10.1016/j.neuroimage.2004.02.031.

    Abstract

    Irrelevant speech impairs the immediate serial recall of visually presented material. Previously, we have shown that the irrelevant speech effect (ISE) was associated with a relative decrease of regional blood flow in cortical regions subserving verbal working memory, in particular the superior temporal cortex. In this extension of the previous study, the working memory load was increased, and an increased activity in response to irrelevant speech was noted in the dorsolateral prefrontal cortex. We suggest that the two studies together provide some basic insights into the nature of the irrelevant speech effect. Firstly, no single area in the brain can be ascribed as the locus of the irrelevant speech effect. Instead, the functional neuroanatomical substrate of the effect can be characterized in terms of changes in networks of functionally interrelated areas. Secondly, the areas that are sensitive to the irrelevant speech effect are also generically activated by the verbal working memory task itself. Finally, the impact of irrelevant speech and the related brain activity depends on working memory load, as indicated by the differences between the present and the previous study. From a brain perspective, the irrelevant speech effect may represent a complex phenomenon that is a composite of several underlying mechanisms, which, depending on the working memory load, include top-down inhibition as well as recruitment of compensatory support and control processes. We suggest that, in the low-load condition, a selection process by inhibitory top-down modulation is sufficient, whereas in the high-load condition, at or above working memory span, auxiliary adaptive cognitive resources are recruited as compensation.
  • Glaser, B., Gunnell, D., Timpson, N. J., Joinson, C., Zammit, S., Smith, G. D., & Lewis, G. (2011). Age- and puberty-dependent association between IQ score in early childhood and depressive symptoms in adolescence. Psychological Medicine, 41(2), 333-343. doi:10.1017/S0033291710000814.

    Abstract

    BACKGROUND: Lower cognitive functioning in early childhood has been proposed as a risk factor for depression in later life, but its association with depressive symptoms during adolescence has rarely been investigated. Our study examines the relationship between total intelligence quotient (IQ) score at age 8 years and depressive symptoms at 11, 13, 14 and 17 years. METHOD: Study participants were 5250 children and adolescents from the Avon Longitudinal Study of Parents and their Children (ALSPAC), UK, for whom longitudinal data on depressive symptoms were available. IQ was assessed with the Wechsler Intelligence Scale for Children III, and self-reported depressive symptoms were measured with the Short Mood and Feelings Questionnaire (SMFQ). RESULTS: Multi-level analysis of continuous SMFQ scores showed that IQ at age 8 years was inversely associated with depressive symptoms at age 11 years, but the association changed direction by age 13 and 14 years (age-IQ interaction, p < 0.0001; age squared-IQ interaction, p < 0.0001), when a higher IQ score was associated with a higher risk of depressive symptoms. This change in the IQ effect was also found in relation to pubertal stage (pubertal stage-IQ interaction, p = 0.00049).
  • Glaser, B., & Holmans, P. (2009). Comparison of methods for combining case-control and family-based association studies. Human Heredity, 68(2), 106-116. doi:10.1159/000212503.

    Abstract

    OBJECTIVES: Combining the analysis of family-based samples with unrelated individuals can enhance the power of genetic association studies. Various combined analysis techniques have been recently developed; as yet, there have been no comparisons of their power, or robustness to confounding factors. We investigated empirically the power of up to six combined methods using simulated samples of trios and unrelated cases/controls (TDTCC), trios and unrelated controls (TDTC), and affected sibpairs with parents and unrelated cases/controls (ASPFCC). METHODS: We simulated multiplicative, dominant and recessive models with varying risk parameters in single samples. Additionally, we studied false-positive rates and investigated, if possible, the coverage of the true genetic effect (TDTCC). RESULTS/CONCLUSIONS: Under the TDTCC design, we identified four approaches with equivalent power and false-positive rates. Combined statistics were more powerful than single-sample statistics or a pooled chi(2)-statistic when risk parameters were similar in single samples. Adding parental information to the CC part of the joint likelihood increased the power of generalised logistic regression under the TDTC but not the TDTCC scenario. Formal testing of differences between risk parameters in subsamples was the most sensitive approach to avoid confounding in combined analysis. Non-parametric analysis based on Monte-Carlo testing showed the highest power for ASPFCC samples.
  • Glaser, B., Nikolov, I., Chubb, D., Hamshere, M. L., Segurado, R., Moskvina, V., & Holmans, P. (2007). Analyses of single marker and pairwise effects of candidate loci for rheumatoid arthritis using logistic regression and random forests. BMC Proceedings, 1(Suppl 1): 54.

    Abstract

    Using parametric and nonparametric techniques, our study investigated the presence of single-locus and pairwise effects between 20 markers of the Genetic Analysis Workshop 15 (GAW15) North American Rheumatoid Arthritis Consortium (NARAC) candidate gene data set (Problem 2), analyzing 463 independent patients and 855 controls. Specifically, our work examined the correspondence between logistic regression (LR) analysis of single-locus and pairwise interaction effects, and random forest (RF) single and joint importance measures. For this comparison, we selected small but stable RFs (500 trees), which showed strong correlations (r~0.98) between their importance measures and those of RFs grown on 5000 trees. Both RF importance measures captured most of the LR single-locus and pairwise interaction effects, while joint importance measures also corresponded to full LR models containing main and interaction effects. We furthermore showed that RF measures were particularly sensitive to data imputation. The most consistent pairwise effect on rheumatoid arthritis was found between two markers within MAP3K7IP2/SUMO4 on 6q25.1, although LR and RFs assigned different significance levels. Within a hypothetical two-stage design, pairwise LR analysis of all markers with significant RF single importance would have reduced the number of possible combinations in our small data set by 61%, whereas joint importance measures would have been less efficient for marker-pair reduction. This suggests that RF single importance measures, which are able to detect a wide range of interaction effects and are computationally very efficient, might be exploited as a pre-screening tool for larger association studies. Follow-up analysis, such as by LR, is required, since RFs do not indicate high-risk genotype combinations.
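    The hypothetical two-stage design (random-forest single importance as a pre-screen, followed by pairwise logistic regression) can be sketched on synthetic genotypes; the simulated effect sizes, scikit-learn settings, and top-3 cutoff below are illustrative assumptions, not the GAW15 analysis:

```python
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_markers = 600, 8
X = rng.integers(0, 3, size=(n, n_markers)).astype(float)  # genotypes coded 0/1/2
# simulate disease status driven by markers 0 and 1 (main effects + interaction)
logit = -1.0 + 0.6 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# stage 1: random-forest single importance as a pre-screen
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
keep = sorted(np.argsort(rf.feature_importances_)[-3:])    # retain top 3 markers

# stage 2: pairwise logistic regression (main effects + interaction term)
for i, j in combinations(keep, 2):
    Z = np.column_stack([X[:, i], X[:, j], X[:, i] * X[:, j]])
    lr = LogisticRegression(max_iter=1000).fit(Z, y)
    print(int(i), int(j), np.round(lr.coef_[0], 2))
```

Pre-screening cuts the pairwise tests from C(8, 2) = 28 to C(3, 2) = 3, which is the point of the design; as the abstract notes, the LR follow-up is still needed because RF importances do not identify high-risk genotype combinations.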
  • De Goede, D., Shapiro, L. P., Wester, F., Swinney, D. A., & Bastiaanse, Y. R. M. (2009). The time course of verb processing in Dutch sentences. Journal of Psycholinguistic Research, 38(3), 181-199. doi:10.1007/s10936-009-9117-3.

    Abstract

    The verb has traditionally been characterized as the central element in a sentence. Nevertheless, the exact role of the verb during the actual ongoing comprehension of a sentence as it unfolds in time remains largely unknown. This paper reports the results of two Cross-Modal Lexical Priming (CMLP) experiments detailing the pattern of verb priming during on-line processing of Dutch sentences. Results are contrasted with data from a third CMLP experiment on priming of nouns in similar sentences. It is demonstrated that the meaning of a matrix verb remains active throughout the entire matrix clause, while this is not the case for the meaning of a subject head noun. Activation of the meaning of the verb only dissipates upon encountering a clear signal as to the start of a new clause.
  • Gonzalez da Silva, C., Petersson, K. M., Faísca, L., Ingvar, M., & Reis, A. (2004). The effects of literacy and education on the quantitative and qualitative aspects of semantic verbal fluency. Journal of Clinical and Experimental Neuropsychology, 26(2), 266-277. doi:10.1076/jcen.26.2.266.28089.

    Abstract

    Semantic verbal fluency tasks are commonly used in neuropsychological assessment. Investigations of the influence of level of literacy have not yielded consistent results in the literature. This prompted us to investigate the ecological relevance of task specifics, in particular the choice of the semantic criteria used. Two groups of literate and illiterate subjects were compared on two verbal fluency tasks using different semantic criteria. Performance on a food criterion (supermarket fluency task), considered more ecologically relevant for the two literacy groups, was compared with performance on an animal criterion (animal fluency task). The data were analysed using both quantitative and qualitative measures. The quantitative analysis indicated that the two literacy groups performed equally well on the supermarket fluency task. In contrast, results differed significantly on the animal fluency task. The qualitative analyses indicated differences between groups related to the strategies used, especially with respect to the animal fluency task. The overall results suggest that there is no substantial difference between literate and illiterate subjects in the fundamental workings of semantic memory. However, there is an indication that the content of semantic memory reflects differences in shared cultural background (in other words, formal education), as indicated by the significant interaction between level of literacy and semantic criterion.
  • Goudbeek, M., Swingley, D., & Smits, R. (2009). Supervised and unsupervised learning of multidimensional acoustic categories. Journal of Experimental Psychology: Human Perception and Performance, 35, 1913-1933. doi:10.1037/a0015781.

    Abstract

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is more difficult but tractable under specific task conditions. In 2 experiments, adult participants learned either a unidimensional or a multidimensional category distinction with or without supervision (feedback) during learning. The unidimensional distinctions were readily learned and supervision proved beneficial, especially in maintaining category learning beyond the learning phase. Learning the multidimensional category distinction proved to be much more difficult and supervision was not nearly as beneficial as with unidimensionally defined categories. Maintaining a learned multidimensional category distinction was only possible when the distributional information that identified the categories remained present throughout the testing phase. We conclude that listeners are sensitive to both trial-by-trial feedback and the distributional information in the stimuli. Even given limited exposure, listeners learned to use 2 relevant dimensions, albeit with considerable difficulty.
  • Graham, S. A., Antonopoulos, A., Hitchen, P. G., Haslam, S. M., Dell, A., Drickamer, K., & Taylor, M. E. (2011). Identification of neutrophil granule glycoproteins as Lewisx-containing ligands cleared by the scavenger receptor C-type lectin. Journal of Biological Chemistry, 286, 24336-24349. doi:10.1074/jbc.M111.244772.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is a glycan-binding receptor that has the capacity to mediate endocytosis of glycoproteins carrying terminal Lewis(x) groups (Galβ1-4(Fucα1-3)GlcNAc). A screen for glycoprotein ligands for SRCL using affinity chromatography on immobilized SRCL followed by mass spectrometry-based proteomic analysis revealed that soluble glycoproteins from secondary granules of neutrophils, including lactoferrin and matrix metalloproteinases 8 and 9, are major ligands. Binding competition and surface plasmon resonance analysis showed affinities in the low micromolar range. Comparison of SRCL binding to neutrophil and milk lactoferrin indicates that the binding is dependent on cell-specific glycosylation in the neutrophils, as the milk form of the glycoprotein is a much poorer ligand. Binding to neutrophil glycoproteins is fucose dependent, and mass spectrometry-based glycomic analysis of neutrophil and milk lactoferrin was used to establish a correlation between high-affinity binding to SRCL and the presence of multiple, clustered terminal Lewis(x) groups on a heterogeneous mixture of branched glycans, some with poly-N-acetyllactosamine extensions. The ability of SRCL to mediate uptake of neutrophil lactoferrin was confirmed using fibroblasts transfected with SRCL. The common presence of Lewis(x) groups in granule protein glycans can thus target granule proteins for clearance by SRCL. PCR and immunohistochemical analyses confirm that SRCL is widely expressed on endothelial cells and thus represents a distributed system which could scavenge released neutrophil glycoproteins both locally at sites of inflammation and systemically when they are released into the circulation.

  • Graham, S. A., Jégouzo, S. A. F., Yan, S., Powlesland, A. S., Brady, J. P., Taylor, M. E., & Drickamer, K. (2009). Prolectin, a glycan-binding receptor on dividing B cells in germinal centers. The Journal of Biological Chemistry, 284, 18537-18544. doi:10.1074/jbc.M109.012807.

    Abstract

    Prolectin, a previously undescribed glycan-binding receptor, has been identified by re-screening of the human genome for genes encoding proteins containing potential C-type carbohydrate-recognition domains. Glycan array analysis revealed that the carbohydrate-recognition domain in the extracellular domain of the receptor binds glycans with terminal α-linked mannose or fucose residues. Prolectin expressed in fibroblasts is found at the cell surface, but unlike many glycan-binding receptors it does not mediate endocytosis of a neoglycoprotein ligand. However, compared with other known glycan-binding receptors, the receptor contains an unusually large intracellular domain that consists of multiple sequence motifs, including phosphorylated tyrosine residues, that allow it to interact with signaling molecules such as Grb2. Immunohistochemistry has been used to demonstrate that prolectin is expressed on a specialized population of proliferating B cells in germinal centers. Thus, this novel receptor has the potential to function in carbohydrate-mediated communication between cells in the germinal center.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin(1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions(2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train''(5) and the "entangled-bank''(6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Gretsch, P. (2004). What does finiteness mean to children? A cross-linguistic perspective on root infinitives. Linguistics, 42(2), 419-468. doi:10.1515/ling.2004.014.

    Abstract

    The discussion on root infinitives has mainly centered around their supposed modal usage. This article aims at modelling the form-function relation of the root infinitive phenomenon by taking into account the full range of interpretational facets encountered cross-linguistically and interindividually. Following the idea of a subsequent “cell partitioning” in the emergence of form-function correlations, I claim that it is the major fission between [±finite] which is central to express temporal reference different from the default here and now in tense-oriented languages. In aspectual-oriented languages, a similar opposition is mastered with the marking of early aspectual forms. It is observed that in tense-oriented languages like Dutch and German, the progression of functions associated with the infinitival form proceeds from nonmodal to modal, whereas the reverse progression holds for the Russian infinitive. Based on this crucial observation, a model of acquisition is proposed which allows for a flexible and systematic relationship between morphological forms and their respective interpretational biases dependent on their developmental context. As for early child language, I argue that children entertain only two temporal parameters: one parameter is fixed to the here-and-now point in time, and a second parameter relates to the time talked about, the topic time; this latter time overlaps the situation time as long as no empirical evidence exists to support the emergence of a proper distinction between tense and aspect.

  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Le Guen, O. (2011). Materiality vs. expressivity: The use of sensory vocabulary in Yucatec Maya. The Senses & Society, 6(1), 117-126. doi:10.2752/174589311X12893982233993.

    Abstract

    In this article, sensory vocabulary relating to color, texture, and other sensory experiences in Yucatec Maya (a language spoken in Mexico) is examined, and its possible relation to material culture practices explored. In Yucatec Maya, some perceptual experience can be expressed in a fine-grained way through a compact one-word adjective. Complex notions can be succinctly expressed by combining roots with a general meaning and applying templates or compounds to those sensory roots. For instance, the root tak’, which means ‘adhere/adherence,’ can be derived to express the notion of ‘dirty red’ chak-tak’-e’en or ‘sticky with an unbounded pattern’ tak’aknak, or the root ts’ap ‘piled-up’ can express ‘several tones of green (e.g. in the forest)’ ya’axts’ape’en or ‘piled-up, known through a tactile experience’ ts’aplemak. The productive nature of this linguistic system seems at first glance to be very well fitted to orient practices relating to the production of local material culture. In examining several hours of video-recorded natural data contrasting work and non-work directed interactions, it emerges that sensory vocabulary is not used for calibrating knowledge but is instead recruited by speakers to achieve vividness in an effort to verbally reproduce the way speakers experience percepts.
  • Le Guen, O. (2011). Modes of pointing to existing spaces and the use of frames of reference. Gesture, 11, 271-307. doi:10.1075/gest.11.3.02leg.

    Abstract

    This paper aims at providing a systematic framework for investigating differences in how people point to existing spaces. Pointing is considered according to two conditions: (1) A non-transposed condition where the body of the speaker always constitutes the origo and where the various types of pointing are differentiated by the status of the target and (2) a transposed condition where both the distant figure and the distant ground are identified and their relation specified according to two frames of reference (FoRs): the egocentric FoR (where spatial relationships are coded with respect to the speaker's point of view) and the geocentric FoR (where spatial relationships are coded in relation to external cues in the environment). The preference for one or the other frame of reference not only has consequences for pointing to real spaces but has some resonance in other domains, constraining the production of gesture in these related domains.
  • Le Guen, O. (2011). Speech and gesture in spatial language and cognition among the Yucatec Mayas. Cognitive Science, 35, 905-938. doi:10.1111/j.1551-6709.2011.01183.x.

    Abstract

    In previous analyses of the influence of language on cognition, speech has been the main channel examined. In studies conducted among Yucatec Mayas, efforts to determine the preferred frame of reference in use in this community have failed to reach an agreement (Bohnemeyer & Stolz, 2006; Levinson, 2003 vs. Le Guen, 2006, 2009). This paper argues for a multimodal analysis of language that encompasses gesture as well as speech, and shows that the preferred frame of reference in Yucatec Maya is only detectable through the analysis of co-speech gesture and not through speech alone. A series of experiments compares knowledge of the semantics of spatial terms, performance on nonlinguistic tasks and gestures produced by men and women. The results show a striking gender difference in the knowledge of the semantics of spatial terms, but an equal preference for a geocentric frame of reference in nonverbal tasks. In a localization task, participants used a variety of strategies in their speech, but they all exhibited a systematic preference for a geocentric frame of reference in their gestures.
  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    The central topic of this study is to investigate three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the Primary/Secondary Object pattern (Dryer 1986). Actually, Yaqui presents three patterns: verbs like nenka ‘sell’ follow the direct–indirect object pattern, verbs like miika ‘give’ follow the primary object pattern, and verbs like chijakta ‘sprinkle’ follow the locative alternation pattern; the primary object pattern is the exclusive one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two “object” selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Gullberg, M., & Kita, S. (2009). Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior, 33(4), 251-277. doi:10.1007/s10919-009-0073-2.

    Abstract

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Acquisition et Interaction en Langue Etrangère / Languages, Interaction, and Acquisition (formerly AILE), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners’ acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M., & Holmqvist, K. (1999). Keeping an eye on gestures: Visual perception of gestures in face-to-face communication. Pragmatics & Cognition, 7(1), 35-63. doi:10.1075/pc.7.1.04gul.

    Abstract

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research here reported employs eye tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that targetlike forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.

    Abstract

    During face-to-face communication, one does not only hear speech but also see a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication where the two modalities influence each other's interpretation. A gesture typically temporally overlaps with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony in the speech and gesture onsets are optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, gesture and speech were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time locked to speech onset showed a significant difference between semantically congruent versus incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span because of the fact that iconic gestures need speech to be disambiguated in a way relevant to the speech context.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1997). De rappe prater als gewoontedier [Review of the book Smooth talkers: The linguistic performance of auctioneers and sportscasters, by Koenraad Kuiper]. Psychologie, 16, 22-23.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie. Psychologie Magazine, 18, 35-36.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (1990). [Review of the book Neurolinguistics and linguistic aphasiology: An introduction by David Caplan]. Linguistics, 5, 1069-1073.
  • Hagoort, P., & Van Berkum, J. J. A. (2007). Beyond the sentence given. Philosophical Transactions of the Royal Society. Series B: Biological Sciences, 362, 801-811.

    Abstract

    A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which states that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. Since the domain of syntactic rules is the sentence, the implication of this idea is that language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, world knowledge, information about the speaker and semantic information from extra-linguistic domains such as co-speech gestures or the visual world. Here, we present results from recordings of event-related brain potentials that are inconsistent with this classical two-step model of language interpretation. Our data support a one-step model in which knowledge about the context and the world, concomitant information from other modalities, and the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. Underlying the one-step model is the immediacy assumption, according to which all available information will immediately be used to co-determine the interpretation of the speaker's message. Functional magnetic resonance imaging data that we collected indicate that Broca's area plays an important role in semantic unification. Language comprehension involves the rapid incorporation of information in a 'single unification space', coming from a broader range of cognitive domains than presupposed in the standard two-step model of interpretation.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (1989). Processing of lexical ambiguities: a comment on Milberg, Blumstein, and Dworetzky (1987). Brain and Language, 36, 335-348. doi:10.1016/0093-934X(89)90070-9.

    Abstract

    In a study by Milberg, Blumstein, and Dworetzky (1987), normal control subjects and Wernicke's and Broca's aphasics performed a lexical decision task on the third element of auditorily presented triplets of words with either a word or a nonword as target. In three of the four types of word triplets, the first and the third words were related to one or both meanings of the second word, which was semantically ambiguous. The fourth type of word triplet consisted of three unrelated, unambiguous words, functioning as baseline. Milberg et al. (1987) claim that the results for their control subjects are similar to those reported by Schvaneveldt, Meyer, and Becker's original study (1976) with the same prime types, and so interpret these as evidence for a selective lexical access of the different meanings of ambiguous words. It is argued here that Milberg et al. only partially replicate the Schvaneveldt et al. results. Moreover, the results of Milberg et al. are not fully in line with the selective access hypothesis adopted. Replication of the Milberg et al. (1987) study with Dutch materials, using both a design without and a design with repetition of the same target words for the same subjects led to the original pattern as reported by Schvaneveldt et al. (1976). In the design with four separate presentations of the same target word, a strong repetition effect was found. It is therefore argued that the discrepancy between the Milberg et al. results on the one hand, and the Schvaneveldt et al. results on the other, might be due to the absence of a control for repetition effects in the within-subject design used by Milberg et al. It is concluded that this makes the results for both normal and aphasic subjects in the latter study difficult to interpret in terms of a selective access model for normal processing.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P. (1997). Semantic priming in Broca's aphasics at a short SOA: No support for an automatic access deficit. Brain and Language, 56, 287-300. doi:10.1006/brln.1997.1849.

    Abstract

    This study tests the recent claim that Broca’s aphasics are impaired in automatic lexical access, including the retrieval of word meaning. Subjects are required to perform a lexical decision on visually presented prime target pairs. Half of the word targets are preceded by a related word, half by an unrelated word. Primes and targets are presented with a long stimulus-onset-asynchrony (SOA) of 1400 msec and with a short SOA of 300 msec. Normal priming effects are observed in Broca’s aphasics for both SOAs. This result is discussed in the context of the claim that Broca’s aphasics suffer from an impairment in the automatic access of lexical–semantic information. It is argued that none of the current priming studies provides evidence supporting this claim, since with short SOAs priming effects have been reliably obtained in Broca’s aphasics. The results are more compatible with the claim that in many Broca’s aphasics the functional locus of their comprehension deficit is at the level of postlexical integration processes.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P., & Levelt, W. J. M. (2009). The speaking brain. Science, 326(5951), 372-373. doi:10.1126/science.1181675.

    Abstract

    How does intention to speak become the action of speaking? It involves the generation of a preverbal message that is tailored to the requirements of a particular language, and through a series of steps, the message is transformed into a linear sequence of speech sounds (1, 2). These steps include retrieving different kinds of information from memory (semantic, syntactic, and phonological), and combining them into larger structures, a process called unification. Despite general agreement about the steps that connect intention to articulation, there is no consensus about their temporal profile or the role of feedback from later steps (3, 4). In addition, since the discovery by the French physician Pierre Paul Broca (in 1865) of the role of the left inferior frontal cortex in speaking, relatively little progress has been made in understanding the neural infrastructure that supports speech production (5). One reason is that the characteristics of natural language are uniquely human, and thus the neurobiology of language lacks an adequate animal model. But on page 445 of this issue, Sahin et al. (6) demonstrate, by recording neuronal activity in the human brain, that different kinds of linguistic information are indeed sequentially processed within Broca's area.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of german words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using [15O]butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning.
    The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca's area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hagoort, P. (1997). Valt er nog te lachen zonder de rechter hersenhelft? Psychologie, 16, 52-55.
  • Hald, L. A., Steenbeek-Planting, E. G., & Hagoort, P. (2007). The interaction of discourse context and world knowledge in online sentence comprehension: Evidence from the N400. Brain Research, 1146, 210-218. doi:10.1016/j.brainres.2007.02.054.

    Abstract

    In an ERP experiment we investigated how the recruitment and integration of world knowledge information relate to the integration of information within a current discourse context. Participants were presented with short discourse contexts which were followed by a sentence that contained a critical word that was correct or incorrect based on general world knowledge and the supporting discourse context, or was more or less acceptable based on the combination of general world knowledge and the specific local discourse context. Relative to the critical word in the correct world knowledge sentences following a neutral discourse, all other critical words elicited an N400 effect that began at about 300 ms after word onset. However, the magnitude of the N400 effect varied in a way that suggests an interaction between world knowledge and discourse context. The results indicate that both world knowledge and discourse context have an effect on sentence interpretation, but neither overrides the other.
  • Haller, S., Klarhoefer, M., Schwarzbach, J., Radue, E. W., & Indefrey, P. (2007). Spatial and temporal analysis of fMRI data on word and sentence reading. European Journal of Neuroscience, 26(7), 2074-2084. doi:10.1111/j.1460-9568.2007.05816.x.

    Abstract

    Written language comprehension at the word and the sentence level was analysed by the combination of spatial and temporal analysis of functional magnetic resonance imaging (fMRI). Spatial analysis was performed via general linear modelling (GLM). Concerning the temporal analysis, local differences in neurovascular coupling may confound a direct comparison of blood oxygenation level-dependent (BOLD) response estimates between regions. To avoid this problem, we parametrically varied linguistic task demands and compared only task-induced within-region BOLD response differences across areas. We reasoned that, in a hierarchical processing system, increasing task demands at lower processing levels induce delayed onset of higher-level processes in corresponding areas. The flow of activation is thus reflected in the size of task-induced delay increases. We estimated BOLD response delay and duration for each voxel and each participant by fitting a model function to the event-related average BOLD response. The GLM showed increasing activations with increasing linguistic demands dominantly in the left inferior frontal gyrus (IFG) and the left superior temporal gyrus (STG). The combination of spatial and temporal analysis allowed a functional differentiation of IFG subregions involved in written language comprehension. Ventral IFG region (BA 47) and STG subserve earlier processing stages than two dorsal IFG regions (BA 44 and 45). This is in accordance with the assumed early lexical semantic and late syntactic processing of these regions and illustrates the complementary information provided by spatial and temporal fMRI data analysis of the same data set.
  • Hammarström, H. (2011). A note on the Maco (Piaroan) language of the lower Ventuari, Venezuela. Cadernos de Etnolingüística, 3(1), 1-11. Retrieved from http://www.etnolinguistica.org/issue:vol3n1.

    Abstract

    The present paper seeks to clarify the position of the Maco [wpc] language of the lower Ventuari, Venezuela, since there has been some uncertainty in the literature on this matter. Maco-Ventuari, not to be confused with other languages with a similar name, is so far poorly documented, but the present paper shows that it is nevertheless possible to establish that it is a dialect of Piaroa or a language closely related to Piaroa.
  • Hammarström, H., & Nordhoff, S. (2011). LangDoc: Bibliographic infrastructure for linguistic typology. Oslo Studies in Language, 3(2), 31-43. Retrieved from https://www.journals.uio.no/index.php/osla/article/view/75.

    Abstract

    The present paper describes the ongoing project LangDoc to make a bibliography website for linguistic typology, with a near-complete database of references to documents that contain descriptive data on the languages of the world. This is intended to provide typologists with a more precise and comprehensive way to search for information on languages, and for the specific kind of information that they are interested in. The annotation scheme devised is a trade-off between annotation effort and search desiderata. The end goal is a website with browse, search, update, new-items subscription, and download facilities, which can hopefully be enriched by spontaneous collaborative efforts.
  • Hammarström, H., & Borin, L. (2011). Unsupervised learning of morphology. Computational Linguistics, 37(2), 309-350. doi:10.1162/COLI_a_00050.

    Abstract

    This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only morpheme segmentation) of how orthographic words are built up given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
  • Hammond, J. (2011). JVC GY-HM100U HD video camera and FFmpeg libraries [Technology review]. Language Documentation and Conservation, 5, 69-80.
  • Hamshere, M. L., Segurado, R., Moskvina, V., Nikolov, I., Glaser, B., & Holmans, P. A. (2007). Large-scale linkage analysis of 1302 affected relative pairs with rheumatoid arthritis. BMC Proceedings, 1 (Suppl 1), S100.

    Abstract

    Rheumatoid arthritis is the most common systemic autoimmune disease and its etiology is believed to have both strong genetic and environmental components. We demonstrate the utility of including genetic and clinical phenotypes as covariates within a linkage analysis framework to search for rheumatoid arthritis susceptibility loci. The raw genotypes of 1302 affected relative pairs were combined from four large family-based samples (North American Rheumatoid Arthritis Consortium, United Kingdom, European Consortium on Rheumatoid Arthritis Families, and Canada). The familiality of the clinical phenotypes was assessed. The affected relative pairs were subjected to autosomal multipoint affected relative-pair linkage analysis. Covariates were included in the linkage analysis to take account of heterogeneity within the sample. Evidence of familiality was observed with age at onset (p < 0.001) and rheumatoid factor (RF) IgM (p < 0.001), but not definite erosions (p = 0.21). Genome-wide significant evidence for linkage was observed on chromosome 6. Genome-wide suggestive evidence for linkage was observed on chromosomes 13 and 20 when conditioning on age at onset, chromosome 15 conditional on gender, and chromosome 19 conditional on RF IgM after allowing for multiple testing of covariates.
  • Hanulikova, A., Mitterer, H., & McQueen, J. M. (2011). Effects of first and second language on segmentation of non-native speech. Bilingualism: Language and Cognition, 14, 506-521. doi:10.1017/S1366728910000428.

    Abstract

    We examined whether Slovak-German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech. When Slovaks listen to their native language (Hanulíková, McQueen, & Mitterer, 2010), segmentation is impaired when fixed-stress cues are absent, and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose "rose") faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized Rose, for example, faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
  • Hanulová, J., Davidson, D. J., & Indefrey, P. (2011). Where does the delay in L2 picture naming come from? Psycholinguistic and neurocognitive evidence on second language word production. Language and Cognitive Processes, 26, 902-934. doi:10.1080/01690965.2010.509946.

    Abstract

    Bilinguals are slower when naming a picture in their second language than when naming it in their first language. Although the phenomenon has been frequently replicated, it is not known what causes the delay in the second language. In this article we discuss at what processing stages a delay might arise according to current models of bilingual processing and how the available behavioural and neurocognitive evidence relates to these proposals. Suggested plausible mechanisms, such as frequency or interference effects, are compatible with a naming delay arising at different processing stages. Haemodynamic and electrophysiological data seem to point to a postlexical stage but are still too scarce to support a definite conclusion.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (Eds.). (2011). Visual search and visual world: Interactions among visual attention, language, and working memory [Special Issue]. Acta Psychologica, 137(2). doi:10.1016/j.actpsy.2011.01.005.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (2011). Visual search and visual world: Interactions among visual attention, language, and working memory (introduction to the special issue). Acta Psychologica, 137(2), 135-137. doi:10.1016/j.actpsy.2011.01.005.
  • Haun, D. B. M., & Tomasello, M. (2011). Conformity to peer pressure in preschool children. Child Development, 82, 1759-1767. doi:10.1111/j.1467-8624.2011.01666.x.

    Abstract

    Both adults and adolescents often conform their behavior and opinions to peer groups, even when they themselves know better. The current study investigated this phenomenon in 24 groups of 4 children between 4;2 and 4;9 years of age. Children often made their judgments conform to those of 3 peers, who had made obviously erroneous but unanimous public judgments right before them. A follow-up study with 18 groups of 4 children between 4;0 and 4;6 years of age revealed that children did not change their “real” judgment of the situation, but only their public expression of it. Preschool children are subject to peer pressure, indicating sensitivity to peers as a primary social reference group already during the preschool years.
  • Haun, D. B. M. (2011). Memory for body movements in Namibian hunter-gatherer children. Journal of Cognitive Education and Psychology, 10, 56-62.

    Abstract

    Despite the global universality of physical space, different cultural groups vary substantially as to how they memorize it. Although European participants mostly prefer egocentric strategies (“left, right, front, back”) to memorize spatial relations, others use mostly allocentric strategies (“north, south, east, west”). Prior research has shown that some cultures show a general preference to memorize object locations and even also body movements in relation to the larger environment rather than in relation to their own body. Here, we investigate whether this cultural bias also applies to movements specifically directed at the participants' own body, emphasizing the role of ego. We show that even participants with generally allocentric biases preferentially memorize self-directed movements using egocentric spatial strategies. These results demonstrate an intricate system of interacting cultural biases and momentary situational characteristics.
  • Haun, D. B. M., & Call, J. (2009). Great apes’ capacities to recognize relational similarity. Cognition, 110, 147-159. doi:10.1016/j.cognition.2008.10.012.

    Abstract

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various objects to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization and understanding analogy and metaphor. In the current study, we investigated the ability to recognize relational similarities by testing five species of great apes, including human children, in a spatial task. We found that all species performed better if related elements are connected by logico-causal as opposed to non-causal relations. Further, we find that only children above 4 years of age, bonobos and chimpanzees, unlike younger children, gorillas and orangutans, display some mastery of reasoning by non-causal relational similarity. We conclude that recognizing relational similarity is not in its entirety unique to the human species. The lack of a capability for language does not prohibit recognition of simple relational similarities. The data are discussed in the light of the phylogenetic tree of relatedness of the great apes.
  • Haun, D. B. M., Nawroth, C., & Call, J. (2011). Great apes’ risk-taking strategies in a decision making task. PLoS One, 6(12), e28801. doi:10.1371/journal.pone.0028801.

    Abstract

    We investigate decision-making behaviour in all four non-human great ape species. Apes chose between a safe and a risky option across trials of varying expected values. All species chose the safe option more often with decreasing probability of success. While all species were risk-seeking, orangutans and chimpanzees chose the risky option more often than gorillas and bonobos. Hence all four species' preferences were ordered in a manner consistent with normative dictates of expected value, but varied predictably in their willingness to take risks.
  • Haun, D. B. M., Rapold, C. J., Janzen, G., & Levinson, S. C. (2011). Plasticity of human spatial memory: Spatial language and cognition covary across cultures. Cognition, 119, 70-80. doi:10.1016/j.cognition.2010.12.009.

    Abstract

    The present paper explores cross-cultural variation in spatial cognition by comparing spatial reconstruction tasks by Dutch and Namibian elementary school children. These two communities differ in the way they predominantly express spatial relations in language. Four experiments investigate cognitive strategy preferences across different levels of task-complexity and instruction. Data show a correlation between dominant linguistic spatial frames of reference and performance patterns in non-linguistic spatial memory tasks. This correlation is shown to be stable across an increase of complexity in the spatial array. When instructed to use their respective non-habitual cognitive strategy, participants were not easily able to switch between strategies and their attempts to do so impaired their performance. These results indicate a difference not only in preference but also in competence and suggest that spatial language and non-linguistic preferences and competences in spatial cognition are systematically aligned across human populations.

  • Haun, D. B. M., & Rapold, C. J. (2009). Variation in memory for body movements across cultures. Current Biology, 19(23), R1068-R1069. doi:10.1016/j.cub.2009.10.041.

    Abstract

    There has been considerable controversy over the existence of cognitive differences across human cultures: some claim that human cognition is essentially universal [1,2], others that it reflects cultural specificities [3,4]. One domain of interest has been spatial cognition [5,6]. Despite the global universality of physical space, cultures vary as to how space is coded in their language. Some, for example, do not use egocentric ‘left, right, front, back’ constructions to code spatial relations, instead using allocentric notions like ‘north, south, east, west’ [4,6]: “The spoon is north of the bowl!” Whether or not spatial cognition also varies across cultures remains a contested question [7,8]. Here we investigate whether memory for movements of one's own body differs between cultures with contrastive strategies for coding spatial relations. Our results show that the ways in which we memorize movements of our own body differ in line with culture-specific preferences for how to conceive of spatial relations.
  • Havik, E., Roberts, L., Van Hout, R., Schreuder, R., & Haverkort, M. (2009). Processing subject-object ambiguities in L2 Dutch: A self-paced reading study with German L2 learners of Dutch. Language Learning, 59(1), 73-112. doi:10.1111/j.1467-9922.2009.00501.x.

    Abstract

    The results of two self-paced reading experiments are reported, which investigated the on-line processing of subject-object ambiguities in Dutch relative clause constructions like Dat is de vrouw die de meisjes heeft/hebben gezien by German advanced second language (L2) learners of Dutch. Native speakers of both Dutch and German have been shown to have a preference for a subject versus an object reading of such temporarily ambiguous sentences, and so we provided an ideal opportunity for the transfer of first language (L1) processing preferences to take place. We also investigated whether the participants' working memory span would affect their processing of the experimental items. The results suggest that processing decisions may be affected by working memory when task demands are high and in this case, the high working memory span learners patterned like the native speakers of lower working memory. However, when reading for comprehension alone, and when only structural information was available to guide parsing decisions, working memory span had no effect on the L2 learners' on-line processing, and this differed from the native speakers' even though the L1 and the L2 are highly comparable.
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Heeschen, C., Ryalls, J., & Hagoort, P. (1988). Psychological stress in Broca's versus Wernicke's aphasia. Clinical Linguistics & Phonetics, 2, 309-316. doi:10.3109/02699208808985262.

    Abstract

    We advance the hypothesis here that the higher-than-average vocal pitch (F0) found for speech of Broca's aphasics in experimental settings is due, in part, to increased psychological stress. Two experiments were conducted which manipulated conversational constraints and the sentence forms to be produced by aphasic patients. Our study revealed significant differences between changes in vocal pitch of agrammatic Broca's aphasics versus those of Wernicke's aphasics and normal controls. It is suggested that the greater psychological stress experienced by the Broca's aphasics, but not by the Wernicke's aphasics, accounts for these observed differences.
  • Henderson, L., Coltheart, M., Cutler, A., & Vincent, N. (1988). Preface. Linguistics, 26(4), 519-520. doi:10.1515/ling.1988.26.4.519.
  • Hendriks, L., Witteman, M. J., Frietman, L. C. G., Westerhof, G., Van Baaren, R. B., Engels, R. C. M. E., & Dijksterhuis, A. J. (2009). Imitation can reduce malnutrition in residents in assisted living facilities [Letter to the editor]. Journal of the American Geriatrics Society, 57(1), 187-188. doi:10.1111/j.1532-5415.2009.02074.x.
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., Taylor, K. J., & Carlyon, R. P. (2011). Generalization of perceptual learning of vocoded speech. Journal of Experimental Psychology: Human Perception and Performance, 37(1), 283-295. doi:10.1037/a0020772.

    Abstract

    Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naive intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2011). Executive control of language in the bilingual brain: Integrating the evidence from neuroimaging to neuropsychology. Frontiers in Psychology, 2: 234. doi:10.3389/fpsyg.2011.00234.

    Abstract

    In this review we will focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We will also offer insights from ongoing brain-imaging studies into the development of expertise in multilingual language control. We will concentrate specifically on evidence regarding how the brain selects and controls languages for comprehension and production. This question has been addressed in a number of ways and using various tasks, including language switching during production or perception, translation, and interpretation. We will attempt to synthesize existing evidence in order to bring to light the neural substrates that are crucial to executive control of language.
  • Hill, C. (2011). Named and unnamed spaces: Color, kin and the environment in Umpila. The Senses & Society, 6(1), 57-67. doi:10.2752/174589311X12893982233759.

    Abstract

    Imagine describing the particular characteristics of the hue of a flower, or the quality of its scent, or the texture of its petal. Introspection suggests the expression of such sensory experiences in words is something quite different than the task of naming artifacts. The particular challenges in the linguistic encoding of sensorial experiences pose questions regarding how languages manage semantic gaps and “ineffability.” That is, what strategies do speakers have available to manage phenomena or domains of experience that are inexpressible or difficult to express in their language? This article considers this issue with regard to color in Umpila, an Aboriginal Australian language of the Paman family. The investigation of color naming and ineffability in Umpila reveals rich associations and mappings between color and visual perceptual qualities more generally, categorization of the human social world, and the environment. “Gaps” in the color system are filled or supported by associations with two of the most linguistically and culturally salient domains for Umpila: kinship and the environment.
  • Holler, J., Shovelton, H., & Beattie, G. (2009). Do iconic gestures really contribute to the semantic information communicated in face-to-face interaction? Journal of Nonverbal Behavior, 33, 73-88.
  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., & Wilkin, K. (2009). Communicating common ground: how mutually shared knowledge influences the representation of semantic information in speech and gesture in a narrative task. Language and Cognitive Processes, 24, 267-289.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
    Highlights

    ► Gesture rate remains constant in response to addressee feedback when the response aims to correct or clarify understanding. ► But gesture rate decreases when speakers provide confirmatory responses to feedback signalling correct understanding. ► Gestures are more communicative in response to addressee feedback, particularly in terms of precision, size and visual prominence. ► Speakers make gestures in response to addressee feedback more salient by using deictic markers in speech and gaze.
  • Holler, J., & Stevens, R. (2007). The effect of common ground on how speakers use gesture and speech to represent size information. Journal of Language and Social Psychology, 26, 4-27.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.

    Abstract

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.
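    The similarity measure in point (3) above rests on the Levenshtein (edit) distance. As an illustrative sketch only (not the ASJP consortium's calibrated implementation, which applies further normalizations across whole word lists), the core computation can be written as follows; the function names here are hypothetical:

    ```python
    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance: the minimum number
        # of insertions, deletions, and substitutions needed to turn
        # string a into string b, computed row by row.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(
                    prev[j] + 1,               # deletion
                    curr[j - 1] + 1,           # insertion
                    prev[j - 1] + (ca != cb),  # substitution
                ))
            prev = curr
        return prev[-1]

    def normalized_distance(a: str, b: str) -> float:
        # Dividing by the longer word's length yields a value in [0, 1],
        # making word pairs of different lengths comparable.
        if not a and not b:
            return 0.0
        return levenshtein(a, b) / max(len(a), len(b))
    ```

    For example, `levenshtein("kitten", "sitting")` is 3 (two substitutions and one insertion), and the normalized distance is 3/7. Length normalization matters here because raw edit distances would systematically penalize longer words.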

  • Hoogman, M., Weisfelt, M., van de Beek, D., de Gans, J., & Schmand, B. (2007). Cognitive outcome in adults after bacterial meningitis. Journal of Neurology, Neurosurgery & Psychiatry, 78, 1092-1096. doi:10.1136/jnnp.2006.110023.

    Abstract

    Objective: To evaluate cognitive outcome in adult survivors of bacterial meningitis. Methods: Data from three prospective multicentre studies were pooled and reanalysed, involving 155 adults surviving bacterial meningitis (79 after pneumococcal and 76 after meningococcal meningitis) and 72 healthy controls. Results: Cognitive impairment was found in 32% of patients and this proportion was similar for survivors of pneumococcal and meningococcal meningitis. Survivors of pneumococcal meningitis performed worse on memory tasks (p<0.001) and tended to be cognitively slower than survivors of meningococcal meningitis (p = 0.08). We found a diffuse pattern of cognitive impairment in which cognitive speed played the most important role. Cognitive performance was not related to time since meningitis; however, there was a positive association between time since meningitis and self-reported physical impairment (p<0.01). The frequency of cognitive impairment and the numbers of abnormal test results for patients with and without adjunctive dexamethasone were similar. Conclusions: Adult survivors of bacterial meningitis are at risk of cognitive impairment, which consists mainly of cognitive slowness. The loss of cognitive speed is stable over time after bacterial meningitis; however, there is a significant improvement in subjective physical impairment in the years after bacterial meningitis. The use of dexamethasone was not associated with cognitive impairment.
  • Hoogman, M., Aarts, E., Zwiers, M., Slaats-Willemse, D., Naber, M., Onnink, M., Cools, R., Kan, C., Buitelaar, J., & Franke, B. (2011). Nitric Oxide Synthase genotype modulation of impulsivity and ventral striatal activity in adult ADHD patients and healthy comparison subjects. American Journal of Psychiatry, 168, 1099-1106. doi:10.1176/appi.ajp.2011.10101446.

    Abstract

    Objective: Attention deficit hyperactivity disorder (ADHD) is a highly heritable disorder. The NOS1 gene encoding nitric oxide synthase is a candidate gene for ADHD and has been previously linked with impulsivity. In the present study, the authors investigated the effect of a functional variable number of tandem repeats (VNTR) polymorphism in NOS1 (NOS1 exon 1f-VNTR) on the processing of rewards, one of the cognitive deficits in ADHD. Method: A sample of 136 participants, consisting of 87 adult ADHD patients and 49 healthy comparison subjects, completed a reward-related impulsivity task. A total of 104 participants also underwent functional magnetic resonance imaging during a reward anticipation task. The effect of the NOS1 exon 1f-VNTR genotype on reward-related impulsivity and reward-related ventral striatal activity was examined. Results: ADHD patients had higher impulsivity scores and lower ventral striatal activity than healthy comparison subjects. The association between the short allele and increased impulsivity was confirmed. However, independent of disease status, homozygous carriers of the short allele of NOS1, the ADHD risk genotype, demonstrated higher ventral striatal activity than carriers of the other NOS1 VNTR genotypes. Conclusions: The authors suggest that the NOS1 genotype influences impulsivity and its relation with ADHD is mediated through effects on this behavioral trait. Increased ventral striatal activity related to NOS1 may be compensatory for effects in other brain regions.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hribar, A., Haun, D. B. M., & Call, J. (2011). Great apes’ strategies to map spatial relations. Animal Cognition, 14, 511-523. doi:10.1007/s10071-011-0385-6.

    Abstract

    We investigated reasoning about spatial relational similarity in three great ape species: chimpanzees, bonobos, and orangutans. Apes were presented with three spatial mapping tasks in which they were required to find a reward in an array of three cups, after observing a reward being hidden in a different array of three cups. To obtain a food reward, apes needed to choose the cup that was in the same relative position (i.e., on the left) as the baited cup in the other array. The three tasks differed in the constellation of the two arrays. In Experiment 1, the arrays were placed next to each other, forming a line. In Experiment 2, the positioning of the two arrays varied each trial, being placed either one behind the other in two rows, or next to each other, forming a line. Finally, in Experiment 3, the two arrays were always positioned one behind the other in two rows, but misaligned. Results suggested that apes compared the two arrays and recognized that they were similar in some way. However, we believe that instead of mapping the left–left, middle–middle, and right–right cups from each array, they mapped the cups that shared the most similar relations to nearby landmarks (table’s visual boundaries).
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.