Publications

  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    This study investigates three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the Primary/Secondary Object pattern (Dryer 1986). In fact, Yaqui presents three patterns: verbs like nenka ‘sell’ follow the direct–indirect object pattern, verbs like miika ‘give’ follow the primary object pattern, and verbs like chijakta ‘sprinkle’ follow the locative alternation pattern; the primary object pattern is the only one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two “object” selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Guggenheim, J. A., St Pourcain, B., McMahon, G., Timpson, N. J., Evans, D. M., & Williams, C. (2015). Assumption-free estimation of the genetic contribution to refractive error across childhood. Molecular Vision, 21, 621-632. Retrieved from http://www.molvis.org/molvis/v21/621.

    Abstract

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias.
    Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404).
    The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old.
    Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects are expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M., & Holmqvist, K. (1999). Keeping an eye on gestures: Visual perception of gestures in face-to-face communication. Pragmatics & Cognition, 7(1), 35-63. doi:10.1075/pc.7.1.04gul.

    Abstract

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research reported here employs eye tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • Gupta, C. N., Calhoun, V. D., Rachkonda, S., Chen, J., Patel, V., Liu, J., Segall, J., Franke, B., Zwiers, M. P., Arias-Vasquez, A., Buitelaar, J., Fisher, S. E., Fernández, G., van Erp, T. G. M., Potkin, S., Ford, J., Matalon, D., McEwen, S., Lee, H. J., Mueller, B. A., Greve, D. N., Andreassen, O., Agartz, I., Gollub, R. L., Sponheim, S. R., Ehrlich, S., Wang, L., Pearlson, G., Glahn, D. S., Sprooten, E., Mayer, A. R., Stephen, J., Jung, R. E., Canive, J., Bustillo, J., & Turner, J. A. (2015). Patterns of gray matter abnormalities in schizophrenia based on an international mega-analysis. Schizophrenia Bulletin, 41(5), 1133-1142. doi:10.1093/schbul/sbu177.

    Abstract

    Analyses of gray matter concentration (GMC) deficits in patients with schizophrenia (Sz) have identified robust changes throughout the cortex. We assessed the relationships between diagnosis, overall symptom severity, and patterns of gray matter in the largest aggregated structural imaging dataset to date. We performed both source-based morphometry (SBM) and voxel-based morphometry (VBM) analyses on GMC images from 784 Sz and 936 controls (Ct) across 23 scanning sites in Europe and the United States. After correcting for age, gender, site, and diagnosis by site interactions, SBM analyses showed 9 patterns of diagnostic differences. They comprised separate cortical, subcortical, and cerebellar regions. Seven patterns showed greater GMC in Ct than Sz, while 2 (brainstem and cerebellum) showed greater GMC for Sz. The greatest GMC deficit was in a single pattern comprising regions in the superior temporal gyrus, inferior frontal gyrus, and medial frontal cortex, which replicated over analyses of data subsets. VBM analyses identified overall cortical GMC loss and one small cluster of increased GMC in Sz, which overlapped with the SBM brainstem component. We found no significant association between the component loadings and symptom severity in either analysis. This mega-analysis confirms that the commonly found GMC loss in Sz in the anterior temporal lobe, insula, and medial frontal lobe form a single, consistent spatial pattern even in such a diverse dataset. The separation of GMC loss into robust, repeatable spatial patterns across multiple datasets paves the way for the application of these methods to identify subtle genetic and clinical cohort effects.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies, subjects were required to read Dutch sentences that in some cases contained a syntactic violation and in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to influence the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (1999). De toekomstige eeuw zonder psychologie. Psychologie Magazine, 18, 35-36.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.

    Abstract

    The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process.
  • Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
  • Hagoort, P., Ramsey, N., Rutten, G.-J., & Van Rijen, P. (1999). The role of the left anterior temporal cortex in language processing. Brain and Language, 69, 322-325. doi:10.1006/brln.1999.2169.
  • Hagoort, P., Indefrey, P., Brown, C. M., Herzog, H., Steinmetz, H., & Seitz, R. J. (1999). The neural circuitry involved in the reading of German words and pseudowords: A PET study. Journal of Cognitive Neuroscience, 11(4), 383-398. doi:10.1162/089892999563490.

    Abstract

    Silent reading and reading aloud of German words and pseudowords were used in a PET study using (15O)butanol to examine the neural correlates of reading and of the phonological conversion of legal letter strings, with or without meaning.
    The results of 11 healthy, right-handed volunteers in the age range of 25 to 30 years showed activation of the lingual gyri during silent reading in comparison with viewing a fixation cross. Comparisons between the reading of words and pseudowords suggest the involvement of the middle temporal gyri in retrieving both the phonological and semantic code for words. The reading of pseudowords activates the left inferior frontal gyrus, including the ventral part of Broca’s area, to a larger extent than the reading of words. This suggests that this area might be involved in the sublexical conversion of orthographic input strings into phonological output codes. (Pre)motor areas were found to be activated during both silent reading and reading aloud. On the basis of the obtained activation patterns, it is hypothesized that the articulation of high-frequency syllables requires the retrieval of their concomitant articulatory gestures from the SMA and that the articulation of low-frequency syllables recruits the left medial premotor cortex.
  • Hall, M. L., Ahn, D., Mayberry, R. I., & Ferreira, V. S. (2015). Production and comprehension show divergent constituent order preferences: Evidence from elicited pantomime. Journal of Memory and Language, 81, 16-33. doi:10.1016/j.jml.2014.12.003.

    Abstract

    All natural languages develop devices to communicate who did what to whom. Elicited pantomime provides one model for studying this process, by providing a window into how humans (hearing non-signers) behave in a natural communicative modality (silent gesture) without established conventions from a grammar. Most studies in this paradigm focus on production, although they sometimes make assumptions about how comprehenders would likely behave. Here, we directly assess how naïve speakers of English (Experiments 1 & 2), Korean (Experiment 1), and Turkish (Experiment 2) comprehend pantomimed descriptions of transitive events, which are either semantically reversible (Experiments 1 & 2) or not (Experiment 2). Contrary to previous assumptions, we find no evidence that Person-Person-Action sequences are ambiguous to comprehenders, who simply adopt an agent-first parsing heuristic for all constituent orders. We do find that Person-Action-Person sequences yield the most consistent interpretations, even in native speakers of SOV languages. The full range of behavior in both production and comprehension provides counter-evidence to the notion that producers’ utterances are motivated by the needs of comprehenders. Instead, we argue that production and comprehension are subject to different sets of cognitive pressures, and that the dynamic interaction between these competing pressures can help explain synchronic and diachronic constituent order phenomena in natural human languages, both signed and spoken.
  • Hammarström, H. (2015). Ethnologue 16/17/18th editions: A comprehensive review. Language, 91, 723-737. doi:10.1353/lan.2015.0038.

    Abstract

    Ethnologue (http://www.ethnologue.com) is the most widely consulted inventory of the world’s languages used today. The present review article looks carefully at the goals and description of the content of the Ethnologue’s 16th, 17th, and 18th editions, and reports on a comprehensive survey of the accuracy of the inventory itself. While hundreds of spurious and missing languages can be documented for Ethnologue, it is at present still better than any other nonderivative work of the same scope, in all aspects but one: Ethnologue fails to disclose the sources for the information presented, at odds with well-established scientific principles. The classification of languages into families in Ethnologue is also evaluated, and found to be far off from that argued in the specialist literature on the classification of individual languages. Ethnologue is frequently held to be splitting: that is, it tends to recognize more languages than an application of the criterion of mutual intelligibility would yield. By means of a random sample, we find that, indeed, within confidence intervals, the number of mutually unintelligible languages is on average 85% of the number found in Ethnologue.
  • Hammarström, H. (2015). Ethnologue 16/17/18th editions: A comprehensive review: Online appendices. Language, 91(3), s1-s188. doi:10.1353/lan.2015.0049.
  • Hanique, I., Ernestus, M., & Boves, L. (2015). Choice and pronunciation of words: Individual differences within a homogeneous group of speakers. Corpus Linguistics and Linguistic Theory, 11, 161-185. doi:10.1515/cllt-2014-0025.

    Abstract

    This paper investigates whether individual speakers forming a homogeneous group differ in their choice and pronunciation of words when engaged in casual conversation, and if so, how they differ. More specifically, it examines whether the Balanced Winnow classifier is able to distinguish between the twenty speakers of the Ernestus Corpus of Spontaneous Dutch, who all have the same social background. To examine differences in choice and pronunciation of words, instead of characteristics of the speech signal itself, classification was based on lexical and pronunciation features extracted from hand-made orthographic and automatically generated broad phonetic transcriptions. The lexical features consisted of words and two-word combinations. The pronunciation features represented pronunciation variations at the word and phone level that are typical for casual speech. The best classifier achieved a performance of 79.9% and was based on the lexical features and on the pronunciation features representing single phones and triphones. The speakers must thus differ from each other in these features. Inspection of the relevant features indicated that, among other things, the words relevant for classification generally do not contain much semantic content, and that speakers differ not only from each other in the use of these words but also in their pronunciation.
  • Hannerfors, A.-K., Hellgren, C., Schijven, D., Iliadis, S. I., Comasco, E., Skalkidou, A., Olivier, J. D., & Sundström-Poromaa, I. (2015). Treatment with serotonin reuptake inhibitors during pregnancy is associated with elevated corticotropin-releasing hormone levels. Psychoneuroendocrinology, 58, 104-113. doi:10.1016/j.psyneuen.2015.04.009.

    Abstract

    Treatment with serotonin reuptake inhibitors (SSRI) has been associated with an increased risk of preterm birth, but causality remains unclear. While placental CRH production is correlated with gestational length and preterm birth, it has been difficult to establish if psychological stress or mental health problems are associated with increased CRH levels. This study compared second trimester CRH serum concentrations in pregnant women on SSRI treatment (n=207) with untreated depressed women (n=56) and controls (n=609). A secondary aim was to investigate the combined effect of SSRI treatment and CRH levels on gestational length and risk for preterm birth. Women on SSRI treatment had significantly higher second trimester CRH levels than controls and untreated depressed women. CRH levels and SSRI treatment were independently associated with shorter gestational length. The combined effect of SSRI treatment and high CRH levels yielded the highest risk estimate for preterm birth. SSRI treatment during pregnancy is associated with increased CRH levels. However, the elevated risk for preterm birth in SSRI users appears not to be mediated by increased placental CRH production; instead, CRH appears to be an independent risk factor for shorter gestational length and preterm birth.
  • Hardies, K., De Kovel, C. G. F., Weckhuysen, S., Asselbergh, B., Geuens, T., Deconinck, T., Azmi, A., May, P., Brilstra, E., Becker, F., Barisic, N., Craiu, D., Braun, K. P. J., Lal, D., Thiele, H., Schubert, J., Weber, Y., van't Slot, R., Nurnberg, P., Balling, R., Timmerman, V., Lerche, H., Maudsley, S., Helbig, I., Suls, A., Koeleman, B. P. C., De Jonghe, P., & Euro Res Consortium, E. (2015). Recessive mutations in SLC13A5 result in a loss of citrate transport and cause neonatal epilepsy, developmental delay and teeth hypoplasia. Brain, 138(11), 3238-3250. doi:10.1093/brain/awv263.

    Abstract

    The epileptic encephalopathies are a clinically and aetiologically heterogeneous subgroup of epilepsy syndromes. Most epileptic encephalopathies have a genetic cause and patients are often found to carry a heterozygous de novo mutation in one of the genes associated with the disease entity. Occasionally recessive mutations are identified: a recent publication described a distinct neonatal epileptic encephalopathy (MIM 615905) caused by autosomal recessive mutations in the SLC13A5 gene. Here, we report eight additional patients belonging to four different families with autosomal recessive mutations in SLC13A5. SLC13A5 encodes a high affinity sodium-dependent citrate transporter, which is expressed in the brain. Neurons are considered incapable of de novo synthesis of tricarboxylic acid cycle intermediates; therefore they rely on the uptake of intermediates, such as citrate, to maintain their energy status and neurotransmitter production. The effect of all seven identified mutations (two premature stops and five amino acid substitutions) was studied in vitro, using immunocytochemistry, selective western blot and mass spectrometry. We hereby demonstrate that cells expressing mutant sodium-dependent citrate transporter have a complete loss of citrate uptake due to various cellular loss-of-function mechanisms. In addition, we provide independent proof of the involvement of autosomal recessive SLC13A5 mutations in the development of neonatal epileptic encephalopathies, and highlight teeth hypoplasia as a possible indicator for SLC13A5 screening. All three patients who tried the ketogenic diet responded well to this treatment, and future studies will allow us to ascertain whether this is a recurrent feature in this severe disorder.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Heidlmayr, K., Hemforth, B., Moutier, S., & Isel, F. (2015). Neurodynamics of executive control processes in bilinguals: Evidence from ERP and source reconstruction analyses. Frontiers in Psychology, 6: 821. doi:10.3389/fpsyg.2015.00821.

    Abstract

    The present study was designed to examine the impact of bilingualism on the neuronal activity in different executive control processes, namely conflict monitoring, control implementation (i.e., interference suppression and conflict resolution) and overcoming of inhibition. Twenty-two highly proficient but non-balanced successive French–German bilingual adults and 22 monolingual adults performed a combined Stroop/Negative priming task while event-related potentials (ERPs) were recorded online. The data revealed that the ERP effects were reduced in bilinguals in comparison to monolinguals but only in the Stroop task and limited to the N400 and the sustained fronto-central negative-going potential time windows. This result suggests that bilingualism may impact the process of control implementation rather than the process of conflict monitoring (N200). Critically, our study revealed a differential time course of the involvement of the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC) in conflict processing. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and the late sustained negative-going potential time windows. Taken together, the present electroencephalography data lend support to a cascading neurophysiological model of executive control processes, in which ACC and PFC may play a determining role.
  • Heritage, J., & Stivers, T. (1999). Online commentary in acute medical visits: A method of shaping patient expectations. Social Science and Medicine, 49(11), 1501-1517. doi:10.1016/S0277-9536(99)00219-1.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2015). Brain functional plasticity associated with the emergence of expertise in extreme language control. NeuroImage, 114, 264-274. doi:10.1016/j.neuroimage.2015.03.072.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to longitudinally examine brain plasticity arising from long-term, intensive simultaneous interpretation training. Simultaneous interpretation is a bilingual task with heavy executive control demands. We compared brain responses observed during simultaneous interpretation with those observed during simultaneous speech repetition (shadowing) in a group of trainee simultaneous interpreters, at the beginning and at the end of their professional training program. Age, sex and language-proficiency matched controls were scanned at similar intervals. Using multivariate pattern classification, we found distributed patterns of changes in functional responses from the first to second scan that distinguished the interpreters from the controls. We also found reduced recruitment of the right caudate nucleus during simultaneous interpretation as a result of training. Such practice-related change is consistent with decreased demands on multilingual language control as the task becomes more automatized with practice. These results demonstrate the impact of simultaneous interpretation training on the brain functional response in a cerebral structure that is not specifically linguistic, but that is known to be involved in learning, in motor control, and in a variety of domain-general executive functions. Along with results of recent studies showing functional and structural adaptations in the caudate nuclei of experts in a broad range of domains, our results underline the importance of this structure as a central node in expertise-related networks.
  • Hervais-Adelman, A., Moser-Mercer, B., Michel, C. M., & Golestani, N. (2015). fMRI of simultaneous interpretation reveals the neural basis of extreme language control. Cerebral Cortex, 25(12), 4727-4739. doi:10.1093/cercor/bhu158.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to examine the neural basis of extreme multilingual language control in a group of 50 multilingual participants. Comparing brain responses arising during simultaneous interpretation (SI) with those arising during simultaneous repetition revealed activation of regions known to be involved in speech perception and production, alongside a network incorporating the caudate nucleus that is known to be implicated in domain-general cognitive control. The similarity between the networks underlying bilingual language control and general executive control supports the notion that the frequently reported bilingual advantage on executive tasks stems from the day-to-day demands of language control in the multilingual brain. We examined neural correlates of the management of simultaneity by correlating brain activity during interpretation with the duration of simultaneous speaking and hearing. This analysis showed significant modulation of the putamen by the duration of simultaneity. Our findings suggest that, during SI, the caudate nucleus is implicated in the overarching selection and control of the lexico-semantic system, while the putamen is implicated in ongoing control of language output. These findings provide the first clear dissociation of specific dorsal striatum structures in polyglot language control, roles that are consistent with previously described involvement of these regions in nonlinguistic executive control.
  • Hervais-Adelman, A., Legrand, L. B., Zhan, M. Y., Tamietto, M., de Gelder, B., & Pegna, A. J. (2015). Looming sensitive cortical regions without V1 input: Evidence from a patient with bilateral cortical blindness. Frontiers in Integrative Neuroscience, 9: 51. doi:10.3389/fnint.2015.00051.

    Abstract

    Fast and automatic behavioral responses are required to avoid collision with an approaching stimulus. Accordingly, looming stimuli have been found to be highly salient and efficient attractors of attention due to the implication of potential collision and potential threat. Here, we address the question of whether looming motion is processed in the absence of any functional primary visual cortex and consequently without awareness. For this, we investigated a patient (TN) suffering from complete, bilateral damage to his primary visual cortex. Using an fMRI paradigm, we measured TN's brain activation during the presentation of looming, receding, rotating, and static point lights, of which he was unaware. When contrasted with other conditions, looming was found to produce bilateral activation of the middle temporal areas, as well as the superior temporal sulcus and inferior parietal lobe (IPL). The latter are generally thought to be involved in multisensory processing of motion in extrapersonal space, as well as attentional capture and saliency. No activity was found close to the lesioned V1 area. This demonstrates that looming motion is processed in the absence of awareness through direct subcortical projections to areas involved in multisensory processing of motion and saliency that bypass V1.
  • Hibar, D. P., Stein, J. L., Renteria, M. E., Arias-Vasquez, A., Desrivières, S., Jahanshad, N., Toro, R., Wittfeld, K., Abramovic, L., Andersson, M., Aribisala, B. S., Armstrong, N. J., Bernard, M., Bohlken, M. M., Boks, M. P., Bralten, J., Brown, A. A., Chakravarty, M. M., Chen, Q., Ching, C. R. K., Cuellar-Partida, G., den Braber, A., Giddaluru, S., Goldman, A. L., Grimm, O., Guadalupe, T., Hass, J., Woldehawariat, G., Holmes, A. J., Hoogman, M., Janowitz, D., Jia, T., Kim, S., Klein, M., Kraemer, B., Lee, P. H., Olde Loohuis, L. M., Luciano, M., Macare, C., Mather, K. A., Mattheisen, M., Milaneschi, Y., Nho, K., Papmeyer, M., Ramasamy, A., Risacher, S. L., Roiz-Santiañez, R., Rose, E. J., Salami, A., Sämann, P. G., Schmaal, L., Schork, A. J., Shin, J., Strike, L. T., Teumer, A., Van Donkelaar, M. M. J., Van Eijk, K. R., Walters, R. K., Westlye, L. T., Whelan, C. D., Winkler, A. M., Zwiers, M. P., Alhusaini, S., Athanasiu, L., Ehrlich, S., Hakobjan, M. M. H., Hartberg, C. B., Haukvik, U. K., Heister, A. J. G. A. M., Hoehn, D., Kasperaviciute, D., Liewald, D. C. M., Lopez, L. M., Makkinje, R. R. R., Matarin, M., Naber, M. A. M., McKay, D. R., Needham, M., Nugent, A. C., Pütz, B., Royle, N. A., Shen, L., Sprooten, E., Trabzuni, D., Van der Marel, S. S. L., Van Hulzen, K. J. E., Walton, E., Wolf, C., Almasy, L., Ames, D., Arepalli, S., Assareh, A. A., Bastin, M. E., Brodaty, H., Bulayeva, K. B., Carless, M. A., Cichon, S., Corvin, A., Curran, J. E., Czisch, M., De Zubicaray, G. I., Dillman, A., Duggirala, R., Dyer, T. D., Erk, S., Fedko, I. O., Ferrucci, L., Foroud, T. M., Fox, P. T., Fukunaga, M., Gibbs, J. R., Göring, H. H. H., Green, R. 
C., Guelfi, S., Hansell, N. K., Hartman, C. A., Hegenscheid, K., Heinz, A., Hernandez, D. G., Heslenfeld, D. J., Hoekstra, P. J., Holsboer, F., Homuth, G., Hottenga, J.-J., Ikeda, M., Jack, C. R., Jenkinson, M., Johnson, R., Kanai, R., Keil, M., Kent, J. W., Kochunov, P., Kwok, J. B., Lawrie, S. M., Liu, X., Longo, D. L., McMahon, K. L., Meisenzahl, E., Melle, I., Mohnke, S., Montgomery, G. W., Mostert, J. C., Mühleisen, T. W., Nalls, M. A., Nichols, T. E., Nilsson, L. G., Nöthen, M. M., Ohi, K., Olvera, R. L., Perez-Iglesias, R., Pike, G. B., Potkin, S. G., Reinvang, I., Reppermund, S., Rietschel, M., Romanczuk-Seiferth, N., Rosen, G. D., Rujescu, D., Schnell, K., Schofield, P. R., Smith, C., Steen, V. M., Sussmann, J. E., Thalamuthu, A., Toga, A. W., Traynor, B. J., Troncoso, J., Turner, J. A., Valdes Hernández, M. C., van Ent, D. ’., Van der Brug, M., Van der Wee, N. J. A., Van Tol, M.-J., Veltman, D. J., Wassink, T. H., Westman, E., Zielke, R. H., Zonderman, A. B., Ashbrook, D. G., Hager, R., Lu, L., McMahon, F. J., Morris, D. W., Williams, R. W., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Cahn, W., Calhoun, V. D., Cavalleri, G. L., Crespo-Facorro, B., Dale, A. M., Davies, G. E., Delanty, N., Depondt, C., Djurovic, S., Drevets, W. C., Espeseth, T., Gollub, R. L., Ho, B.-C., Hoffmann, W., Hosten, N., Kahn, R. S., Le Hellard, S., Meyer-Lindenberg, A., Müller-Myhsok, B., Nauck, M., Nyberg, L., Pandolfo, M., Penninx, B. W. J. H., Roffman, J. L., Sisodiya, S. M., Smoller, J. W., Van Bokhoven, H., Van Haren, N. E. M., Völzke, H., Walter, H., Weiner, M. W., Wen, W., White, T., Agartz, I., Andreassen, O. A., Blangero, J., Boomsma, D. I., Brouwer, R. M., Cannon, D. M., Cookson, M. R., De Geus, E. J. C., Deary, I. J., Donohoe, G., Fernández, G., Fisher, S. E., Francks, C., Glahn, D. C., Grabe, H. J., Gruber, O., Hardy, J., Hashimoto, R., Hulshoff Pol, H. E., Jönsson, E. G., Kloszewska, I., Lovestone, S., Mattay, V. S., Mecocci, P., McDonald, C., McIntosh, A. 
M., Ophoff, R. A., Paus, T., Pausova, Z., Ryten, M., Sachdev, P. S., Saykin, A. J., Simmons, A., Singleton, A., Soininen, H., Wardlaw, J. M., Weale, M. E., Weinberger, D. R., Adams, H. H. H., Launer, L. J., Seiler, S., Schmidt, R., Chauhan, G., Satizabal, C. L., Becker, J. T., Yanek, L., van der Lee, S. J., Ebling, M., Fischl, B., Longstreth, W. T., Greve, D., Schmidt, H., Nyquist, P., Vinke, L. N., Van Duijn, C. M., Xue, L., Mazoyer, B., Bis, J. C., Gudnason, V., Seshadri, S., Ikram, M. A., The Alzheimer’s Disease Neuroimaging Initiative, The CHARGE Consortium, EPIGEN, IMAGEN, SYS, Martin, N. G., Wright, M. J., Schumann, G., Franke, B., Thompson, P. M., & Medland, S. E. (2015). Common genetic variants influence human subcortical brain structures. Nature, 520, 224-229. doi:10.1038/nature14101.

    Abstract

    The highly complex structure of the human brain is strongly shaped by genetic influences. Subcortical brain regions form circuits with cortical areas to coordinate movement, learning, memory and motivation, and altered circuits can lead to abnormal behaviour and disease. To investigate how common genetic variants affect the structure of these brain regions, here we conduct genome-wide association studies of the volumes of seven subcortical regions and the intracranial volume derived from magnetic resonance images of 30,717 individuals from 50 cohorts. We identify five novel genetic variants influencing the volumes of the putamen and caudate nucleus. We also find stronger evidence for three loci with previously established influences on hippocampal volume and intracranial volume. These variants show specific volumetric effects on brain structures rather than global effects across structures. The strongest effects were found for the putamen, where a novel intergenic locus with replicable influence on volume (rs945270; P = 1.08 × 10-33; 0.52% variance explained) showed evidence of altering the expression of the KTN1 gene in both brain and blood tissue. Variants influencing putamen volume clustered near developmental genes that regulate apoptosis, axon guidance and vesicle transport. Identification of these genetic variants provides insight into the causes of variability in human brain development, and may help to determine mechanisms of neuropsychiatric dysfunction

  • Hilbrink, E., Gattis, M., & Levinson, S. C. (2015). Early developmental changes in the timing of turn-taking: A longitudinal study of mother-infant interaction. Frontiers in Psychology, 6: 1492. doi:10.3389/fpsyg.2015.01492.

    Abstract

    To accomplish a smooth transition in conversation from one speaker to the next, a tight coordination of interaction between speakers is required. Recent studies of adult conversation suggest that this close timing of interaction may well be a universal feature of conversation. In the present paper, we set out to assess the development of this close timing of turns in infancy in vocal exchanges between mothers and infants. Previous research has demonstrated an early sensitivity to timing in interactions (e.g. Murray & Trevarthen, 1985). In contrast, less is known about infants’ abilities to produce turns in a timely manner and existing findings are rather patchy. We conducted a longitudinal study of twelve mother-infant dyads in free-play interactions at the ages of 3, 4, 5, 9, 12 and 18 months. Based on existing work and the predictions made by the Interaction Engine Hypothesis (Levinson, 2006), we expected that infants would begin to develop the temporal properties of turn-taking early in infancy but that their timing of turns would slow down at 12 months, which is around the time when infants start to produce their first words. Findings were consistent with our predictions: Infants were relatively fast at timing their turn early in infancy but slowed down towards the end of the first year. Furthermore, the changes observed in infants’ turn-timing skills were not caused by changes in maternal timing, which remained stable across the 3-18 month period. However, the slowing down of turn-timing started somewhat earlier than predicted: at 9 months.
  • Hintz, F., & Meyer, A. S. (2015). Prediction and production of simple mathematical equations: Evidence from anticipatory eye movements. PLoS One, 10(7): e0130766. doi:10.1371/journal.pone.0130766.

    Abstract

    The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.

  • Hoey, E. (2015). Lapses: How people arrive at, and deal with, discontinuities in talk. Research on Language and Social Interaction, 48(4), 430-453. doi:10.1080/08351813.2015.1090116.

    Abstract

    Interaction includes moments of silence. When all participants forgo the option to speak, the silence can be called a “lapse.” This article builds on existing work on lapses and other kinds of silences (gaps, pauses, and so on) to examine how participants reach a point where lapsing is a possibility and how they orient to the lapse that subsequently develops. Drawing from a wide range of activities and settings, I will show that participants may treat lapses as (a) the relevant cessation of talk, (b) the allowable development of silence, or (c) the conspicuous absence of talk. Data are in American and British English.
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (2015). Editorial: Turn-taking in human communicative interaction. Frontiers in Psychology, 6: 1919. doi:10.3389/fpsyg.2015.01919.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture.
    Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs. unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Holler, J., & Kendrick, K. H. (2015). Unaddressed participants’ gaze in multi-person interaction: Optimizing recipiency. Frontiers in Psychology, 6: 98. doi:10.3389/fpsyg.2015.00098.

    Abstract

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogues to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation towards online processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question-response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze towards the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Horschig, J. M., Smolders, R., Bonnefond, M., Schoffelen, J.-M., Van den Munckhof, P., Schuurman, P. R., Cools, R., Denys, D., & Jensen, O. (2015). Directed communication between nucleus accumbens and neocortex in humans is differentially supported by synchronization in the theta and alpha band. PLoS One, 10(9): e0138685. doi:10.1371/journal.pone.0138685.

    Abstract

    Here, we report evidence for oscillatory bi-directional interactions between the nucleus accumbens and the neocortex in humans. Six patients performed a demanding covert visual attention task while we simultaneously recorded brain activity from deep-brain electrodes implanted in the nucleus accumbens and the surface electroencephalogram (EEG). Both theta and alpha oscillations were strongly coherent with the frontal and parietal EEG during the task. Theta-band coherence increased during processing of the visual stimuli. Granger causality analysis revealed that the nucleus accumbens was communicating with the neocortex primarily in the theta-band, while the cortex was communicating the nucleus accumbens in the alpha-band. These data are consistent with a model, in which theta- and alpha-band oscillations serve dissociable roles: Prior to stimulus processing, the cortex might suppress ongoing processing in the nucleus accumbens by modulating alpha-band activity. Subsequently, upon stimulus presentation, theta oscillations might facilitate the active exchange of stimulus information from the nucleus accumbens to the cortex.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Li, W., Li, X., Huang, L., Kong, X., Yang, W., Wei, D., Li, J., Cheng, H., Zhang, Q., Qiu, J., & Liu, J. (2015). Brain structure links trait creativity to openness to experience. Social Cognitive and Affective Neuroscience, 10(2), 191-198. doi:10.1093/scan/nsu041.

    Abstract

    Creativity is crucial to the progression of human civilization and has led to important scientific discoveries. Especially, individuals are more likely to have scientific discoveries if they possess certain personality traits of creativity (trait creativity), including imagination, curiosity, challenge and risk-taking. This study used voxel-based morphometry to identify the brain regions underlying individual differences in trait creativity, as measured by the Williams creativity aptitude test, in a large sample (n = 246). We found that creative individuals had higher gray matter volume in the right posterior middle temporal gyrus (pMTG), which might be related to semantic processing during novelty seeking (e.g. novel association, conceptual integration and metaphor understanding). More importantly, although basic personality factors such as openness to experience, extroversion, conscientiousness and agreeableness (as measured by the NEO Personality Inventory) all contributed to trait creativity, only openness to experience mediated the association between the right pMTG volume and trait creativity. Taken together, our results suggest that the basic personality trait of openness might play an important role in shaping an individual’s trait creativity.
  • Huettig, F., & Brouwer, S. (2015). Delayed anticipatory spoken language processing in adults with dyslexia - Evidence from eye-tracking. Dyslexia, 21(2), 97-122. doi:10.1002/dys.1497.

    Abstract

    It is now well-established that anticipation of up-coming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here we investigated whether anticipatory spoken language processing is related to individuals’ word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target and thus participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.
  • Huettig, F. (2015). Four central questions about prediction in language processing. Brain Research, 1626, 118-135. doi:10.1016/j.brainres.2015.02.014.

    Abstract

    The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P., & Levelt, W. J. M. (1999). A meta-analysis of neuroimaging experiments on word production. Neuroimage, 7, 1028.
  • Indefrey, P., Brown, C. M., Hellwig, F. M., Amunts, K., Herzog, H., Seitz, R. J., & Hagoort, P. (2001). A neural correlate of syntactic encoding during speech production. Proceedings of the National Academy of Sciences of the United States of America, 98, 5933-5936. doi:10.1073/pnas.101118098.

    Abstract

    Spoken language is one of the most compact and structured ways to convey information. The linguistic ability to structure individual words into larger sentence units permits speakers to express a nearly unlimited range of meanings. This ability is rooted in speakers’ knowledge of syntax and in the corresponding process of syntactic encoding. Syntactic encoding is highly automatized, operates largely outside of conscious awareness, and overlaps closely in time with several other processes of language production. With the use of positron emission tomography we investigated the cortical activations during spoken language production that are related to the syntactic encoding process. In the paradigm of restrictive scene description, utterances varying in complexity of syntactic encoding were elicited. Results provided evidence that the left Rolandic operculum, caudally adjacent to Broca’s area, is involved in both sentence-level and local (phrase-level) syntactic encoding during speaking.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca’s area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P. (1999). Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6), 1025. doi:10.1017/S0140525X99342229.

    Abstract

    Clahsen's characterization of nondefault inflection as based exclusively on lexical entries does not capture the full range of empirical data on German inflection. In the verb system, differential effects of lexical frequency seem to be input-related rather than affecting morphological production. In the noun system, the generalization properties of -n and -e plurals exceed mere analogy-based productivity.
  • Indefrey, P., Hagoort, P., Herzog, H., Seitz, R. J., & Brown, C. M. (2001). Syntactic processing in left prefrontal cortex is independent of lexical meaning. Neuroimage, 14, 546-555. doi:10.1006/nimg.2001.0867.

    Abstract

    In language comprehension a syntactic representation is built up even when the input is semantically uninterpretable. We report data on brain activation during syntactic processing, from an experiment on the detection of grammatical errors in meaningless sentences. The experimental paradigm was such that the syntactic processing was distinguished from other cognitive and linguistic functions. The data reveal that in syntactic error detection an area of the left dorsolateral prefrontal cortex, adjacent to Broca’s area, is specifically involved in the syntactic processing aspects, whereas other prefrontal areas subserve general error detection processes.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2015). Lexical specificity training effects in second language learners. Language Learning, 65(2), 358-389. doi:10.1111/lang.12102.

    Abstract

    Children who start formal education in a second language may experience slower vocabulary growth in that language and subsequently experience disadvantages in literacy acquisition. The current study asked whether lexical specificity training can stimulate bilingual children's phonological awareness, which is considered to be a precursor to literacy. Therefore, Dutch monolingual and Turkish-Dutch bilingual children were taught new Dutch words with only minimal acoustic-phonetic differences. As a result of this training, the monolingual and the bilingual children improved on phoneme blending, which can be seen as an early aspect of phonological awareness. During training, the bilingual children caught up with the monolingual children on words with phonological overlap between their first language Turkish and their second language Dutch. It is concluded that learning minimal pair words fosters phoneme awareness, in both first and second language preliterate children, and that for second language learners phonological overlap between the two languages positively affects training outcomes, likely due to linguistic transfer.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jiang, J., Chen, C., Dai, B., Shi, G., Liu, L., & Lu, C. (2015). Leader emergence through interpersonal neural synchronization. Proceedings of the National Academy of Sciences of the United States of America, 112(14), 4274-4279. doi:10.1073/pnas.1422930112.

    Abstract

    The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Jongman, S. R., Roelofs, A., & Meyer, A. S. (2015). Sustained attention in language production: An individual differences investigation. Quarterly Journal of Experimental Psychology, 68, 710-730. doi:10.1080/17470218.2014.964736.

    Abstract

    Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that some form of attention is required. Here, we investigated the contribution of sustained attention, which is the ability to maintain alertness over time. First, the sustained attention ability of participants was measured using auditory and visual continuous performance tasks. Next, the participants described pictures using simple noun phrases while their response times (RTs) and gaze durations were measured. Earlier research has suggested that gaze duration reflects language planning processes up to and including phonological encoding. Individual differences in sustained attention ability correlated with individual differences in the magnitude of the tail of the RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. These results suggest that language production requires sustained attention, especially after phonological encoding.
  • Jongman, S. R., Meyer, A. S., & Roelofs, A. (2015). The role of sustained attention in the production of conjoined noun phrases: An individual differences study. PLoS One, 10(9): e0137557. doi:10.1371/journal.pone.0137557.

    Abstract

    It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task, and the production of noun phrase conjunctions in referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed.
  • Jordan, F., & Gray, R. D. (2001). Comment on Terrell, Kelly and Rainbird. Current Anthropology, 42(1), 114-115.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid [Systematicity and dynamics in the acquisition of finiteness]. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Karlebach, G., & Francks, C. (2015). Lateralization of gene expression in human language cortex. Cortex, 67, 30-36. doi:10.1016/j.cortex.2015.03.003.

    Abstract

    Lateralization is an important aspect of the functional brain architecture for language and other cognitive faculties. The molecular genetic basis of human brain lateralization is unknown, and recent studies have suggested that gene expression in the cerebral cortex is bilaterally symmetrical. Here we have re-analyzed two transcriptomic datasets derived from post mortem human cerebral cortex, with a specific focus on superior temporal and auditory language cortex in adults. We applied an empirical Bayes approach to model differential left-right expression, together with gene ontology analysis and meta-analysis. There was robust and reproducible lateralization of individual genes and gene ontology groups that are likely to fine-tune the electrophysiological and neurotransmission properties of cortical circuits, most notably synaptic transmission, nervous system development and glutamate receptor activity. Our findings anchor the cerebral biology of language to the molecular genetic level. Future research in model systems may determine how these molecular signatures of neurophysiological lateralization effect fine-tuning of cerebral cortical function, differently in the two hemispheres.
  • Kartushina, N., Hervais-Adelman, A., Frauenfelder, U. H., & Golestani, N. (2015). The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds. The Journal of the Acoustical Society of America, 138(2), 817-832. doi:10.1121/1.4926561.

    Abstract

    Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
  • Kelly, B. F., Kidd, E., & Wigglesworth, G. (2015). Indigenous children's language: Acquisition, preservation and evolution of language in minority contexts. First Language, 35(4-5), 279-285. doi:10.1177/0142723715618056.

    Abstract

    A comprehensive theory of language acquisition must explain how human infants can learn any one of the world’s 7000 or so languages. As such, an important part of understanding how languages are learned is to investigate acquisition across a range of diverse languages and sociocultural contexts. To this end, cross-linguistic and cross-cultural language research has been pervasive in the field of first language acquisition since the early 1980s. In groundbreaking work, Slobin (1985) noted that the study of acquisition in cross-linguistic perspective can be used to reveal both developmental universals and language-specific acquisition patterns. Since this observation there have been several waves of cross-linguistic first language acquisition research, and more recently we have seen a rise in research investigating lesser-known languages. This special issue brings together work on several such languages, spoken in minority contexts. It is the first collection of language development research dedicated to the acquisition of under-studied or little-known languages and by extension, different cultures. Why lesser-known languages, and why minority contexts? First and foremost, acquisition theories need data from different languages, language families and cultural groups across the broadest typological array possible, and yet many theories of acquisition have been developed through analyses of English and other major world languages. Thus they are likely to be skewed by sampling bias. Languages of European origin constitute a small percentage of the total number of languages spoken worldwide. The Ethnologue (2015) lists 7102 languages spoken across the world. Of these, only 286 languages are languages of European origin, a mere 4% of the total number of languages spoken across the planet, and representing approximately only 26% of the total number of language speakers alive today. Compare this to the languages of the Pacific. 
    The Ethnologue lists 1313 languages spoken in the Pacific, constituting 18.5% of the world’s languages. Of these, very few have been described, and even fewer have child language data available. Lieven and Stoll (2010) note that only around 70–80 languages have been the focus of acquisition studies (around 1% of the world’s languages). This somewhat alarming statistic suggests that the time is now ripe for researchers working on lesser-known languages to contribute to the field’s knowledge about how children learn a range of very different languages across differing cultures, and in doing so, for this research to make a contribution to language acquisition theory. The potential benefits are many. First, decades of descriptive work in linguistic typology have culminated in strong challenges to the existence of a Universal Grammar (Evans & Levinson, 2009), a long-held axiom of formal language acquisition theory. To be sure, cross-linguistic work in acquisition has long fuelled this debate (e.g. MacWhinney & Bates, 1989), but only as we collect a greater number of data points will we move closer toward a better understanding of the initial state of the human capacity for language and the types of social and cultural contexts in which language is successfully transmitted. A focus on linguistic diversity enables the investigation and postulation of universals in language acquisition, if and in whatever form they exist. In doing so, we can determine the sorts of things that are evident in child-directed speech, in children’s language production and in adult language, teasing out the threads at the intersection of language, culture and cognition.
    The study and dissemination of research into lesser-known, under-described languages with small communities significantly contributes to this aim because it not only reflects the diversity of languages present in the world, but provides a better representation of the social and economic conditions under which the majority of the world’s population acquire language (Henrich, Heine, & Norenzayan, 2010). Related to this point, the study of smaller languages has taken on intense urgency in the past few decades due to the rapid extinction of these languages (Evans, 2010). The Language Documentation movement has toiled tirelessly in the pursuit of documenting languages before they disappear, an effort to which child language researchers have much to offer. Many children acquire smaller and minority languages in rich multilingual environments, where the influence of dominant languages affects acquisition (e.g., Stoll, Zakharko, Moran, Schikowski, & Bickel, 2015). Understanding the acquisition process where systems compete and may be in flux due to language contact, while no small task, will help us understand the social and economic conditions which favour successful preservation of minority languages, which could ultimately equip communities with the tools to stem the flow of language loss. With these points in mind we now turn to the articles in this special issue.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2015). The processing of speech, gesture and action during language comprehension. Psychonomic Bulletin & Review, 22, 517-523. doi:10.3758/s13423-014-0681-7.

    Abstract

    Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G., Schotel, H., & Hoenkamp, E. (1982). Analyse-door-synthese van Nederlandse zinnen [Analysis-by-synthesis of Dutch sentences] [Abstract]. De Psycholoog, 17, 509.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G. (1999). Fiets en (centri)fuge. Onze Taal, 68, 88.
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2004). Processing reduced word forms: The suffix restoration effect. Brain and Language, 90(1-3), 117-127. doi:10.1016/S0093-934X(03)00425-5.

    Abstract

    Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
  • Kendrick, K. H. (2015). Other-initiated repair in English. Open Linguistics, 1, 164-190. doi:10.2478/opli-2014-0009.

    Abstract

    The practices of other-initiation of repair provide speakers with a set of solutions to one of the most basic problems in conversation: troubles of speaking, hearing, and understanding. Based on a collection of 227 cases systematically identified in a corpus of English conversation, this article describes the formats and practices of other-initiations of repair attested in the corpus and reports their quantitative distribution. In addition to straight other-initiations of repair, the identification of all possible cases also yielded a substantial proportion in which speakers use other-initiations to perform other actions, including non-serious actions, such as jokes and teases, preliminaries to dispreferred responses, and displays of surprise and disbelief. A distinction is made between other-initiations that perform additional actions concurrently and those that formally resemble straight other-initiations but analyzably do not initiate repair as an action.
  • Kendrick, K. H. (2015). The intersection of turn-taking and repair: The timing of other-initiations of repair in conversation. Frontiers in Psychology, 6: 250. doi:10.3389/fpsyg.2015.00250.

    Abstract

    The transitions between turns at talk in conversation tend to occur quickly, with only a slight gap of approximately 100 to 300 ms between them. This estimate of central tendency, however, hides a wealth of complex variation, as a number of factors, such as the type of turns involved, have been shown to influence the timing of turn transitions. This article considers one specific type of turn that does not conform to the statistical trend, namely turns that deal with troubles of speaking, hearing, and understanding, known as other-initiations of repair. The results of a quantitative analysis of 169 other-initiations of repair in face-to-face conversation reveal that the most frequent cases occur after gaps of approximately 700 ms. Furthermore, other-initiations of repair that locate a source of trouble in a prior turn specifically tend to occur after shorter gaps than those that do not, and those that correct errors in a prior turn, while rare, tend to occur without delay. An analysis of the transitions before other-initiations of repair, using methods of conversation analysis, suggests that speakers use the extra time (i) to search for a late recognition of the problematic turn, (ii) to provide an opportunity for the speaker of the problematic turn to resolve the trouble independently, and (iii) to produce visual signals, such as facial gestures. In light of these results, it is argued that other-initiations of repair take priority over other turns at talk in conversation and therefore are not subject to the same rules and constraints that motivate fast turn transitions in general.
  • Kendrick, K. H., & Torreira, F. (2015). The timing and construction of preference: A quantitative study. Discourse Processes, 52(4), 255-289. doi:10.1080/0163853X.2014.955997.

    Abstract

    Conversation-analytic research has argued that the timing and construction of preferred responding actions (e.g., acceptances) differ from that of dispreferred responding actions (e.g., rejections), potentially enabling early response prediction by recipients. We examined 195 preferred and dispreferred responding actions in telephone corpora and found that the timing of the most frequent cases of each type did not differ systematically. Only for turn transitions of 700 ms or more was the proportion of dispreferred responding actions clearly greater than that of preferreds. In contrast, an analysis of the timing that included turn formats (i.e., those with or without qualification) revealed clearer differences. Small departures from a normal gap duration decrease the likelihood of a preferred action in a preferred turn format (e.g., a simple “yes”). We propose that the timing of a response is best understood as a turn-constructional feature, the first virtual component of a preferred or dispreferred turn format.
  • Kidd, E., Chan, A., & Chiu, J. (2015). Cross-linguistic influence in simultaneous Cantonese–English bilingual children's comprehension of relative clauses. Bilingualism: Language and Cognition, 18(3), 438-452. doi:10.1017/S1366728914000649.

    Abstract

    The current study investigated the role of cross-linguistic influence in Cantonese–English bilingual children's comprehension of subject- and object-extracted relative clauses (RCs). Twenty simultaneous Cantonese–English bilingual children (Mage = 8;11, SD = 2;6) and 20 vocabulary-matched Cantonese monolingual children (Mage = 6;4, SD = 1;3) completed a test of Cantonese RC comprehension. The bilingual children also completed a test of English RC comprehension. The results showed that, whereas the monolingual children were equally competent on subject and object RCs, the bilingual children performed significantly better on subject RCs. Error analyses suggested that the bilingual children were most often correctly assigning thematic roles in object RCs, but were incorrectly choosing the RC subject as the head referent. This pervasive error was interpreted as reflecting the fact that both Cantonese and English have canonical SVO word order, which creates competition from structures that conflict with an object RC analysis.
  • Kidd, E. (2015). Incorporating learning into theories of parsing. Linguistic Approaches to Bilingualism, 5(4), 487-493. doi:10.1075/lab.5.4.08kid.
  • Kidd, E. (2004). Grammars, parsers, and language acquisition. Journal of Child Language, 31(2), 480-483. doi:10.1017/S0305000904006117.

    Abstract

    Drozd's critique of Crain & Thornton's (C&T) (1998) book Investigations in Universal Grammar (IUG) raises many issues concerning theory and experimental design within generative approaches to language acquisition. I focus here on one of the strongest theoretical claims of the Modularity Matching Model (MMM): continuity of processing. For reasons different to Drozd, I argue that the assumption is tenuous. Furthermore, I argue that the focus of the MMM and the methodological prescriptions contained in IUG are too narrow to capture language acquisition.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kidd, E., Tennant, E., & Nitschke, S. (2015). Shared abstract representation of linguistic structure in bilingual sentence comprehension. Psychonomic Bulletin & Review, 22(4), 1062-1067. doi:10.3758/s13423-014-0775-2.

    Abstract

    Although there is strong evidence for shared abstract grammatical structure in bilingual speakers from studies of sentence production, comparable evidence from studies of comprehension is lacking. Twenty-seven (N = 27) English-German bilingual adults participated in a structural priming study where unambiguous English subject and object relative clause (RC) structures were used to prime corresponding subject and object RC interpretations of structurally ambiguous German RCs. The results showed that English object RCs primed significantly greater object RC interpretations in German in comparison to baseline and subject RC prime conditions, but that English subject RC primes did not change the participants’ baseline preferences. This is the first study to report abstract crosslinguistic priming in comprehension. The results specifically suggest that word order overlap supports the integration of syntactic structures from different languages in bilingual speakers, and that these shared representations are used in comprehension as well as production.
  • Kircher, T. T. J., Brammer, M. J., Levelt, W. J. M., Bartels, M., & McGuire, P. K. (2004). Pausing for thought: Engagement of left temporal cortex during pauses in speech. NeuroImage, 21(1), 84-90. doi:10.1016/j.neuroimage.2003.09.041.

    Abstract

    Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional Magnetic Resonance Imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level dependent (BOLD) contrast during brief speech pauses (mean duration 1.3 s, SD 0.3 s) during overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred between grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when this occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning and in particular lexical retrieval.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Klein, W. (Ed.). (2004). Philologie auf neuen Wegen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 136.
  • Klein, W. (Ed.). (2004). Universitas [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), 134.
  • Klein, W. (2004). Vom Wörterbuch zum digitalen lexikalischen System. Zeitschrift für Literaturwissenschaft und Linguistik, 136, 10-55.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, M., Van der Vloet, M., Harich, B., Van Hulzen, K. J., Onnink, A. M. H., Hoogman, M., Guadalupe, T., Zwiers, M., Groothuismink, J. M., Verberkt, A., Nijhof, B., Castells-Nobau, A., Faraone, S. V., Buitelaar, J. K., Schenck, A., Arias-Vasquez, A., Franke, B., & Psychiatric Genomics Consortium ADHD Working Group (2015). Converging evidence does not support GIT1 as an ADHD risk gene. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 168, 492-507. doi:10.1002/ajmg.b.32327.

    Abstract

    Attention-Deficit/Hyperactivity Disorder (ADHD) is a common neuropsychiatric disorder with a complex genetic background. The G protein-coupled receptor kinase interacting ArfGAP 1 (GIT1) gene was previously associated with ADHD. We aimed at replicating the association of GIT1 with ADHD and investigated its role in cognitive and brain phenotypes. Gene-wide and single variant association analyses for GIT1 were performed for three cohorts: (1) the ADHD meta-analysis data set of the Psychiatric Genomics Consortium (PGC, N=19,210), (2) the Dutch cohort of the International Multicentre persistent ADHD CollaboraTion (IMpACT-NL, N=225), and (3) the Brain Imaging Genetics cohort (BIG, N=1,300). Furthermore, functionality of the rs550818 variant as an expression quantitative trait locus (eQTL) for GIT1 was assessed in human blood samples. By using Drosophila melanogaster as a biological model system, we manipulated Git expression according to the outcome of the expression result and studied the effect of Git knockdown on neuronal morphology and locomotor activity. Association of rs550818 with ADHD was not confirmed, nor did a combination of variants in GIT1 show association with ADHD or any related measures in either of the investigated cohorts. However, the rs550818 risk-genotype did reduce GIT1 expression level. Git knockdown in Drosophila caused abnormal synapse and dendrite morphology, but did not affect locomotor activity. In summary, we could not confirm GIT1 as an ADHD candidate gene, while rs550818 was found to be an eQTL for GIT1. Despite GIT1's regulation of neuronal morphology, alterations in gene expression do not appear to have ADHD-related behavioral consequences.
  • Klein, W., & Musan, R. (Eds.). (1999). Das deutsche Perfekt [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 113.
  • Klein, W., & Rieck, B.-O. (1982). Der Erwerb der Personalpronomina im ungesteuerten Spracherwerb. Zeitschrift für Literaturwissenschaft und Linguistik, 45, 35-71.
  • Klein, W. (2001). Ein Gemeinwesen, in dem das Volk herrscht, darf nicht von Gesetzen beherrscht werden, die das Volk nicht versteht. Rechtshistorisches Journal, 20, 621-628.
  • Klein, W. (1982). Einige Bemerkungen zur Frageintonation. Deutsche Sprache, 4, 289-310.

    Abstract

    In the first, critical part of this study, a small sample of simple German sentences with their empirically determined pitch contours is used to demonstrate the incorrectness of numerous currently held views of German sentence intonation. In the second, more constructive part, several interrogative sentence types are analysed and an attempt is made to show that intonation, among other functions, indicates the permanently changing 'thematic score' in ongoing discourse as well as certain validity claims.
  • Klein, W. (1982). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 12, 7-8.
  • Klein, W., & Franceschini, R. (Eds.). (2003). Einfache Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 131.
  • Klein, W. (2004). Auf der Suche nach den Prinzipien, oder: Warum die Geisteswissenschaften auf dem Rückzug sind. Zeitschrift für Literaturwissenschaft und Linguistik, 134, 19-44.
  • Klein, W. (2004). Im Lauf der Jahre. Linguistische Berichte, 200, 397-407.
  • Klein, W. (1991). Geile Binsenbüschel, sehr intime Gespielen: Ein paar Anmerkungen über Arno Schmidt als Übersetzer. Zeitschrift für Literaturwissenschaft und Linguistik, 84, 124-129.
  • Klein, W. (1982). Pronoms personnels et formes d'acquisition. Encrages, 8/9, 42-46.