Publications

  • Klein, W. (Ed.). (2000). Sprache des Rechts [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (118).
  • Klein, W., & Berliner Arbeitsgruppe (2000). Sprache des Rechts: Vermitteln, Verstehen, Verwechseln. Zeitschrift für Literaturwissenschaft und Linguistik, (118), 7-33.
  • Klein, W. (1991). Was kann sich die Übersetzungswissenschaft von der Linguistik erwarten? Zeitschrift für Literaturwissenschaft und Linguistik, 84, 104-123.
  • Klein, W. (2000). Was uns die Sprache des Rechts über die Sprache sagt. Zeitschrift für Literaturwissenschaft und Linguistik, (118), 115-149.
  • Klein, W. (Ed.). (1982). Zweitspracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (45).
  • Knudsen, B., Fischer, M., & Aschersleben, G. (2015). The development of Arabic digit knowledge in 4-to-7-year-old children. Journal of Numerical Cognition, 1(1), 21-37. doi:10.5964/jnc.v1i1.4.

    Abstract

    Recent studies indicate that Arabic digit knowledge rather than non-symbolic number knowledge is a key foundation for arithmetic proficiency at the start of a child’s mathematical career. We document the developmental trajectory of 4- to 7-year-olds’ proficiency in accessing magnitude information from Arabic digits in five tasks differing in magnitude manipulation requirements. Results showed that children from 5 years onwards accessed magnitude information implicitly and explicitly, but that 5-year-olds failed to access magnitude information explicitly when numerical magnitude was contrasted with physical magnitude. Performance across tasks revealed a clear developmental trajectory: children traverse from first knowing the cardinal values of number words to recognizing Arabic digits to knowing their cardinal values and, concurrently, their ordinal position. Correlational analyses showed a strong within-child consistency, demonstrating that this pattern is not only reflected in group differences but also in individual performance.
  • Koch, X., & Janse, E. (2016). Speech rate effects on the processing of conversational speech across the adult life span. The Journal of the Acoustical Society of America, 139(4), 1618-1636. doi:10.1121/1.4944032.

    Abstract

    This study investigates the effect of speech rate on spoken word recognition across the adult life span. Contrary to previous studies, conversational materials with a natural variation in speech rate were used rather than lab-recorded stimuli that are subsequently artificially time-compressed. It was investigated whether older adults' speech recognition is more adversely affected by increased speech rate compared to younger and middle-aged adults, and which individual listener characteristics (e.g., hearing, fluid cognitive processing ability) predict the size of the speech rate effect on recognition performance. In an eye-tracking experiment, participants indicated with a mouse-click which visually presented words they recognized in a conversational fragment. Click response times, gaze, and pupil size data were analyzed. As expected, click response times and gaze behavior were affected by speech rate, indicating that word recognition is more difficult if speech rate is faster. Contrary to earlier findings, increased speech rate affected the age groups to the same extent. Fluid cognitive processing ability predicted general recognition performance, but did not modulate the speech rate effect. These findings emphasize that earlier results of age by speech rate interactions mainly obtained with artificially speeded materials may not generalize to speech rate variation as encountered in conversational speech.
  • Koch, X., Dingemanse, G., Goedegebure, A., & Janse, E. (2016). Type of speech material affects Acceptable Noise Level outcome. Frontiers in Psychology, 7: 186. doi:10.3389/fpsyg.2016.00186.

    Abstract

    The Acceptable Noise Level (ANL) test, in which individuals indicate what level of noise they are willing to put up with while following speech, has been used to guide hearing aid fitting decisions and has been found to relate to prospective hearing aid use. Unlike objective measures of speech perception ability, ANL outcome is not related to individual hearing loss or age, but rather reflects an individual's inherent acceptance of competing noise while listening to speech. As such, the measure may predict aspects of hearing aid success. Crucially, however, recent studies have questioned its repeatability (test-retest reliability). The first question for this study was whether the inconsistent results regarding the repeatability of the ANL test may be due to differences in speech material types used in previous studies. Second, it is unclear whether meaningfulness and semantic coherence of the speech modify ANL outcome. To investigate these questions, we compared ANLs obtained with three types of materials: the International Speech Test Signal (ISTS), which is non-meaningful and semantically non-coherent by definition, passages consisting of concatenated meaningful standard audiology sentences, and longer fragments taken from conversational speech. We included conversational speech as this type of speech material is most representative of everyday listening. Additionally, we investigated whether ANL outcomes, obtained with these three different speech materials, were associated with self-reported limitations due to hearing problems and listening effort in everyday life, as assessed by a questionnaire. ANL data were collected for 57 relatively good-hearing adult participants with an age range representative for hearing aid users. Results showed that meaningfulness, but not semantic coherence of the speech material affected ANL. Less noise was accepted for the non-meaningful ISTS signal than for the meaningful speech materials. ANL repeatability was comparable across the speech materials. Furthermore, ANL was found to be associated with the outcome of a hearing-related questionnaire. This suggests that ANL may predict activity limitations for listening to speech-in-noise in everyday situations. In conclusion, more natural speech materials can be used in a clinical setting as their repeatability is not reduced compared to more standard materials.
  • Kong, X., Liu, Z., Huang, L., Wang, X., Yang, Z., Zhou, G., Zhen, Z., & Liu, J. (2015). Mapping Individual Brain Networks Using Statistical Similarity in Regional Morphology from MRI. PLoS One, 10(11): e0141840. doi:10.1371/journal.pone.0141840.

    Abstract

    Representing brain morphology as a network has the advantage that the regional morphology of ‘isolated’ structures can be described statistically based on graph theory. However, very few studies have investigated brain morphology from the holistic perspective of complex networks, particularly in individual brains. We proposed a new network framework for individual brain morphology. Technically, in the new network, nodes are defined as regions based on a brain atlas, and edges are estimated using our newly-developed inter-regional relation measure based on regional morphological distributions. This implementation allows nodes in the brain network to be functionally/anatomically homogeneous but different with respect to shape and size. We first demonstrated the new network framework in a healthy sample. Thereafter, we studied the graph-theoretical properties of the networks obtained and compared the results with previous morphological, anatomical, and functional networks. The robustness of the method was assessed via measurement of the reliability of the network metrics using a test-retest dataset. Finally, to illustrate potential applications, the networks were used to measure age-related changes in commonly used network metrics. Results suggest that the proposed method could provide a concise description of brain organization at a network level and be used to investigate interindividual variability in brain morphology from the perspective of complex networks. Furthermore, the method could open a new window into modeling the complexly distributed brain and facilitate the emerging field of human connectomics.
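
    Illustrative code sketch

    The construction described in this abstract can be made concrete with a minimal sketch. The inter-regional relation measure used below (a symmetrised Kullback-Leibler divergence between histogram estimates of each region's value distribution, converted to a similarity via exp(-KL)) is an assumed stand-in for the authors' own measure, and the regional data are simulated rather than taken from MRI:

      # Minimal sketch of an individual morphological brain network: nodes are
      # atlas regions, edges are statistical similarities between the regions'
      # value distributions. The similarity used here (exp of a negated,
      # symmetrised KL divergence between histograms) is an assumed stand-in
      # for the authors' own measure, and the data are simulated.
      import numpy as np
      import networkx as nx
      from scipy.stats import entropy

      rng = np.random.default_rng(0)
      n_regions = 90                      # e.g. an AAL-style atlas
      regions = [rng.normal(loc=rng.uniform(0, 1), scale=1.0, size=500)
                 for _ in range(n_regions)]   # toy "voxel values" per region

      def region_similarity(x, y, bins=32):
          """exp(-symmetrised KL divergence) between two regional histograms."""
          lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
          p, _ = np.histogram(x, bins=bins, range=(lo, hi), density=True)
          q, _ = np.histogram(y, bins=bins, range=(lo, hi), density=True)
          p, q = p + 1e-12, q + 1e-12     # avoid zero bins
          kl = 0.5 * (entropy(p, q) + entropy(q, p))
          return float(np.exp(-kl))

      G = nx.Graph()
      G.add_nodes_from(range(n_regions))
      for i in range(n_regions):
          for j in range(i + 1, n_regions):
              G.add_edge(i, j, weight=region_similarity(regions[i], regions[j]))

      # Simple graph-theoretical summaries of the individual network
      strength = dict(G.degree(weight="weight"))
      clustering = nx.clustering(G, weight="weight")
      print(np.mean(list(strength.values())), np.mean(list(clustering.values())))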

    Additional information

    https://www.nitrc.org/
  • Konopka, A. E., & Kuchinsky, S. E. (2015). How message similarity shapes the timecourse of sentence formulation. Journal of Memory and Language, 84, 1-23. doi:10.1016/j.jml.2015.04.003.
  • Kos, A., Wanke, K., Gioio, A., Martens, G. J., Kaplan, B. B., & Aschrafi, A. (2016). Monitoring mRNA Translation in Neuronal Processes Using Fluorescent Non-Canonical Amino Acid Tagging. Journal of Histochemistry and Cytochemistry, 64(5), 323-333. doi:10.1369/0022155416641604.

    Abstract

    A steady accumulation of experimental data argues that protein synthesis in neurons is not merely restricted to the somatic compartment, but also occurs in several discrete cellular micro-domains. Local protein synthesis is critical for the establishment of synaptic plasticity in mature dendrites and in directing the growth cones of immature axons, and has been associated with cognitive impairment in mice and humans. Although in recent years a number of important mechanisms governing this process have been described, it remains technically challenging to precisely monitor local protein synthesis in individual neuronal cell parts independent from the soma. This report presents the utility of employing microfluidic chambers for the isolation and treatment of single neuronal cellular compartments. Furthermore, it is demonstrated that a protein synthesis assay, based on fluorescent non-canonical amino acid tagging (FUNCAT), can be combined with this cell culture system to label nascent proteins within a discrete structural and functional domain of the neuron. Together, these techniques could be employed for the detection of protein synthesis within developing and mature neurites, offering an effective approach to elucidate novel mechanisms controlling synaptic maintenance and plasticity.
  • Kösem, A., Basirat, A., Azizi, L., & van Wassenhove, V. (2016). High frequency neural activity predicts word parsing in ambiguous speech streams. Journal of Neurophysiology, 116(6), 2497-2512. doi:10.1152/jn.00074.2016.

    Abstract

    During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g. syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses propose that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant’s conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. While changes in low-frequency neural oscillations were compatible with the encoding of pre-lexical segmentation cues, high-frequency activity specifically informed on an individual’s conscious speech percept.

  • De Kovel, C. G. F., Mulder, F., van Setten, J., van 't Slot, R., Al-Rubaish, A., Alshehri, A. M., Al Faraidy, K., Al-Ali, A., Al-Madan, M., Al Aqaili, I., Larbi, E., Al-Ali, R., Alzahrani, A., Asselbergs, F. W., Koeleman, B. P., & Al-Ali, A. (2016). Exome-Wide Association Analysis of Coronary Artery Disease in the Kingdom of Saudi Arabia Population. PLoS One, 11(2): e0146502. doi:10.1371/journal.pone.0146502.

    Abstract

    Coronary Artery Disease (CAD) remains the leading cause of mortality worldwide. Mortality rates associated with CAD have shown an exceptional increase, particularly in fast-developing economies like the Kingdom of Saudi Arabia (KSA). Over the past twenty years, CAD has become the leading cause of death in KSA and has reached epidemic proportions. This rise is undoubtedly caused by fast urbanization that is associated with a life-style that promotes CAD. However, the question remains whether genetics play a significant role and whether genetic susceptibility is increased in KSA compared to the well-studied Western European populations. Therefore, we performed an exome-wide association study (EWAS) in 832 patients and 1,076 controls of Saudi Arabian origin to test whether population-specific, strong genetic risk factors for CAD exist, or whether the polygenic risk score for known genetic risk factors for CAD, lipids, and Type 2 Diabetes shows evidence for an enriched genetic burden. Our results do not show significant associations for a single genetic locus. However, the heritability estimate for CAD for this population was high (h2 = 0.53, S.E. = 0.1, p = 4e-12) and we observed a significant association of the polygenic risk score for CAD that demonstrates that the population of KSA, at least in part, shares the genetic risk associated with CAD in Western populations.
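
    Illustrative code sketch

    The polygenic risk score analysis mentioned here can be sketched as a weighted allele count. The generic formula below (per-variant effect sizes times risk-allele dosages, summed per person) is an assumption about the form of the score, and all numbers are simulated rather than taken from the study:

      # Generic polygenic risk score (PRS) sketch: for each person, sum the
      # per-variant effect sizes (e.g. log odds ratios from earlier GWAS)
      # weighted by that person's risk-allele dosage (0, 1 or 2). All numbers
      # are simulated; the study's actual variant set and weights are not used.
      import numpy as np

      rng = np.random.default_rng(1)
      n_people, n_variants = 1908, 50          # 832 cases + 1,076 controls
      effect_sizes = rng.normal(0.0, 0.05, size=n_variants)
      dosages = rng.integers(0, 3, size=(n_people, n_variants))

      prs = dosages @ effect_sizes             # one score per person
      prs_z = (prs - prs.mean()) / prs.std()   # standardise before testing
      print(prs_z[:5])

    The standardised score would then be entered into a case/control association test, which is the kind of polygenic analysis the abstract reports.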

    Additional information

    Data Availability
  • De Kovel, C. G. F., Brilstra, E. H., van Kempen, M. J., Van't Slot, R., Nijman, I. J., Afawi, Z., De Jonghe, P., Djemie, T., Guerrini, R., Hardies, K., Helbig, I., Hendrickx, R., Kanaan, M., Kramer, U., Lehesjoki, A. E., Lemke, J. R., Marini, C., Mei, D., Moller, R. S., Pendziwiat, M., Stamberger, H., Suls, A., Weckhuysen, S., & Koeleman, B. P. (2016). Targeted sequencing of 351 candidate genes for epileptic encephalopathy in a large cohort of patients. Molecular Genetics & Genomic Medicine, 4(5), 568-80. doi:10.1002/mgg3.235.

    Abstract

    Background Many genes are candidates for involvement in epileptic encephalopathy (EE) because one or a few possibly pathogenic variants have been found in patients, but insufficient genetic or functional evidence exists for a definite annotation.

    Methods To increase the number of validated EE genes, we sequenced 26 known and 351 candidate genes for EE in 360 patients. Variants in 25 genes known to be involved in EE or related phenotypes were followed up in 41 patients. We prioritized the candidate genes, and followed up 31 variants in this prioritized subset of candidate genes.

    Results Twenty-nine genotypes in known genes for EE (19) or related diseases (10), dominant as well as recessive or X-linked, were classified as likely pathogenic variants. Among those, likely pathogenic de novo variants were found in EE genes that act dominantly, including the recently identified genes EEF1A2, KCNB1 and the X-linked gene IQSEC2. A de novo frameshift variant in candidate gene HNRNPU was the only de novo variant found among the followed-up candidate genes, and the patient's phenotype was similar to a few recent publications.
  • Kroes, H. Y., Monroe, G. R., van der Zwaag, B., Duran, K. J., De Kovel, C. G. F., van Roosmalen, M. J., Harakalova, M., Nijman, I. J., Kloosterman, W. P., Giles, R. H., Knoers, N. V., & van Haaften, G. (2016). Joubert syndrome: genotyping a Northern European patient cohort. European Journal of Human Genetics, 24(2), 214-20. doi:10.1038/ejhg.2015.84.
  • Kulakova, E., & Nieuwland, M. S. (2016). Pragmatic skills predict online counterfactual comprehension: Evidence from the N400. Cognitive, Affective and Behavioral Neuroscience, 16(5), 814-824. doi:10.3758/s13415-016-0433-4.

    Abstract

    Counterfactual thought allows people to consider alternative worlds they know to be false. Communicating these thoughts through language poses a social-communicative challenge because listeners typically expect a speaker to produce true utterances, but counterfactuals per definition convey information that is false. Listeners must therefore incorporate overt linguistic cues (subjunctive mood, such as in If I loved you then) in a rapid way to infer the intended counterfactual meaning. The present EEG study focused on the comprehension of such counterfactual antecedents and investigated if pragmatic ability—the ability to apply knowledge of the social-communicative use of language in daily life—predicts the online generation of counterfactual worlds. This yielded two novel findings: (1) Words that are consistent with factual knowledge incur a semantic processing cost, as reflected in larger N400 amplitude, in counterfactual antecedents compared to hypothetical antecedents (If sweets were/are made of sugar). We take this to suggest that counterfactuality is quickly incorporated during language comprehension and reduces online expectations based on factual knowledge. (2) Individual scores on the Autism Quotient Communication subscale modulated this effect, suggesting that individuals who are better at understanding the communicative intentions of other people are more likely to reduce knowledge-based expectations in counterfactuals. These results are the first demonstration of the real-time pragmatic processes involved in creating possible worlds.
  • Kulakova, E., & Nieuwland, M. S. (2016). Understanding Counterfactuality: A Review of Experimental Evidence for the Dual Meaning of Counterfactuals. Language and Linguistics Compass, 10(2), 49-65. doi:10.1111/lnc3.12175.

    Abstract

    Cognitive and linguistic theories of counterfactual language comprehension assume that counterfactuals convey a dual meaning. Subjunctive-counterfactual conditionals (e.g., ‘If Tom had studied hard, he would have passed the test’) express a supposition while implying the factual state of affairs (Tom has not studied hard and failed). The question of how counterfactual dual meaning plays out during language processing is currently gaining interest in psycholinguistics. Whereas numerous studies using offline measures of language processing consistently support counterfactual dual meaning, evidence coming from online studies is less conclusive. Here, we review the available studies that examine online counterfactual language comprehension through behavioural measurement (self-paced reading times, eye-tracking) and neuroimaging (electroencephalography, functional magnetic resonance imaging). While we argue that these studies do not offer direct evidence for the online computation of counterfactual dual meaning, they provide valuable information about the way counterfactual meaning unfolds in time and influences successive information processing. Further advances in research on counterfactual comprehension require more specific predictions about how counterfactual dual meaning impacts incremental sentence processing.
  • Kunert, R., Willems, R. M., & Hagoort, P. (2016). An independent psychometric evaluation of the PROMS measure of music perception skills. PLoS One, 11(7): e0159103. doi:10.1371/journal.pone.0159103.

    Abstract

    The Profile of Music Perception Skills (PROMS) is a recently developed measure of perceptual music skills which has been shown to have promising psychometric properties. In this paper we extend the evaluation of its brief version to three kinds of validity using an individual difference approach. The brief PROMS displays good discriminant validity with working memory, given that it does not correlate with backward digit span (r = .04). Moreover, it shows promising criterion validity (association with musical training (r = .45), musicianship status (r = .48), and self-rated musical talent (r = .51)). Finally, its convergent validity, i.e. relation to an unrelated measure of music perception skills, was assessed by correlating the brief PROMS to harmonic closure judgment accuracy. Two independent samples point to good convergent validity of the brief PROMS (r = .36; r = .40). The same association is still significant in one of the samples when including self-reported music skill in a partial correlation (rpartial = .30; rpartial = .17). Overall, the results show that the brief version of the PROMS displays a very good pattern of construct validity. Especially its tuning subtest stands out as a valuable part for music skill evaluations in Western samples. We conclude by briefly discussing the choice faced by music cognition researchers between different musical aptitude measures of which the brief PROMS is a well evaluated example.
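
    Illustrative code sketch

    The partial correlations reported in this abstract follow the standard first-order formula. The sketch below applies that formula to simulated scores; the variable names are placeholders, not the published data:

      # First-order partial correlation: association between PROMS score (x)
      # and harmonic-closure accuracy (y) after removing variance shared with
      # self-reported music skill (z). Simulated data, not the published set.
      import numpy as np

      def partial_corr(x, y, z):
          r_xy = np.corrcoef(x, y)[0, 1]
          r_xz = np.corrcoef(x, z)[0, 1]
          r_yz = np.corrcoef(y, z)[0, 1]
          return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

      rng = np.random.default_rng(2)
      z = rng.normal(size=200)                       # self-reported music skill
      x = 0.5 * z + rng.normal(size=200)             # brief PROMS score
      y = 0.4 * x + 0.2 * z + rng.normal(size=200)   # closure-judgement accuracy
      print(round(partial_corr(x, y, z), 2))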
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Kunert, R., Willems, R. M., & Hagoort, P. (2016). Language influences music harmony perception: effects of shared syntactic integration resources beyond attention. Royal Society Open Science, 3(2): 150685. doi:10.1098/rsos.150685.

    Abstract

    Many studies have revealed shared music–language processing resources by finding an influence of music harmony manipulations on concurrent language processing. However, the nature of the shared resources has remained ambiguous. They have been argued to be syntax specific and thus due to shared syntactic integration resources. An alternative view regards them as related to general attention and, thus, not specific to syntax. The present experiments evaluated these accounts by investigating the influence of language on music. Participants were asked to provide closure judgements on harmonic sequences in order to assess the appropriateness of sequence endings. At the same time participants read syntactic garden-path sentences. Closure judgements revealed a change in harmonic processing as the result of reading a syntactically challenging word. We found no influence of an arithmetic control manipulation (experiment 1) or semantic garden-path sentences (experiment 2). Our results provide behavioural evidence for a specific influence of linguistic syntax processing on musical harmony judgements. A closer look reveals that the shared resources appear to be needed to hold a harmonic key online in some form of syntactic working memory or unification workspace related to the integration of chords and words. Overall, our results support the syntax specificity of shared music–language processing resources.
  • Kunert, R. (2016). Internal conceptual replications do not increase independent replication success. Psychonomic Bulletin & Review, 23(5), 1631-1638. doi:10.3758/s13423-016-1030-9.

    Abstract

    Recently, many psychological effects have been surprisingly difficult to reproduce. This article asks why, and investigates whether conceptually replicating an effect in the original publication is related to the success of independent, direct replications. Two prominent accounts of low reproducibility make different predictions in this respect. One account suggests that psychological phenomena are dependent on unknown contexts that are not reproduced in independent replication attempts. By this account, internal replications indicate that a finding is more robust and, thus, that it is easier to independently replicate it. An alternative account suggests that researchers employ questionable research practices (QRPs), which increase false positive rates. By this account, the success of internal replications may just be the result of QRPs and, thus, internal replications are not predictive of independent replication success. The data of a large reproducibility project support the QRP account: replicating an effect in the original publication is not related to independent replication success. Additional analyses reveal that internally replicated and internally unreplicated effects are not very different in terms of variables associated with replication success. Moreover, social psychological effects in particular appear to lack any benefit from internal replications. Overall, these results indicate that, in this dataset at least, the influence of QRPs is at the heart of failures to replicate psychological findings, especially in social psychology. Variable, unknown contexts appear to play only a relatively minor role. I recommend practical solutions for how QRPs can be avoided.
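
    Illustrative code sketch

    The core comparison here (do internally replicated effects succeed more often in independent replication?) reduces to a 2 x 2 contingency question. The sketch below shows the shape of such a test with invented counts; it is not the Reproducibility Project data, and the article's own analyses go beyond this simple test:

      # Schematic 2 x 2 test of whether effects that were internally replicated
      # in the original paper succeed more often in independent replication.
      # Counts are invented for illustration only.
      from scipy.stats import fisher_exact

      #        independent replication:  success, failure
      table = [[12, 18],   # internally replicated in the original paper
               [14, 20]]   # not internally replicated
      print(fisher_exact(table))   # odds ratio and p-value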

    Additional information

    13423_2016_1030_MOESM1_ESM.pdf
  • Ladd, D. R., Roberts, S. G., & Dediu, D. (2015). Correlational studies in typological and historical linguistics. Annual Review of Linguistics, 1, 221-241. doi:10.1146/annurev-linguist-030514-124819.

    Abstract

    We review a number of recent studies that have identified either correlations between different linguistic features (e.g., implicational universals) or correlations between linguistic features and nonlinguistic properties of speakers or their environment (e.g., effects of geography on vocabulary). We compare large-scale quantitative studies with more traditional theoretical and historical linguistic research and identify divergent assumptions and methods that have led linguists to be skeptical of correlational work. We also attempt to demystify statistical techniques and point out the importance of informed critiques of the validity of statistical approaches. Finally, we describe various methods used in recent correlational studies to deal with the fact that, because of contact and historical relatedness, individual languages in a sample rarely represent independent data points, and we show how these methods may allow us to explore linguistic prehistory to a greater time depth than is possible with orthodox comparative reconstruction.
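
    Illustrative code sketch

    One family of methods alluded to in this abstract, controlling for the historical relatedness of languages, can be illustrated with a permutation test that shuffles a feature only within language families. This is a generic sketch on toy data, not a specific analysis from the review:

      # Generic within-family permutation test for a cross-linguistic
      # correlation: shuffling one feature only inside each language family
      # keeps family structure in the null distribution, so related languages
      # are not treated as independent data points. Toy data only.
      import numpy as np

      rng = np.random.default_rng(3)
      n_lang = 120
      family = rng.integers(0, 12, size=n_lang)            # family ID per language
      feature_a = rng.normal(size=n_lang) + 0.3 * family   # families share values
      feature_b = rng.normal(size=n_lang) + 0.3 * family

      observed = np.corrcoef(feature_a, feature_b)[0, 1]

      def shuffle_within_families(values, family, rng):
          out = values.copy()
          for f in np.unique(family):
              idx = np.where(family == f)[0]
              out[idx] = rng.permutation(values[idx])
          return out

      null = np.array([np.corrcoef(shuffle_within_families(feature_a, family, rng),
                                   feature_b)[0, 1]
                       for _ in range(2000)])
      p = np.mean(np.abs(null) >= abs(observed))
      print(round(observed, 2), p)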
  • Lai, V. T., & Huettig, F. (2016). When prediction is fulfilled: Insight from emotion processing. Neuropsychologia, 85, 110-117. doi:10.1016/j.neuropsychologia.2016.03.014.

    Abstract

    Research on prediction in language processing has focused predominantly on the function of predictive context and less on the potential contribution of the predicted word. The present study investigated how meaning that is not immediately prominent in the contents of predictions but is part of the predicted words influences sentence processing. We used emotional meaning to address this question. Participants read emotional and neutral words embedded in highly predictive and non-predictive sentential contexts, with the two types of sentential context rated similarly for emotionality. Event-related potential (ERP) effects of prediction and emotion both started at ~200 ms. Confirmed predictions elicited larger P200s than violated predictions when the target words were non-emotional (neutral), but this effect was absent when the target words were emotional. Likewise, emotional words elicited larger P200s than neutral words when the contexts were non-predictive, but this effect was absent when the contexts were predictive. We conjecture that the prediction and emotion effects at ~200 ms may share similar neural process(es). We suggest that such process(es) could be affective, where confirmed predictions and word emotion give rise to ‘aha’ or reward feelings, and/or cognitive, where both prediction and word emotion quickly engage attention.

    Additional information

    Lai_Huettig_2016_supp.xlsx
  • Lai, V. T., & Curran, T. (2015). Erratum to “ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors” [Brain Lang. 127 (3) (2013) 484–496]. Brain and Language, 149, 148-150. doi:10.1016/j.bandl.2014.11.001.
  • Lai, V. T., van Dam, W., Conant, L. L., Binder, J. R., & Desai, R. H. (2015). Familiarity differentially affects right hemisphere contributions to processing metaphors and literals. Frontiers in Human Neuroscience, 9: 44. doi:10.3389/fnhum.2015.00044.

    Abstract

    The role of the two hemispheres in processing metaphoric language is controversial. While some studies have reported a special role of the right hemisphere (RH) in processing metaphors, others indicate no difference in laterality relative to literal language. Some studies have found a role of the RH for novel/unfamiliar metaphors, but not conventional/familiar metaphors. It is not clear, however, whether the role of the RH is specific to metaphor novelty, or whether it reflects processing, reinterpretation or reanalysis of novel/unfamiliar language in general. Here we used functional magnetic resonance imaging (fMRI) to examine the effects of familiarity in both metaphoric and non-metaphoric sentences. A left lateralized network containing the middle and inferior frontal gyri, posterior temporal regions in the left hemisphere (LH), and inferior frontal regions in the RH, was engaged across both metaphoric and non-metaphoric sentences; engagement of this network decreased as familiarity decreased. No region was engaged selectively for greater metaphoric unfamiliarity. An analysis of laterality, however, showed that the contribution of the RH relative to that of the LH does increase in a metaphor-specific manner as familiarity decreases. These results show that RH regions, taken by themselves, including commonly reported regions such as the right inferior frontal gyrus (IFG), are responsive to increased cognitive demands of processing unfamiliar stimuli, rather than being metaphor-selective. The division of labor between the two hemispheres, however, does shift towards the right for metaphoric processing. The shift results not because the RH contributes more to metaphoric processing, but because, relative to its contribution for processing literals, the LH contributes less.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Levy, E. R., Hodgson, S., Fox, M., Jeremiah, S., Povey, S., Jamison, D. C., Green, E. D., Vargha-Khadem, F., & Monaco, A. P. (2000). The SPCH1 region on human 7q31: Genomic characterization of the critical interval and localization of translocations associated with speech and language disorder. American Journal of Human Genetics, 67(2), 357-368. doi:10.1086/303011.

    Abstract

    The KE family is a large three-generation pedigree in which half the members are affected with a severe speech and language disorder that is transmitted as an autosomal dominant monogenic trait. In previously published work, we localized the gene responsible (SPCH1) to a 5.6-cM region of 7q31 between D7S2459 and D7S643. In the present study, we have employed bioinformatic analyses to assemble a detailed BAC-/PAC-based sequence map of this interval, containing 152 sequence tagged sites (STSs), 20 known genes, and >7.75 Mb of completed genomic sequence. We screened the affected chromosome 7 from the KE family with 120 of these STSs (average spacing <100 kb), but we did not detect any evidence of a microdeletion. Novel polymorphic markers were generated from the sequence and were used to further localize critical recombination breakpoints in the KE family. This allowed refinement of the SPCH1 interval to a region between new markers 013A and 330B, containing ∼6.1 Mb of completed sequence. In addition, we have studied two unrelated patients with a similar speech and language disorder, who have de novo translocations involving 7q31. Fluorescence in situ hybridization analyses with BACs/PACs from the sequence map localized the t(5;7)(q22;q31.2) breakpoint in the first patient (CS) to a single clone within the newly refined SPCH1 interval. This clone contains the CAGH44 gene, which encodes a brain-expressed protein containing a large polyglutamine stretch. However, we found that the t(2;7)(p23;q31.3) breakpoint in the second patient (BRD) resides within a BAC clone mapping >3.7 Mb distal to this, outside the current SPCH1 critical interval. Finally, we investigated the CAGH44 gene in affected individuals of the KE family, but we found no mutations in the currently known coding sequence. These studies represent further steps toward the isolation of the first gene to be implicated in the development of speech and language.
  • Lam, K. J. Y., Dijkstra, T., & Rueschemeyer, S.-A. (2015). Feature activation during word recognition: action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6: 659. doi:10.3389/fpsyg.2015.00659.

    Abstract

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
  • Lam, N. H. L., Schoffelen, J.-M., Udden, J., Hulten, A., & Hagoort, P. (2016). Neural activity during sentence processing as reflected in theta, alpha, beta and gamma oscillations. NeuroImage, 142(15), 43-54. doi:10.1016/j.neuroimage.2016.03.007.

    Abstract

    We used magnetoencephalography (MEG) to explore the spatio-temporal dynamics of neural oscillations associated with sentence processing, in 102 participants. We quantified changes in oscillatory power as the sentence unfolded, and in response to individual words in the sentence. For words early in a sentence compared to those late in the same sentence, we observed differences in left temporal and frontal areas, and bilateral frontal and right parietal regions for the theta, alpha, and beta frequency bands. The neural response to words in a sentence differed from the response to words in scrambled sentences in left-lateralized theta, alpha, beta, and gamma. The theta band effects suggest that a sentential context facilitates lexical retrieval, and that this facilitation is stronger for words late in the sentence. Effects in the alpha and beta band may reflect the unification of semantic and syntactic information, and are suggestive of easier unification late in a sentence. The gamma oscillations are indicative of predicting the upcoming word during sentence processing. In conclusion, changes in oscillatory neuronal activity capture aspects of sentence processing. Our results support earlier claims that language (sentence) processing recruits areas distributed across both hemispheres, and extends beyond the classical language regions.
  • Lammertink, I., Casillas, M., Benders, T., Post, B., & Fikkert, P. (2015). Dutch and English toddlers' use of linguistic cues in predicting upcoming turn transitions. Frontiers in Psychology, 6: 495. doi:10.3389/fpsyg.2015.00495.
  • de Lange, I. M., Helbig, K. L., Weckhuysen, S., Moller, R. S., Velinov, M., Dolzhanskaya, N., Marsh, E., Helbig, I., Devinsky, O., Tang, S., Mefford, H. C., Myers, C. T., van Paesschen, W., Striano, P., van Gassen, K., van Kempen, M., De Kovel, C. G. F., Piard, J., Minassian, B. A., Nezarati, M. M., Pessoa, A., Jacquette, A., Maher, B., Balestrini, S., Sisodiya, S., Warde, M. T., De St Martin, A., Chelly, J., van 't Slot, R., Van Maldergem, L., Brilstra, E. H., & Koeleman, B. P. (2016). De novo mutations of KIAA2022 in females cause intellectual disability and intractable epilepsy. Journal of Medical Genetics, 53(12), 850-858. doi:10.1136/jmedgenet-2016-103909.

    Abstract

    Background Mutations in the KIAA2022 gene have been reported in male patients with X-linked intellectual disability, and related female carriers were unaffected. Here, we report 14 female patients who carry a heterozygous de novo KIAA2022 mutation and share a phenotype characterised by intellectual disability and epilepsy.

    Methods Reported females were selected for genetic testing because of substantial developmental problems and/or epilepsy. X-inactivation and expression studies were performed when possible.

    Results All mutations were predicted to result in a frameshift or premature stop. 12 out of 14 patients had intractable epilepsy with myoclonic and/or absence seizures, and generalised in 11. Thirteen patients had mild to severe intellectual disability. This female phenotype partially overlaps with the reported male phenotype which consists of more severe intellectual disability, microcephaly, growth retardation, facial dysmorphisms and, less frequently, epilepsy. One female patient showed completely skewed X-inactivation, complete absence of RNA expression in blood and a phenotype similar to male patients. In the six other tested patients, X-inactivation was random, confirmed by a non-significant twofold to threefold decrease of RNA expression in blood, consistent with the expected mosaicism between cells expressing mutant or normal KIAA2022 alleles.

    Conclusions Heterozygous loss of KIAA2022 expression is a cause of intellectual disability in females. Compared with its hemizygous male counterpart, the heterozygous female disease has less severe intellectual disability, but is more often associated with a severe and intractable myoclonic epilepsy.
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lattenkamp, E. Z., Mandák, M., & Scherz, M. D. (2016). The advertisement call of Stumpffia be Köhler, Vences, D'Cruze & Glaw, 2010 (Anura: Microhylidae: Cophylinae). Zootaxa, 4205(5), 483-485. doi:10.11646/zootaxa.4205.5.7.

    Abstract

    We describe the calls of Stumpffia be Köhler, Vences, D’Cruze & Glaw, 2010. This is the first call description made for a species belonging to the large-bodied northern Madagascan radiation of Stumpffia Boettger, 1881. Stumpffia is a genus of small (~9–28 mm) microhylid frogs in the Madagascar-endemic subfamily Cophylinae Cope. Little is known about their reproductive strategies. Most species are assumed to lay their eggs in foam nests in the leaf litter of Madagascar’s humid and semi-humid forests (Glaw & Vences 1994; Klages et al. 2013). They exhibit some degree of parental care, with the males guarding the nest after eggs are laid (Klages et al. 2013). The bioacoustic repertoire of these frogs is thought to be limited, and there are two distinct call structures known for the genus: the advertisement call of the type species, S. psologlossa Boettger, 1881, is apparently unique in being a trill of notes repeated in short succession. All other species from which calls are known emit single, whistling or chirping notes (Vences & Glaw 1991; Vences et al. 2006).

  • Lau, E., Weber, K., Gramfort, A., Hämäläinen, M., & Kuperberg, G. (2016). Spatiotemporal signatures of lexical–semantic prediction. Cerebral Cortex, 26(4), 1377-1387. doi:10.1093/cercor/bhu219.

    Abstract

    Although there is broad agreement that top-down expectations can facilitate lexical-semantic processing, the mechanisms driving these effects are still unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to specifically determine the cortical areas associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. In order to modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across 2 blocks (10 and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source localization of activity in this time-window (350-450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical-semantic prediction on the electrophysiological response 350-450 ms postonset reflect modulation of activity in left anterior temporal cortex.
  • Lausberg, H., & Sloetjes, H. (2016). The revised NEUROGES–ELAN system: An objective and reliable interdisciplinary analysis tool for nonverbal behavior and gesture. Behavior Research Methods, 48, 973-993. doi:10.3758/s13428-015-0622-z.

    Abstract

    As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES–ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES–ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.
  • Lee, S. A., Ferrari, A., Vallortigara, G., & Sovrano, V. A. (2015). Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behavioural Processes, 119, 116-122. doi:10.1016/j.beproc.2015.07.012.

    Abstract

    The ability to map locations in the surrounding environment is crucial for any navigating animal. Decades of research on mammalian spatial representations suggest that environmental boundaries play a major role in both navigation behavior and hippocampal place coding. Although the capacity for spatial mapping is shared among vertebrates, including birds and fish, it is not yet clear whether such similarities in competence reflect common underlying mechanisms. The present study tests cue specificity in spatial mapping in zebrafish, by probing their use of various visual cues to encode the location of a nearby conspecific. The results suggest that untrained zebrafish, like other vertebrates tested so far, rely primarily on environmental boundaries to compute spatial relationships and, at the same time, use other visible features such as surface markings and freestanding objects as local cues to goal locations. We propose that the pattern of specificity in spontaneous spatial mapping behavior across vertebrates reveals cross-species commonalities in its underlying neural representations.
  • Lemke, J. R., Geider, K., Helbig, K. L., Heyne, H. O., Schutz, H., Hentschel, J., Courage, C., Depienne, C., Nava, C., Heron, D., Moller, R. S., Hjalgrim, H., Lal, D., Neubauer, B. A., Nurnberg, P., Thiele, H., Kurlemann, G., Arnold, G. L., Bhambhani, V., Bartholdi, D., Pedurupillay, C. R., Misceo, D., Frengen, E., Stromme, P., Dlugos, D. J., Doherty, E. S., Bijlsma, E. K., Ruivenkamp, C. A., Hoffer, M. J., Goldstein, A., Rajan, D. S., Narayanan, V., Ramsey, K., Belnap, N., Schrauwen, I., Richholt, R., Koeleman, B. P., Sa, J., Mendonca, C., De Kovel, C. G. F., Weckhuysen, S., Hardies, K., De Jonghe, P., De Meirleir, L., Milh, M., Badens, C., Lebrun, M., Busa, T., Francannet, C., Piton, A., Riesch, E., Biskup, S., Vogt, H., Dorn, T., Helbig, I., Michaud, J. L., Laube, B., & Syrbe, S. (2016). Delineating the GRIN1 phenotypic spectrum: A distinct genetic NMDA receptor encephalopathy. Neurology, 86(23), 2171-2178. doi:10.1212/wnl.0000000000002740.
  • Leonard, M., Baud, M., Sjerps, M. J., & Chang, E. (2016). Perceptual restoration of masked speech in human cortex. Nature Communications, 7: 13619. doi:10.1038/ncomms13619.

    Abstract

    Humans are adept at understanding speech despite the fact that our natural listening environment is often filled with interference. An example of this capacity is phoneme restoration, in which part of a word is completely replaced by noise, yet listeners report hearing the whole word. The neurological basis for this unconscious fill-in phenomenon is unknown, despite being a fundamental characteristic of human hearing. Here, using direct cortical recordings in humans, we demonstrate that missing speech is restored at the acoustic-phonetic level in bilateral auditory cortex, in real-time. This restoration is preceded by specific neural activity patterns in a separate language area, left frontal cortex, which predicts the word that participants later report hearing. These results demonstrate that during speech perception, missing acoustic content is synthesized online from the integration of incoming sensory cues and the internal neural dynamics that bias word-level expectation and prediction.

    Additional information

    ncomms13619-s1.pdf
  • Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.

    Abstract

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

    Additional information

    Data Sheet 1.docx
  • Lev-Ari, S., & Peperkamp, S. (2016). How the demographic make-up of our community influences speech perception. The Journal of the Acoustical Society of America, 139(6), 3076-3087. doi:10.1121/1.4950811.

    Abstract

    Speech perception is known to be influenced by listeners’ expectations of the speaker. This paper tests whether the demographic makeup of individuals’ communities can influence their perception of foreign sounds by influencing their expectations of the language. Using online experiments with participants from all across the U.S. and matched census data on the proportion of Spanish and other foreign language speakers in participants’ communities, this paper shows that the demographic makeup of individuals’ communities influences their expectations of foreign languages to have an alveolar trill versus a tap (Experiment 1), as well as their consequent perception of these sounds (Experiment 2). Thus, the paper shows that while individuals’ expectations of foreign language to have a trill occasionally lead them to misperceive a tap in a foreign language as a trill, a higher proportion of non-trill language speakers in one’s community decreases this likelihood. These results show that individuals’ environment can influence their perception by shaping their linguistic expectations.
  • Lev-Ari, S. (2016). How the size of our social network influences our semantic skills. Cognitive Science, 40, 2050-2064. doi:10.1111/cogs.12317.

    Abstract

    People differ in the size of their social network, and thus in the properties of the linguistic input they receive. This article examines whether differences in social network size influence individuals’ linguistic skills in their native language, focusing on global comprehension of evaluative language. Study 1 exploits the natural variation in social network size and shows that individuals with larger social networks are better at understanding the valence of restaurant reviews. Study 2 manipulated social network size by randomly assigning participants to learn novel evaluative words as used by two (small network) versus eight (large network) speakers. It replicated the finding from Study 1, showing that those exposed to a larger social network were better at comprehending the valence of product reviews containing the novel words that were written by novel speakers. Together, these studies show that the size of one's social network can influence success at language comprehension. They thus open the door to research on how individuals’ lifestyle and the nature of their social interactions can influence linguistic skills.
  • Lev-Ari, S. (2016). Studying individual differences in the social environment to better understand language learning and processing. Linguistics Vanguard, 2(s1), 13-22. doi:10.1515/lingvan-2016-0015.
  • Lev-Ari, S. (2016). Selective grammatical convergence: Learning from desirable speakers. Discourse Processes, 53(8), 657-674. doi:10.1080/0163853X.2015.1094716.

    Abstract

    Models of language learning often assume that we learn from all the input we receive. This assumption is particularly strong in the domain of short-term and long-term grammatical convergence, where researchers argue that grammatical convergence is mostly an automatic process insulated from social factors. This paper shows that the degree to which individuals learn from grammatical input is modulated by social and contextual factors, such as the degree to which the speaker is liked and their social standing. Furthermore, such modulation is found in experiments that test generalized learning rather than convergence during the interaction. This paper thus shows the importance of the social context in grammatical learning, and indicates that the social context should be integrated into models of language learning.
  • Levelt, W. J. M. (2000). Uit talloos veel miljoenen. Natuur & Techniek, 68(11), 90.
  • Levelt, W. J. M., & Plomp, R. (1964). De waardering van muzikale intervallen. Hypothese: Orgaan van de Psychologische Faculteit der Leidse Studenten, 9(3/4), 30-39.
  • Levelt, W. J. M. (2000). Dyslexie. Natuur & Techniek, 68(4), 64.
  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M. (2000). Links en rechts: Waarom hebben we zo vaak problemen met die woorden? Natuur & Techniek, 68(7/8), 90.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) adduced any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (2000). The acquisition of syllable types. Language Acquisition, 8(3), 237-263. doi:10.1207/S15327817LA0803_2.

    Abstract

    In this article, we present an account of developmental data regarding the acquisition of syllable types. The data come from a longitudinal corpus of phonetically transcribed speech of 12 children acquiring Dutch as their first language. A developmental order of acquisition of syllable types was deduced by aligning the syllabified data on a Guttman scale. This order could be analyzed as following from an initial ranking and subsequent rerankings in the grammar of the structural constraints ONSET, NO-CODA, *COMPLEX-O, and *COMPLEX-C; some local conjunctions of these constraints; and a faithfulness constraint FAITH. The syllable type frequencies in the speech surrounding the language learner are also considered. An interesting correlation is found between the frequencies and the order of development of the different syllable types.
  • Levelt, W. J. M. (2000). The brain does not serve linguistic theory so easily [Commentary on target article by Grodzinsky]. Behavioral and Brain Sciences, 23(1), 40-41.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: “(At) what time do you close?” A: “(At) five o'clock.” Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This “correspondence effect” may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levelt, W. J. M., & Meyer, A. S. (2000). Word for word: Multiple lexical access in speech production. European Journal of Cognitive Psychology, 12(4), 433-452. doi:10.1080/095414400750050178.

    Abstract

    It is quite normal for us to produce one or two million word tokens every year. Speaking is a dear occupation and producing words is at the core of it. Still, producing even a single word is a highly complex affair. Recently, Levelt, Roelofs, and Meyer (1999) reviewed their theory of lexical access in speech production, which dissects the word-producing mechanism as a staged application of various dedicated operations. The present paper begins by presenting a bird's-eye view of this mechanism. We then square the complexity by asking how speakers control multiple access in generating simple utterances such as a table and a chair. In particular, we address two issues. The first one concerns dependency: Do temporally contiguous access procedures interact in any way, or do they run in modular fashion? The second issue concerns temporal alignment: How much temporal overlap of processing does the system tolerate in accessing multiple content words, such as table and chair? Results from picture-word interference and eye tracking experiments provide evidence for restricted cases of dependency as well as for constraints on the temporal alignment of access procedures.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levinson, S. C. (2016). “Process and perish” or multiple buffers with push-down stacks? [Commentary on Christiansen & Chater]. Behavioral and Brain Sciences, 39: e81. doi:10.1017/S0140525X15000862.

    Abstract

    This commentary raises two issues: (1) Language processing is hastened not only by internal pressures but also externally by turn-taking in language use; (2) the theory requires nested levels of processing, but linguistic levels do not fully nest; further, it would seem to require multiple memory buffers, otherwise there’s no obvious treatment for discontinuous structures, or for verbatim recall.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C. (2015). John Joseph Gumperz (1922–2013) [Obituary]. American Anthropologist, 117(1), 212-224. doi:10.1111/aman.12185.
  • Levinson, S. C. (2015). Other-initiated repair in Yélî Dnye: Seeing eye-to-eye in the language of Rossel Island. Open Linguistics, 1(1), 386-410. doi:10.1515/opli-2015-0009.

    Abstract

    Other-initiated repair (OIR) is the fundamental back-up system that ensures the effectiveness of human communication in its primordial niche, conversation. This article describes the interactional and linguistic patterns involved in other-initiated repair in Yélî Dnye, the Papuan language of Rossel Island, Papua New Guinea. The structure of the article is based on the conceptual set of distinctions described in Chapters 1 and 2 of the special issue, and describes the major properties of the Rossel Island system, and the ways in which OIR in this language both conforms to familiar European patterns and deviates from those patterns. Rossel Island specialities include lack of a Wh-word open class repair initiator, and a heavy reliance on visual signals that makes it possible both to initiate repair and confirm it non-verbally. But the overall system conforms to universal expectations.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Levinson, S. C., & Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6: 731. doi:10.3389/fpsyg.2015.00731.

    Abstract

    The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioural data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks, Schegloff and Jefferson (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or ‘project’ as SSJ have it) the end of the current speaker’s turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviourally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next.
  • Levinson, S. C. (2000). Yélî Dnye and the theory of basic color terms. Journal of Linguistic Anthropology, 10(1), 3-55. doi:10.1525/jlin.2000.10.1.3.

    Abstract

    The theory of basic color terms was a crucial factor in the demise of linguistic relativity. The theory is now once again under scrutiny and fundamental revision. This article details a case study that undermines one of the central claims of the classical theory, namely that languages universally treat color as a unitary domain, to be exhaustively named. Taken together with other cases, the study suggests that a number of languages have only an incipient color terminology, raising doubts about the linguistic universality of such terminology.
  • Levinson, S. C. (2016). Turn-taking in human communication, origins, and implications for language processing. Trends in Cognitive Sciences, 20(1), 6-14. doi:10.1016/j.tics.2015.10.010.

    Abstract

    Most language usage is interactive, involving rapid turn-taking. The turn-taking system has a number of striking properties: turns are short and responses are remarkably rapid, but turns are of varying length and often of very complex construction such that the underlying cognitive processing is highly compressed. Although neglected in cognitive science, the system has deep implications for language processing and acquisition that are only now becoming clear. Appearing earlier in ontogeny than linguistic competence, it is also found across all the major primate clades. This suggests a possible phylogenetic continuity, which may provide key insights into language evolution.
  • Levshina, N. (2016). When variables align: A Bayesian multinomial mixed-effects model of English permissive constructions. Cognitive Linguistics, 27(2), 235-268. doi:10.1515/cog-2015-0054.
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Schoffelen, J.-M., Schriefers, H., & Bastiaansen, M. C. M. (2016). A predictive coding perspective on beta oscillations during sentence-level language comprehension. Frontiers in Human Neuroscience, 10: 85. doi:10.3389/fnhum.2016.00085.

    Abstract

    Oscillatory neural dynamics have been steadily receiving more attention as a robust and temporally precise signature of network activity related to language processing. We have recently proposed that oscillatory dynamics in the beta and gamma frequency ranges measured during sentence-level comprehension might be best explained from a predictive coding perspective. Under our proposal we related beta oscillations to both the maintenance/change of the neural network configuration responsible for the construction and representation of sentence-level meaning, and to top–down predictions about upcoming linguistic input based on that sentence-level meaning. Here we zoom in on these particular aspects of our proposal, and discuss both old and new supporting evidence. Finally, we present some preliminary magnetoencephalography data from an experiment comparing Dutch subject- and object-relative clauses that was specifically designed to test our predictive coding framework. Initial results support the first of the two suggested roles for beta oscillations in sentence-level language comprehension.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Lewis, A. G., Lemhöfer, K., Schoffelen, J.-M., & Schriefers, H. (2016). Gender agreement violations modulate beta oscillatory dynamics during sentence comprehension: A comparison of second language learners and native speakers. Neuropsychologia, 89(1), 254-272. doi:10.1016/j.neuropsychologia.2016.06.031.

    Abstract

    For native speakers, many studies suggest a link between oscillatory neural activity in the beta frequency range and syntactic processing. For late second language (L2) learners on the other hand, the extent to which the neural architecture supporting syntactic processing is similar to or different from that of native speakers is still unclear. In a series of four experiments, we used electroencephalography to investigate the link between beta oscillatory activity and the processing of grammatical gender agreement in Dutch determiner-noun pairs, for Dutch native speakers, and for German L2 learners of Dutch. In Experiment 1 we show that for native speakers, grammatical gender agreement violations are yet another among many syntactic factors that modulate beta oscillatory activity during sentence comprehension. Beta power is higher for grammatically acceptable target words than for those that mismatch in grammatical gender with their preceding determiner. In Experiment 2 we observed no such beta modulations for L2 learners, irrespective of whether trials were sorted according to objective or subjective syntactic correctness. Experiment 3 ruled out that the absence of a beta effect for the L2 learners in Experiment 2 was due to repetition of the target nouns in objectively correct and incorrect determiner-noun pairs. Finally, Experiment 4 showed that when L2 learners are required to explicitly focus on grammatical information, they show modulations of beta oscillatory activity, comparable to those of native speakers, but only when trials are sorted according to participants’ idiosyncratic lexical representations of the grammatical gender of target nouns. Together, these findings suggest that beta power in L2 learners is sensitive to violations of grammatical gender agreement, but only when the importance of grammatical information is highlighted, and only when participants' subjective lexical representations are taken into account.
  • Lima, C. F., Lavan, N., Evans, S., Agnew, Z., Halpern, A. R., Shanmugalingam, P., Meekings, S., Boebinger, D., Ostarek, M., McGettigan, C., Warren, J. E., & Scott, S. K. (2015). Feel the Noise: Relating individual differences in auditory imagery to the structure and function of sensorimotor systems. Cerebral Cortex, 25, 4638-4650. doi:10.1093/cercor/bhv134.

    Abstract

    Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
  • Liszkowski, U., & Ramenzoni, V. C. (2015). Pointing to nothing? Empty places prime infants' attention to absent objects. Infancy, 20, 433-444. doi:10.1111/infa.12080.

    Abstract

    People routinely point to empty space when referring to absent entities. These points to "nothing" are meaningful because they direct attention to places that stand in for specific entities. Typically, the meaning of places in terms of absent referents is established through preceding discourse and accompanying language. However, it is unknown whether nonlinguistic actions can establish locations as meaningful places, and whether infants have the capacity to represent a place as standing in for an object. In a novel eye-tracking paradigm, 18-month-olds watched objects being placed in specific locations. Then, the objects disappeared and a point directed infants' attention to an emptied place. The point to the empty place primed infants in a subsequent scene (in which the objects appeared at novel locations) to look more to the object belonging to the indicated place than to a distracter referent. The place-object expectations were strong enough to interfere when reversing the place-object associations. Findings show that infants comprehend nonlinguistic reference to absent entities, which reveals an ontogenetically early, nonverbal understanding of places as representations of absent objects.
  • Lockwood, G. (2016). Academic clickbait: Articles with positively-framed titles, interesting phrasing, and no wordplay get more attention online. The Winnower, 3: e146723.36330. doi:10.15200/winn.146723.36330.

    Abstract

    This article is about whether the factors which drive online sharing of non-scholarly content also apply to academic journal titles. It uses Altmetric scores as a measure of online attention to articles from Frontiers in Psychology published in 2013 and 2014. Article titles with result-oriented positive framing and more interesting phrasing receive higher Altmetric scores, i.e., get more online attention. Article titles with wordplay and longer article titles receive lower Altmetric scores. This suggests that the same factors that affect how widely non-scholarly content is shared extend to academia, which has implications for how academics can make their work more likely to have more impact.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). How iconicity helps people learn new words: neural correlates and individual differences in sound-symbolic bootstrapping. Collabra, 2(1): 7. doi:10.1525/collabra.42.

    Abstract

    Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word learning harder, especially for people who are more sensitive to sound symbolism.

    Additional information

    https://osf.io/ema3t/
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Lockwood, G., Dingemanse, M., & Hagoort, P. (2016). Sound-symbolism boosts novel word learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1274-1281. doi:10.1037/xlm0000235.

    Abstract

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory information, to investigate how sensitive Dutch speakers are to sound-symbolism in Japanese in a learning task. Participants were taught 2 sets of Japanese ideophones; 1 set with the ideophones’ real meanings in Dutch, the other set with their opposite meanings. In Experiment 1, participants learned the ideophones and their real meanings much better than the ideophones with their opposite meanings. Moreover, despite the learning rounds, participants were still able to guess the real meanings of the ideophones in a 2-alternative forced-choice test after they were informed of the manipulation. This shows that natural language sound-symbolism is robust beyond 2-alternative forced-choice paradigms and affects broader language processes such as word learning. In Experiment 2, participants learned regular Japanese adjectives with the same manipulation, and there was no difference between real and opposite conditions. This shows that natural language sound-symbolism is especially strong in ideophones, and that people learn words better when form and meaning match. The highlights of this study are as follows: (a) Dutch speakers learn real meanings of Japanese ideophones better than opposite meanings, (b) Dutch speakers accurately guess meanings of Japanese ideophones, (c) this sensitivity happens despite learning some opposite pairings, (d) no such learning effect exists for regular Japanese adjectives, and (e) this shows the importance of sound-symbolism in scaffolding language learning.
  • Love, B. C., Kopeć, Ł., & Guest, O. (2015). Optimism bias in fans and sports reporters. PLoS One, 10(9): e0137685. doi:10.1371/journal.pone.0137685.

    Abstract

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).

    Additional information

    raw data
  • Lozano, R., Vino, A., Lozano, C., Fisher, S. E., & Deriziotis, P. (2015). A de novo FOXP1 variant in a patient with autism, intellectual disability and severe speech and language impairment. European Journal of Human Genetics, 23, 1702-1707. doi:10.1038/ejhg.2015.66.

    Abstract

    FOXP1 (forkhead box protein P1) is a transcription factor involved in the development of several tissues, including the brain. An emerging phenotype of patients with protein-disrupting FOXP1 variants includes global developmental delay, intellectual disability and mild to severe speech/language deficits. We report on a female child with a history of severe hypotonia, autism spectrum disorder and mild intellectual disability with severe speech/language impairment. Clinical exome sequencing identified a heterozygous de novo FOXP1 variant c.1267_1268delGT (p.V423Hfs*37). Functional analyses using cellular models show that the variant disrupts multiple aspects of FOXP1 activity, including subcellular localization and transcriptional repression properties. Our findings highlight the importance of performing functional characterization to help uncover the biological significance of variants identified by genomics approaches, thereby providing insight into pathways underlying complex neurodevelopmental disorders. Moreover, our data support the hypothesis that de novo variants represent significant causal factors in severe sporadic disorders and extend the phenotype seen in individuals with FOXP1 haploinsufficiency.
  • Majid, A., & Van Staden, M. (2015). Can nomenclature for the body be explained by embodiment theories? Topics in Cognitive Science, 7(4), 570-594. doi:10.1111/tops.12159.

    Abstract

    According to widespread opinion, the meaning of body part terms is determined by salient discontinuities in the visual image, such that hands, feet, arms, and legs are natural parts. If so, one would expect these parts to have distinct names which correspond in meaning across languages. To test this proposal, we compared three unrelated languages—Dutch, Japanese, and Indonesian—and found both naming systems and boundaries of even basic body part terms display variation across languages. Bottom-up cues alone cannot explain natural language semantic systems; there simply is not a one-to-one mapping of the body semantic system to the body structural description. Although body parts are flexibly construed across languages, body part semantics are, nevertheless, constrained by non-linguistic representations in the body structural description, suggesting these are necessary, although not sufficient, in accounting for aspects of the body lexicon.
  • Majid, A. (2015). Cultural factors shape olfactory language. Trends in Cognitive Sciences, 19(11), 629-630. doi:10.1016/j.tics.2015.06.009.
  • Majid, A. (2016). The content of minds: Asifa Majid talks to Jon Sutton about language and thought. The Psychologist, 29, 554-556.
  • Majid, A., Jordan, F., & Dunn, M. (Eds.). (2015). Semantic systems in closely related languages [Special Issue]. Language Sciences, 49.
  • Majid, A., Jordan, F., & Dunn, M. (2015). Semantic systems in closely related languages. Language Sciences, 49, 1-18. doi:10.1016/j.langsci.2014.11.002.

    Abstract

    In each semantic domain studied to date, there is considerable variation in how meanings are expressed across languages. But are some semantic domains more likely to show variation than others? Is the domain of space more or less variable in its expression than other semantic domains, such as containers, body parts, or colours? According to many linguists, the meanings expressed in grammaticised expressions, such as (spatial) adpositions, are more likely to be similar across languages than meanings expressed in open class lexical items. On the other hand, some psychologists predict there ought to be more variation across languages in the meanings of adpositions, than in the meanings of nouns. This is because relational categories, such as those expressed as adpositions, are said to be constructed by language; whereas object categories expressed as nouns are predicted to be “given by the world”. We tested these hypotheses by comparing the semantic systems of closely related languages. Previous cross-linguistic studies emphasise the importance of studying diverse languages, but we argue that a focus on closely related languages is advantageous because domains can be compared in a culturally- and historically-informed manner. Thus we collected data from 12 Germanic languages. Naming data were collected from at least 20 speakers of each language for containers, body-parts, colours, and spatial relations. We found the semantic domains of colour and body-parts were the most similar across languages. Containers showed some variation, but spatial relations expressed in adpositions showed the most variation. The results are inconsistent with the view expressed by most linguists. Instead, we find meanings expressed in grammaticised expressions are more variable than meanings in open class lexical items.
  • Mani, N., Daum, M., & Huettig, F. (2016). “Pro-active” in many ways: Developmental evidence for a dynamic pluralistic approach to prediction. Quarterly Journal of Experimental Psychology, 69(11), 2189-2201. doi:10.1080/17470218.2015.1111395.

    Abstract

    The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners use the production system covertly to anticipate what the other person might be likely to say) are two potential factors, which have been proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence “The boy reads a”, based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children’s prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context-dependent. We conclude that multiple factors simultaneously influence listeners’ anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners’ prowess at predictive language processing.
  • Manrique, E. (2016). Other-initiated repair in Argentine Sign Language. Open Linguistics, 2, 1-34. doi:10.1515/opli-2016-0001.

    Abstract

    Other-initiated repair is an essential interactional practice to secure mutual understanding in everyday interaction. This article presents evidence from a large conversational corpus of a sign language, showing that signers of Argentine Sign Language (Lengua de Señas Argentina or ‘LSA’), like users of spoken languages, use a systematic set of linguistic formats and practices to indicate troubles of signing, seeing and understanding. The general aim of this article is to provide a general overview of the different visual-gestural linguistic patterns of other-initiated repair sequences in LSA. It also describes the quantitative distribution of other-initiated repair formats based on a collection of 213 cases. It describes the multimodal components of open and restricted types of repair initiators, and reports a previously undescribed implicit practice to initiate repair in LSA in comparison to explicitly produced formats. Part of a special issue presenting repair systems across a range of languages, this article contributes to a better understanding of the phenomenon of other-initiated repair in terms of visual and gestural practices in human interaction in both signed and spoken languages.
  • Manrique, E., & Enfield, N. J. (2015). Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology, 6: 1326. doi:10.3389/fpsyg.2015.01326.

    Abstract

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look," which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known on-record practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

    Additional information

    Manrique_Enfield_2015_supp.pdf
  • Martin, J.-R., Kösem, A., & van Wassenhove, V. (2015). Hysteresis in audiovisual synchrony perception. PLoS One, 10(3): e0119365. doi:10.1371/journal.pone.0119365.

    Abstract

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic condition, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective condition, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications regarding the comparative study of hysteresis and adaptation phenomena.
  • Martin, A. E. (2016). Language processing as cue integration: Grounding the psychology of language in perception and neurophysiology. Frontiers in Psychology, 7: 120. doi:10.3389/fpsyg.2016.00120.

    Abstract

    I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.
  • Matić, D., & Odé, C. (2015). On prosodic signalling of focus in Tundra Yukaghir. Acta Linguistica Petropolitana, 11(2), 627-644.
  • McQueen, J. M., Eisner, F., & Norris, D. (2016). When brain regions talk to each other during speech processing, what are they talking about? Commentary on Gow and Olson (2015). Language, Cognition and Neuroscience, 31(7), 860-863. doi:10.1080/23273798.2016.1154975.

    Abstract

    This commentary on Gow and Olson [2015. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data. Language, Cognition and Neuroscience. doi:10.1080/23273798.2015.1029498] questions in three ways their conclusion that speech perception is based on interactive processing. First, it is not clear that the data presented by Gow and Olson reflect normal speech recognition. Second, Gow and Olson's conclusion depends on still-debated assumptions about the functions performed by specific brain regions. Third, the results are compatible with feedforward models of speech perception and appear inconsistent with models in which there are online interactions about phonological content. We suggest that progress in the neuroscience of speech perception requires the generation of testable hypotheses about the function(s) performed by inter-regional connections.
  • Meekings, S., Boebinger, D., Evans, S., Lima, C. F., Chen, S., Ostarek, M., & Scott, S. K. (2015). Do we know what we’re saying? The roles of attention and sensory information during speech production. Psychological Science, 26(12), 1975-1977. doi:10.1177/0956797614563766.
  • Meira, S., & Drude, S. (2015). A summary reconstruction of Proto-Maweti-Guarani segmental phonology. Boletim do Museu Paraense Emílio Goeldi: Ciências Humanas, 10, 275-296. doi:10.1590/1981-81222015000200005.

    Abstract

    This paper presents a succinct reconstruction of the segmental phonology of Proto-Maweti-Guarani, the hypothetical protolanguage from which modern Mawe, Aweti and the Tupi-Guarani branches of the Tupi linguistic family have evolved. Based on about 300 cognate sets from the authors' field data (for Mawe and Aweti) and from Mello's reconstruction (2000) for Proto-Tupi-Guarani (with additional information from other works; and with a few changes concerning certain doubtful features, such as the status of stem-final lenis consonants *r and *β, and the distinction of *c and *č), the consonants and vowels of Proto-Maweti-Guarani were reconstructed with the help of the traditional historical-comparative method. The development of the reconstructed segments is then traced from the protolanguage to each of the modern branches. A comparison with other claims made about Proto-Maweti-Guarani is given in the conclusion.
  • Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016). Same, different, or closely related: What is the relationship between language production and comprehension? Journal of Memory and Language, 89, 1-7. doi:10.1016/j.jml.2016.03.002.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.
  • Meyer, A. S., & Levelt, W. J. M. (2000). Merging speech perception and production [Comment on Norris, McQueen and Cutler]. Behavioral and Brain Sciences, 23(3), 339-340. doi:10.1017/S0140525X00373241.

    Abstract

    A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S., & Van der Meulen, F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
