Publications

  • Klein, W. (1989). Sprechen lernen - das Selbstverständlichste von der Welt: Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 73, 7-17.
  • Klein, W. (1989). Schreiben oder Lesen, aber nicht beides, oder: Vorschlag zur Wiedereinführung der Keilschrift mittels Hammer und Meißel. Zeitschrift für Literaturwissenschaft und Linguistik, 74, 116-119.
  • Klein, W. (2004). Was die Geisteswissenschaften leider noch von den Naturwissenschaften unterscheidet. Gegenworte, 13, 79-84.
  • Klein, W. (1991). Was kann sich die Übersetzungswissenschaft von der Linguistik erwarten? Zeitschrift für Literaturwissenschaft und Linguistik, 84, 104-123.
  • Klein, W. (1986). Über Ansehen und Wirkung der deutschen Sprachwissenschaft heute. Linguistische Berichte, 100, 511-520.
  • Klein, W. (1998). Von der einfältigen Wißbegierde. Zeitschrift für Literaturwissenschaft und Linguistik, 112, 6-13.
  • Knösche, T. R., & Bastiaansen, M. C. M. (2002). On the time resolution of event-related desynchronization: A simulation study. Clinical Neurophysiology, 113(5), 754-763. doi:10.1016/S1388-2457(02)00055-X.

    Abstract

    Objectives: To investigate the time resolution of different methods for the computation of event-related desynchronization/synchronization (ERD/ERS), including one based on Hilbert transform. Methods: In order to better understand the time resolution of ERD/ERS, which is a function of factors such as the exact computation method, the frequency under study, the number of trials, and the sampling frequency, we simulated sudden changes in oscillation amplitude as well as very short and closely spaced events. Results: Hilbert-based ERD yields very similar results to ERD integrated over predefined time intervals (block ERD), if the block length is half the period length of the studied frequency. ERD predicts the onset of a change in oscillation amplitude with an error margin of only 10–30 ms. On the other hand, the time the ERD response needs to climb to its full height after a sudden change in oscillation amplitude is quite long, i.e. between 200 and 500 ms. With respect to sensitivity to short oscillatory events, the ratio between sampling frequency and electroencephalographic frequency band plays a major role. Conclusions: (1) The optimal time interval for the computation of block ERD is half a period of the frequency under investigation. (2) Due to the slow impulse response, amplitude effects in the ERD may in reality be caused by duration differences. (3) Although ERD based on the Hilbert transform does not yield any significant advantages over classical ERD in terms of time resolution, it has some important practical advantages.
  • Knudsen, B., Fischer, M., & Aschersleben, G. (2015). The development of Arabic digit knowledge in 4- to 7-year-old children. Journal of Numerical Cognition, 1(1), 21-37. doi:10.5964/jnc.v1i1.4.

    Abstract

    Recent studies indicate that Arabic digit knowledge rather than non-symbolic number knowledge is a key foundation for arithmetic proficiency at the start of a child’s mathematical career. We document the developmental trajectory of 4- to 7-year-olds’ proficiency in accessing magnitude information from Arabic digits in five tasks differing in magnitude manipulation requirements. Results showed that children from 5 years onwards accessed magnitude information implicitly and explicitly, but that 5-year-olds failed to access magnitude information explicitly when numerical magnitude was contrasted with physical magnitude. Performance across tasks revealed a clear developmental trajectory: children traverse from first knowing the cardinal values of number words to recognizing Arabic digits to knowing their cardinal values and, concurrently, their ordinal position. Correlational analyses showed a strong within-child consistency, demonstrating that this pattern is not only reflected in group differences but also in individual performance.
  • Kong, X., Liu, Z., Huang, L., Wang, X., Yang, Z., Zhou, G., Zhen, Z., & Liu, J. (2015). Mapping Individual Brain Networks Using Statistical Similarity in Regional Morphology from MRI. PLoS One, 10(11): e0141840. doi:10.1371/journal.pone.0141840.

    Abstract

    Representing brain morphology as a network has the advantage that the regional morphology of ‘isolated’ structures can be described statistically based on graph theory. However, very few studies have investigated brain morphology from the holistic perspective of complex networks, particularly in individual brains. We proposed a new network framework for individual brain morphology. Technically, in the new network, nodes are defined as regions based on a brain atlas, and edges are estimated using our newly-developed inter-regional relation measure based on regional morphological distributions. This implementation allows nodes in the brain network to be functionally/anatomically homogeneous but different with respect to shape and size. We first demonstrated the new network framework in a healthy sample. Thereafter, we studied the graph-theoretical properties of the networks obtained and compared the results with previous morphological, anatomical, and functional networks. The robustness of the method was assessed via measurement of the reliability of the network metrics using a test-retest dataset. Finally, to illustrate potential applications, the networks were used to measure age-related changes in commonly used network metrics. Results suggest that the proposed method could provide a concise description of brain organization at a network level and be used to investigate interindividual variability in brain morphology from the perspective of complex networks. Furthermore, the method could open a new window into modeling the complexly distributed brain and facilitate the emerging field of human connectomics.

    Additional information

    https://www.nitrc.org/
  • Konopka, A. E., & Kuchinsky, S. E. (2015). How message similarity shapes the timecourse of sentence formulation. Journal of Memory and Language, 84, 1-23. doi:10.1016/j.jml.2015.04.003.
  • Köster, O., Hess, M. M., Schiller, N. O., & Künzel, H. J. (1998). The correlation between auditory speech sensitivity and speaker recognition ability. Forensic Linguistics: The International Journal of Speech, Language and the Law, 5, 22-32.

    Abstract

    In various applications of forensic phonetics the question arises as to how far aural-perceptual speaker recognition performance is reliable. Therefore, it is necessary to examine the relationship between speaker recognition results and human perception/production abilities like musicality or speech sensitivity. In this study, performance in a speaker recognition experiment and a speech sensitivity test are correlated. The results show a moderately significant positive correlation between the two tasks. Generally, performance in the speaker recognition task was better than in the speech sensitivity test. Professionals in speech and singing yielded a more homogeneous correlation than non-experts. Training in speech as well as choir-singing seems to have a positive effect on performance in speaker recognition. It may be concluded, firstly, that in cases where the reliability of voice line-up results or the credibility of a testimony have to be considered, the speech sensitivity test could be a useful indicator. Secondly, the speech sensitivity test might be integrated into the canon of possible procedures for the accreditation of forensic phoneticians. Both tests may also be used in combination.
  • Krämer, I. (1998). Children's interpretations of indefinite object noun phrases. Linguistics in the Netherlands, 1998, 163-174. doi:10.1075/avt.15.15kra.
  • Krott, A., Hagoort, P., & Baayen, R. H. (2004). Sublexical units and supralexical combinatorics in the processing of interfixed Dutch compounds. Language and Cognitive Processes, 19(3), 453-471. doi:10.1080/769813936.

    Abstract

    This study addresses the supralexical inferential processes underlying wellformedness judgements and latencies for a specific sublexical unit that appears in Dutch compounds, the interfix. Production studies have shown that the selection of interfixes in novel Dutch compounds and the speed of this selection is primarily determined by the distribution of interfixes in existing compounds that share the left constituent with the target compound, i.e. the "left constituent family". In this paper, we consider the question whether constituent families also affect wellformedness decisions of novel as well as existing Dutch compounds in comprehension. We visually presented compounds containing interfixes that were either in line with the bias of the left constituent family or not. In the case of existing compounds, we also presented variants with replaced interfixes. As in production, the bias of the left constituent family emerged as a crucial predictor for both acceptance rates and response latencies. This result supports the hypothesis that, as in production, constituent families are (co-)activated in comprehension. We argue that this co-activation is part of a supralexical inferential process, and we discuss how our data might be interpreted within sublexical and supralexical theories of morphological processing.
  • Krott, A., Libben, G., Jarema, G., Dressler, W., Schreuder, R., & Baayen, R. H. (2004). Probability in the grammar of German and Dutch: Interfixation in triconstituent compounds. Language and Speech, 47(1), 83-106.

    Abstract

    This study addresses the possibility that interfixes in multiconstituent nominal compounds in German and Dutch are functional as markers of immediate constituent structure. We report a lexical statistical survey of interfixation in the lexicons of German and Dutch which shows that all interfixes of German and one interfix of Dutch are significantly more likely to appear at the major constituent boundary than expected under chance conditions. A series of experiments provides evidence that speakers of German and Dutch are sensitive to the probabilistic cues to constituent structure provided by the interfixes. Thus, our data provide evidence that probability is part and parcel of grammatical competence.
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Küntay, A. C., & Slobin, D. I. (2002). Putting interaction back into child language: Examples from Turkish. Psychology of Language and Communication, 6(1): 14.

    Abstract

    As in the case of other non-English languages, the study of the acquisition of Turkish has mostly focused on aspects of grammatical morphology and syntax, largely neglecting the study of the effect of interactional factors on child morphosyntax. This paper reviews indications from past research that studying input and adult-child discourse can facilitate the study of the acquisition of morphosyntax in Turkish. It also reviews some recent studies of Turkish child language on the relationship of child-directed speech to the early acquisition of morphosyntax, and on the pragmatic features of a certain kind of discourse form in child-directed speech called variation sets.
  • Ladd, D. R., Roberts, S. G., & Dediu, D. (2015). Correlational studies in typological and historical linguistics. Annual Review of Linguistics, 1, 221-241. doi:10.1146/annurev-linguist-030514-124819.

    Abstract

    We review a number of recent studies that have identified either correlations between different linguistic features (e.g., implicational universals) or correlations between linguistic features and nonlinguistic properties of speakers or their environment (e.g., effects of geography on vocabulary). We compare large-scale quantitative studies with more traditional theoretical and historical linguistic research and identify divergent assumptions and methods that have led linguists to be skeptical of correlational work. We also attempt to demystify statistical techniques and point out the importance of informed critiques of the validity of statistical approaches. Finally, we describe various methods used in recent correlational studies to deal with the fact that, because of contact and historical relatedness, individual languages in a sample rarely represent independent data points, and we show how these methods may allow us to explore linguistic prehistory to a greater time depth than is possible with orthodox comparative reconstruction.
  • Lai, V. T., & Curran, T. (2015). Erratum to “ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors” [Brain Lang. 127 (3) (2013) 484–496]. Brain and Language, 149, 148-150. doi:10.1016/j.bandl.2014.11.001.
  • Lai, V. T., van Dam, W., Conant, L. L., Binder, J. R., & Desai, R. H. (2015). Familiarity differentially affects right hemisphere contributions to processing metaphors and literals. Frontiers in Human Neuroscience, 9: 44. doi:10.3389/fnhum.2015.00044.

    Abstract

    The role of the two hemispheres in processing metaphoric language is controversial. While some studies have reported a special role of the right hemisphere (RH) in processing metaphors, others indicate no difference in laterality relative to literal language. Some studies have found a role of the RH for novel/unfamiliar metaphors, but not conventional/familiar metaphors. It is not clear, however, whether the role of the RH is specific to metaphor novelty, or whether it reflects processing, reinterpretation or reanalysis of novel/unfamiliar language in general. Here we used functional magnetic resonance imaging (fMRI) to examine the effects of familiarity in both metaphoric and non-metaphoric sentences. A left lateralized network containing the middle and inferior frontal gyri, posterior temporal regions in the left hemisphere (LH), and inferior frontal regions in the RH, was engaged across both metaphoric and non-metaphoric sentences; engagement of this network decreased as familiarity decreased. No region was engaged selectively for greater metaphoric unfamiliarity. An analysis of laterality, however, showed that the contribution of the RH relative to that of the LH does increase in a metaphor-specific manner as familiarity decreases. These results show that RH regions, taken by themselves, including commonly reported regions such as the right inferior frontal gyrus (IFG), are responsive to increased cognitive demands of processing unfamiliar stimuli, rather than being metaphor-selective. The division of labor between the two hemispheres, however, does shift towards the right for metaphoric processing. The shift arises not because the RH contributes more to metaphoric processing, but because, relative to its contribution for processing literals, the LH contributes less.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Lam, K. J. Y., Dijkstra, T., & Rueschemeyer, S.-A. (2015). Feature activation during word recognition: action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6: 659. doi:10.3389/fpsyg.2015.00659.

    Abstract

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
  • Lammertink, I., Casillas, M., Benders, T., Post, B., & Fikkert, P. (2015). Dutch and English toddlers' use of linguistic cues in predicting upcoming turn transitions. Frontiers in Psychology, 6: 495. doi:10.3389/fpsyg.2015.00495.
  • De Lange, F. P., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Werf, S. P., Van der Meer, J. W. M., & Toni, I. (2004). Neural correlates of the chronic fatigue syndrome: An fMRI study. Brain, 127(9), 1948-1957. doi:10.1093/brain/awh225.

    Abstract

    Chronic fatigue syndrome (CFS) is characterized by a debilitating fatigue of unknown aetiology. Patients who suffer from CFS report a variety of physical complaints as well as neuropsychological complaints. Therefore, it is conceivable that the CNS plays a role in the pathophysiology of CFS. The purpose of this study was to investigate neural correlates of CFS, and specifically whether there exists a linkage between disturbances in the motor system and CFS. We measured behavioural performance and cerebral activity using rapid event-related functional MRI in 16 CFS patients and 16 matched healthy controls while they were engaged in a motor imagery task and a control visual imagery task. CFS patients were considerably slower on performance of both tasks, but the increase in reaction time with increasing task load was similar between the groups. Both groups used largely overlapping neural resources. However, during the motor imagery task, CFS patients evoked stronger responses in visually related structures. Furthermore, there was a marked between-groups difference during erroneous performance. In both groups, dorsal anterior cingulate cortex was specifically activated during error trials. Conversely, ventral anterior cingulate cortex was active when healthy controls made an error, but remained inactive when CFS patients made an error. Our results support the notion that CFS may be associated with dysfunctional motor planning. Furthermore, the between-groups differences observed during erroneous performance point to motivational disturbances as a crucial component of CFS.
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lausberg, H., & Kita, S. (2002). Dissociation of right and left hand gesture spaces in split-brain patients. Cortex, 38(5), 883-886. doi:10.1016/S0010-9452(08)70062-5.

    Abstract

    The present study investigates hemispheric specialisation in the use of space in communicative gestures. For this purpose, we investigate split-brain patients in whom spontaneous and distinct right hand gestures can only be controlled by the left hemisphere and vice versa, the left hand only by the right hemisphere. On this anatomical basis, we can infer hemispheric specialisation from the performances of the right and left hands. In contrast to left hand dyspraxia in tasks that require language processing, split-brain patients utilise their left hands in a meaningful way in visuo-constructive tasks such as copying drawings or block-design. Therefore, we conjecture that split-brain patients are capable of using their left hands for the communication of the content of visuo-spatial animations via gestural demonstration. On this basis, we further examine the use of space in communicative gestures by the right and left hands. McNeill and Pedelty (1995) noted for the split-brain patient N.G. that her iconic right hand gestures were exclusively displayed in the right personal space. The present study investigates systematically if there is indication for neglect of the left personal space in right hand gestures in split-brain patients.
  • Lee, S. A., Ferrari, A., Vallortigara, G., & Sovrano, V. A. (2015). Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behavioural Processes, 119, 116-122. doi:10.1016/j.beproc.2015.07.012.

    Abstract

    The ability to map locations in the surrounding environment is crucial for any navigating animal. Decades of research on mammalian spatial representations suggest that environmental boundaries play a major role in both navigation behavior and hippocampal place coding. Although the capacity for spatial mapping is shared among vertebrates, including birds and fish, it is not yet clear whether such similarities in competence reflect common underlying mechanisms. The present study tests cue specificity in spatial mapping in zebrafish, by probing their use of various visual cues to encode the location of a nearby conspecific. The results suggest that untrained zebrafish, like other vertebrates tested so far, rely primarily on environmental boundaries to compute spatial relationships and, at the same time, use other visible features such as surface markings and freestanding objects as local cues to goal locations. We propose that the pattern of specificity in spontaneous spatial mapping behavior across vertebrates reveals cross-species commonalities in its underlying neural representations.
  • Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.

    Abstract

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

    Additional information

    Data Sheet 1.docx
  • Levelt, W. J. M. (2002). Picture naming and word frequency: Comments on Alario, Costa and Caramazza, Language and Cognitive Processes, 17(3), 299-319. Language and Cognitive Processes, 17(6), 663-671. doi:10.1080/01690960143000443.

    Abstract

    This commentary on Alario et al. (2002) addresses two issues: (1) Different from what the authors suggest, there are no theories of production claiming the phonological word to be the upper bound of advance planning before the onset of articulation; (2) Their picture naming study of word frequency effects on speech onset is inconclusive by lack of a crucial control, viz., of object recognition latency. This is a perennial problem in picture naming studies of word frequency and age of acquisition effects.
  • Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.

    Abstract

    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thus accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
  • Levelt, W. J. M. (2004). Speech, gesture and the origins of language. European Review, 12(4), 543-549. doi:10.1017/S1062798704000468.

    Abstract

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in producing utterances and considered how these mechanisms could have evolved. Wundt assumed that articulatory movements were originally rather arbitrary concomitants of larger, meaningful expressive bodily gestures. The sounds such articulations happened to produce slowly acquired the meaning of the gesture as a whole, ultimately making the gesture superfluous. Over a century later, gestural theories of language origins still abound. I argue that such theories are unlikely and wasteful, given the biological, neurological and genetic evidence.
  • Levelt, W. J. M. (2004). Een huis voor kunst en wetenschap. Boekman: Tijdschrift voor Kunst, Cultuur en Beleid, 16(58/59), 212-215.
  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1989). Hochleistung in Millisekunden: Sprechen und Sprache verstehen. Universitas, 44(511), 56-68.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) attributed any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we attribute semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (1989). A review of Relevance [book review of Dan Sperber & Deirdre Wilson, Relevance: communication and cognition]. Journal of Linguistics, 25, 455-472.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C. (2015). John Joseph Gumperz (1922–2013) [Obituary]. American Anthropologist, 117(1), 212-224. doi:10.1111/aman.12185.
  • Levinson, S. C. (2015). Other-initiated repair in Yélî Dnye: Seeing eye-to-eye in the language of Rossel Island. Open Linguistics, 1(1), 386-410. doi:10.1515/opli-2015-0009.

    Abstract

    Other-initiated repair (OIR) is the fundamental back-up system that ensures the effectiveness of human communication in its primordial niche, conversation. This article describes the interactional and linguistic patterns involved in other-initiated repair in Yélî Dnye, the Papuan language of Rossel Island, Papua New Guinea. The structure of the article is based on the conceptual set of distinctions described in Chapters 1 and 2 of the special issue, and describes the major properties of the Rossel Island system, and the ways in which OIR in this language both conforms to familiar European patterns and deviates from those patterns. Rossel Island specialities include lack of a Wh-word open class repair initiator, and a heavy reliance on visual signals that makes it possible both to initiate repair and confirm it non-verbally. But the overall system conforms to universal expectations.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Levinson, S. C., & Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6: 731. doi:10.3389/fpsyg.2015.00731.

    Abstract

    The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioural data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks, Schegloff and Jefferson (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or ‘project’ as SSJ have it) the end of the current speaker’s turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviourally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch some first model of the mental processes involved for the participant preparing to speak next.
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Lima, C. F., Lavan, N., Evans, S., Agnew, Z., Halpern, A. R., Shanmugalingam, P., Meekings, S., Boebinger, D., Ostarek, M., McGettigan, C., Warren, J. E., & Scott, S. K. (2015). Feel the Noise: Relating individual differences in auditory imagery to the structure and function of sensorimotor systems. Cerebral Cortex, 25, 4638-4650. doi:10.1093/cercor/bhv134.

    Abstract

    Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
  • Liszkowski, U., & Ramenzoni, V. C. (2015). Pointing to nothing? Empty places prime infants' attention to absent objects. Infancy, 20, 433-444. doi:10.1111/infa.12080.

    Abstract

    People routinely point to empty space when referring to absent entities. These points to "nothing" are meaningful because they direct attention to places that stand in for specific entities. Typically, the meaning of places in terms of absent referents is established through preceding discourse and accompanying language. However, it is unknown whether nonlinguistic actions can establish locations as meaningful places, and whether infants have the capacity to represent a place as standing in for an object. In a novel eye-tracking paradigm, 18-month-olds watched objects being placed in specific locations. Then, the objects disappeared and a point directed infants' attention to an emptied place. The point to the empty place primed infants in a subsequent scene (in which the objects appeared at novel locations) to look more to the object belonging to the indicated place than to a distracter referent. The place-object expectations were strong enough to interfere when reversing the place-object associations. Findings show that infants comprehend nonlinguistic reference to absent entities, which reveals an ontogenetically early, nonverbal understanding of places as representations of absent objects.
  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample: one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
  • Love, B. C., Kopeć, Ł., & Guest, O. (2015). Optimism bias in fans and sports reporters. PLoS One, 10(9): e0137685. doi:10.1371/journal.pone.0137685.

    Abstract

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).

    Additional information

    raw data
  • Lozano, R., Vino, A., Lozano, C., Fisher, S. E., & Deriziotis, P. (2015). A de novo FOXP1 variant in a patient with autism, intellectual disability and severe speech and language impairment. European Journal of Human Genetics, 23, 1702-1707. doi:10.1038/ejhg.2015.66.

    Abstract

    FOXP1 (forkhead box protein P1) is a transcription factor involved in the development of several tissues, including the brain. An emerging phenotype of patients with protein-disrupting FOXP1 variants includes global developmental delay, intellectual disability and mild to severe speech/language deficits. We report on a female child with a history of severe hypotonia, autism spectrum disorder and mild intellectual disability with severe speech/language impairment. Clinical exome sequencing identified a heterozygous de novo FOXP1 variant c.1267_1268delGT (p.V423Hfs*37). Functional analyses using cellular models show that the variant disrupts multiple aspects of FOXP1 activity, including subcellular localization and transcriptional repression properties. Our findings highlight the importance of performing functional characterization to help uncover the biological significance of variants identified by genomics approaches, thereby providing insight into pathways underlying complex neurodevelopmental disorders. Moreover, our data support the hypothesis that de novo variants represent significant causal factors in severe sporadic disorders and extend the phenotype seen in individuals with FOXP1 haploinsufficiency.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
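    To make the spatial PCA step concrete, here is a minimal, hypothetical Python sketch (the channel count, number of components, and random data are placeholders, not the study's MEG recordings): each time sample is treated as an observation over sensors, so the principal components are spatial topographies whose time courses can then be compared across conditions.

      # Illustrative sketch only: spatial PCA over a current source density (CSD)
      # distribution, in the spirit of the decomposition described above.
      import numpy as np
      from sklearn.decomposition import PCA

      n_channels, n_times = 151, 600               # assumed sensor and sample counts
      csd = np.random.randn(n_channels, n_times)   # stand-in for a grand-average CSD map

      # Each time sample is an observation over channels, so the principal
      # components are spatial topographies ("spatial factors").
      pca = PCA(n_components=5)
      scores = pca.fit_transform(csd.T)            # time courses of each spatial factor
      spatial_factors = pca.components_            # shape: (5, n_channels)

      # Condition effects (e.g., same- vs. mixed-category) can then be tested on the
      # factor time courses within a chosen latency window.
      print(spatial_factors.shape, scores.shape)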
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Book review]. Magyar Pszichológiai Szemle, 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Nyelv és evolúció] (Magyari Lilla); A historical reader on the brain [Charles G. Gross: Agy, látás, emlékezet. Mesék az idegtudomány történetéből] (Garab Edit Anna); Art or science? [Margitay Tihamér: Az érvelés mestersége. Érvelések elemzése, értékelése és kritikája] (Zemplén Gábor); Are we really rational? [Herbert Simon: Az ésszerűség szerepe az emberi életben] (Kardos Péter); Sex differences in cognition [Doreen Kimura: Női agy, férfi agy] (Hahn Noémi).
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A., & Van Staden, M. (2015). Can nomenclature for the body be explained by embodiment theories? Topics in Cognitive Science, 7(4), 570-594. doi:10.1111/tops.12159.

    Abstract

    According to widespread opinion, the meaning of body part terms is determined by salient discontinuities in the visual image, such that hands, feet, arms, and legs are natural parts. If so, one would expect these parts to have distinct names which correspond in meaning across languages. To test this proposal, we compared three unrelated languages—Dutch, Japanese, and Indonesian—and found that both the naming systems and the boundaries of even basic body part terms display variation across languages. Bottom-up cues alone cannot explain natural language semantic systems; there simply is not a one-to-one mapping of the body semantic system to the body structural description. Although body parts are flexibly construed across languages, body part semantics are, nevertheless, constrained by non-linguistic representations in the body structural description, suggesting these are necessary, although not sufficient, in accounting for aspects of the body lexicon.
  • Majid, A. (2015). Cultural factors shape olfactory language. Trends in Cognitive Sciences, 19(11), 629-630. doi:10.1016/j.tics.2015.06.009.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies crossculturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language in Society, 33(3), 429-433.
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Majid, A., Jordan, F., & Dunn, M. (Eds.). (2015). Semantic systems in closely related languages [Special Issue]. Language Sciences, 49.
  • Majid, A., Jordan, F., & Dunn, M. (2015). Semantic systems in closely related languages. Language Sciences, 49, 1-18. doi:10.1016/j.langsci.2014.11.002.

    Abstract

    In each semantic domain studied to date, there is considerable variation in how meanings are expressed across languages. But are some semantic domains more likely to show variation than others? Is the domain of space more or less variable in its expression than other semantic domains, such as containers, body parts, or colours? According to many linguists, the meanings expressed in grammaticised expressions, such as (spatial) adpositions, are more likely to be similar across languages than meanings expressed in open class lexical items. On the other hand, some psychologists predict there ought to be more variation across languages in the meanings of adpositions, than in the meanings of nouns. This is because relational categories, such as those expressed as adpositions, are said to be constructed by language; whereas object categories expressed as nouns are predicted to be “given by the world”. We tested these hypotheses by comparing the semantic systems of closely related languages. Previous cross-linguistic studies emphasise the importance of studying diverse languages, but we argue that a focus on closely related languages is advantageous because domains can be compared in a culturally- and historically-informed manner. Thus we collected data from 12 Germanic languages. Naming data were collected from at least 20 speakers of each language for containers, body-parts, colours, and spatial relations. We found the semantic domains of colour and body-parts were the most similar across languages. Containers showed some variation, but spatial relations expressed in adpositions showed the most variation. The results are inconsistent with the view expressed by most linguists. Instead, we find meanings expressed in grammaticised expressions are more variable than meanings expressed in open class lexical items.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time show that the possibility of semantically driven analysis can be considered as a serious alternative.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Manrique, E., & Enfield, N. J. (2015). Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology, 6: 1326. doi:10.3389/fpsyg.2015.01326.

    Abstract

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look," which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known on-record practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

    Additional information

    Manrique_Enfield_2015_supp.pdf
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, J.-R., Kösem, A., & van Wassenhove, V. (2015). Hysteresis in Audiovisual Synchrony Perception. PLoS One, 10(3): e0119365. doi:10.1371/journal.pone.0119365.

    Abstract

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic condition, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective condition, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications regarding the comparative study of hysteresis and adaptation phenomena.
  • Matić, D., & Odé, C. (2015). On prosodic signalling of focus in Tundra Yukaghir. Acta Linguistica Petropolitana, 11(2), 627-644.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • Meekings, S., Boebinger, D., Evans, S., Lima, C. F., Chen, S., Ostarek, M., & Scott, S. K. (2015). Do we know what we’re saying? The roles of attention and sensory information during speech production. Psychological Science, 26(12), 1975-1977. doi:10.1177/0956797614563766.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say "quarter to three") requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target by one or two hours and by five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-minute difference between prime and target, but the difference in hours had no effect. Moreover, the distance in minutes had an effect only for half past the hour and the coming hour, but not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
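    The minute/hour operations at issue can be illustrated with a small, purely hypothetical sketch (English glosses rather than the Dutch expressions actually elicited; the mapping below is a simplification, not the authors' materials):

      # Toy mapping from a digital time to a relative time expression, to illustrate
      # the kind of conceptual transformation described above. Illustrative only.
      def relative_time(hour, minute):
          nxt = hour % 12 + 1                      # the "coming hour"
          if minute == 0:
              return f"{hour} o'clock"
          if minute == 15:
              return f"quarter past {hour}"
          if minute == 30:
              return f"half past {hour}"           # Dutch instead anchors on the coming hour ('half drie')
          if minute == 45:
              return f"quarter to {nxt}"
          if minute < 30:
              return f"{minute} past {hour}"
          return f"{60 - minute} to {nxt}"

      print(relative_time(2, 45))   # -> 'quarter to 3'
      print(relative_time(2, 30))   # -> 'half past 2'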
  • Meira, S., & Drude, S. (2015). A summary reconstruction of Proto-Maweti-Guarani segmental phonology. Boletim do Museu Paraense Emílio Goeldi: Ciências Humanas, 10, 275-296. doi:10.1590/1981-81222015000200005.

    Abstract

    This paper presents a succinct reconstruction of the segmental phonology of Proto-Maweti-Guarani, the hypothetical protolanguage from which modern Mawe, Aweti and the Tupi-Guarani branches of the Tupi linguistic family have evolved. Based on about 300 cognate sets from the authors' field data (for Mawe and Aweti) and from Mello's reconstruction (2000) for Proto-Tupi-Guarani (with additional information from other works; and with a few changes concerning certain doubtful features, such as the status of stem-final lenis consonants ∗r and ∗β, and the distinction of ∗c and ∗č), the consonants and vowels of Proto-Maweti-Guarani were reconstructed with the help of the traditional historical-comparative method. The development of the reconstructed segments is then traced from the protolanguage to each of the modern branches. A comparison with other claims made about Proto-Maweti-Guarani is given in the conclusion.
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    Argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. Discusses how the position of the accent is determined by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable, and how the pairs of syllables that interact to predict accent placement are assigned in different iambic feet.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
  • Meulenbroek, O., Petersson, K. M., Voermans, N., Weber, B., & Fernández, G. (2004). Age differences in neural correlates of route encoding and route recognition. Neuroimage, 22, 1503-1514. doi:10.1016/j.neuroimage.2004.04.007.

    Abstract

    Spatial memory deficits are core features of aging-related changes in cognitive abilities. The neural correlates of these deficits are largely unknown. In the present study, we investigated the neural underpinnings of age-related differences in spatial memory by functional MRI using a navigational memory task with route encoding and route recognition conditions. We investigated 20 healthy young (18-29 years old) and 20 healthy old adults (53-78 years old) in a random effects analysis. Old subjects showed slightly poorer performance than young subjects. Compared to the control condition, route encoding and route recognition showed activation of the dorsal and ventral visual processing streams and the frontal eye fields in both groups of subjects. Compared to old adults, young subjects showed stronger activations during route encoding in the dorsal and the ventral visual processing stream (supramarginal gyrus and posterior fusiform/parahippocampal areas). In addition, young subjects showed weaker anterior parahippocampal activity during route recognition compared to the old group. In contrast, old compared to young subjects showed less suppressed activity in the left perisylvian region and the anterior cingulate cortex during route encoding. Our findings suggest that age-related navigational memory deficits might be caused by less effective route encoding based on reduced posterior fusiform/parahippocampal and parietal functionality combined with diminished inhibition of perisylvian and anterior cingulate cortices correlated with less effective suppression of task-irrelevant information. In contrast, age differences in neural correlates of route recognition seem to be rather subtle. Old subjects might show a diminished familiarity signal during route recognition in the anterior parahippocampal region.
  • Meyer, A. S., Van der Meulen, F. F., & Brooks, A. (2004). Eye movements during speech planning: Talking about present and remembered objects. Visual Cognition, 11, 553-576. doi:10.1080/13506280344000248.

    Abstract

    Earlier work has shown that speakers naming several objects usually look at each of them before naming them (e.g., Meyer, Sleiderink, & Levelt, 1998). In the present study, participants saw pictures and described them in utterances such as "The chair next to the cross is brown", where the colour of the first object was mentioned after another object had been mentioned. In Experiment 1, we examined whether the speakers would look at the first object (the chair) only once, before naming the object, or twice (before naming the object and before naming its colour). In Experiment 2, we examined whether speakers about to name the colour of the object would look at the object region again when the colour or the entire object had been removed while they were looking elsewhere. We found that speakers usually looked at the target object again before naming its colour, even when the colour was not displayed any more. Speakers were much less likely to fixate upon the target region when the object had been removed from view. We propose that the object contours may serve as a memory cue supporting the retrieval of the associated colour information. The results show that a speaker's eye movements in a picture description task, far from being random, depend on the available visual information and the content and structure of the planned utterance.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-69. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Mielcarek, M., Toczek, M., Smeets, C. J. L. M., Franklin, S. A., Bondulich, M. K., Jolinon, N., Muller, T., Ahmed, M., Dick, J. R. T., Piotrowska, I., Greensmith, L., Smolenski, R. T., & Bates, G. P. (2015). HDAC4-Myogenin Axis As an Important Marker of HD-Related Skeletal Muscle Atrophy. PLoS Genetics, 11(3): e1005021. doi:10.1371/journal.pgen.1005021.

    Abstract

    Skeletal muscle remodelling and contractile dysfunction occur through both acute and chronic disease processes. These include the accumulation of insoluble aggregates of misfolded amyloid proteins that is a pathological feature of Huntington's disease (HD). While HD has been described primarily as a neurological disease, HD patients exhibit pronounced skeletal muscle atrophy. Given that huntingtin is a ubiquitously expressed protein, skeletal muscle fibres may be at risk of a cell autonomous HD-related dysfunction. However the mechanism leading to skeletal muscle abnormalities in the clinical and pre-clinical HD settings remains unknown. To unravel this mechanism, we employed the R6/2 transgenic and HdhQ150 knock-in mouse models of HD. We found that symptomatic animals developed a progressive impairment of the contractile characteristics of the hind limb muscles tibialis anterior (TA) and extensor digitorum longus (EDL), accompanied by a significant loss of motor units in the EDL. In symptomatic animals, these pronounced functional changes were accompanied by an aberrant deregulation of contractile protein transcripts and their upstream transcriptional regulators. In addition, HD mouse models develop a significant reduction in muscle force, possibly as a result of a deterioration in energy metabolism and decreased oxidation that is accompanied by the re-expression of the HDAC4-DACH2-myogenin axis. These results show that muscle dysfunction is a key pathological feature of HD.
  • Monaghan, P., Mattock, K., Davies, R., & Smith, A. C. (2015). Gavagai is as gavagai does: Learning nouns and verbs from cross-situational statistics. Cognitive Science, 39, 1099-1112. doi:10.1111/cogs.12186.

    Abstract

    Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word-object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word-learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word level ambiguity. However, nouns were learned more accurately than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.
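    The logic of cross-situational statistics can be illustrated with a toy sketch (the words, referents, and scenes below are invented for illustration, not the study's stimuli): across individually ambiguous scenes, the correct word-referent pairing is the one that co-occurs most consistently.

      # Toy cross-situational learner: count word-referent co-occurrences over scenes
      # and pick the most consistent pairing for each word. Illustrative only.
      from collections import Counter
      from itertools import product

      scenes = [  # (words heard, referents visible) -- hypothetical
          ({"dax", "blick"}, {"DOG", "BALL"}),
          ({"dax", "wug"},   {"DOG", "CUP"}),
          ({"blick", "wug"}, {"BALL", "CUP"}),
      ]

      cooc = Counter()
      for words, referents in scenes:
          for w, r in product(words, referents):
              cooc[(w, r)] += 1

      for word in {"dax", "blick", "wug"}:
          best = max((r for w, r in cooc if w == word), key=lambda r: cooc[(word, r)])
          print(word, "->", best, cooc[(word, best)])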
  • Moreno, I., De Vega, M., León, I., Bastiaansen, M. C. M., Lewis, A. G., & Magyari, L. (2015). Brain dynamics in the comprehension of action-related language. A time-frequency analysis of mu rhythms. Neuroimage, 109, 50-62. doi:10.1016/j.neuroimage.2015.01.018.

    Abstract

    EEG mu rhythms (8-13 Hz) recorded at fronto-central electrodes are generally considered as markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another's action or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language (“You will cut the strawberry cake”), abstract language (“You will doubt the patient's argument”), and perceptive language (“You will notice the bright day”). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract) or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence beyond the lexical processing of the action verb. Source reconstruction localized mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online based on semantic integration across multiple words in the sentence.
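    As a rough illustration of how mu-band power can be tracked over time (the sampling rate, trial data, and baseline window below are assumptions, not the authors' analysis pipeline):

      # Sketch of a time-frequency power estimate in the mu band (8-13 Hz),
      # expressed relative to an early "baseline" window. Illustrative only.
      import numpy as np
      from scipy.signal import spectrogram

      fs = 500                                    # assumed sampling rate (Hz)
      trial = np.random.randn(4 * fs)             # stand-in for one fronto-central EEG trial

      freqs, times, power = spectrogram(trial, fs=fs, nperseg=fs // 2, noverlap=fs // 4)
      mu_band = (freqs >= 8) & (freqs <= 13)
      mu_power = power[mu_band].mean(axis=0)      # mu-band power over time

      baseline = mu_power[times < 1.0].mean()     # assumed baseline window
      mu_change = 10 * np.log10(mu_power / baseline)   # suppression (<0) / enhancement (>0)
      print(mu_change.round(2))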
  • Moscoso del Prado Martín, F., Kostic, A., & Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1), 1-18. doi:10.1016/j.cognition.2003.10.015.

    Abstract

    In this study we introduce an information-theoretical formulation of the emergence of type- and token-based effects in morphological processing. We describe a probabilistic measure of the informational complexity of a word, its information residual, which encompasses the combined influences of the amount of information contained by the target word and the amount of information carried by its nested morphological paradigms. By means of re-analyses of previously published data on Dutch words we show that the information residual outperforms the combination of traditional token- and type-based counts in predicting response latencies in visual lexical decision, and at the same time provides a parsimonious account of inflectional, derivational, and compounding processes.
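    As a simplified illustration of the token-based ingredient of such a measure (this is not the authors' information residual, which additionally folds in information carried by the word's morphological paradigms), the amount of information of a word can be estimated from its relative frequency as I(w) = -log2 P(w):

      # Word information from relative frequency; the corpus counts are hypothetical.
      import math

      corpus_counts = {"hand": 5000, "handje": 120, "handen": 900}   # invented counts
      total = sum(corpus_counts.values())

      def information(word):
          p = corpus_counts[word] / total          # relative frequency as probability estimate
          return -math.log2(p)                     # amount of information in bits

      for w in corpus_counts:
          print(w, round(information(w), 2), "bits")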
  • Moscoso del Prado Martín, F., Ernestus, M., & Baayen, R. H. (2004). Do type and token effects reflect different mechanisms? Connectionist modeling of Dutch past-tense formation and final devoicing. Brain and Language, 90(1-3), 287-298. doi:10.1016/j.bandl.2003.12.002.

    Abstract

    In this paper, we show that both token and type-based effects in lexical processing can result from a single, token-based, system, and therefore, do not necessarily reflect different levels of processing. We report three Simple Recurrent Networks modeling Dutch past-tense formation. These networks show token-based frequency effects and type-based analogical effects closely matching the behavior of human participants when producing past-tense forms for both existing verbs and pseudo-verbs. The third network covers the full vocabulary of Dutch, without imposing predefined linguistic structure on the input or output words.
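    For readers unfamiliar with the architecture, the following minimal Elman-style SRN forward pass sketches the model class used here; the layer sizes, encoding, and random weights are placeholders, and training is omitted:

      # Minimal simple recurrent network (SRN) forward pass. Illustrative only;
      # not the networks reported in the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_hid, n_out = 20, 30, 20               # assumed feature vector sizes

      W_ih = rng.normal(0, 0.1, (n_hid, n_in))      # input -> hidden
      W_hh = rng.normal(0, 0.1, (n_hid, n_hid))     # context (previous hidden) -> hidden
      W_ho = rng.normal(0, 0.1, (n_out, n_hid))     # hidden -> output

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def srn_forward(inputs):
          """Run a sequence of input vectors through the SRN, one step at a time."""
          h = np.zeros(n_hid)                       # context units start at zero
          outputs = []
          for x in inputs:
              h = sigmoid(W_ih @ x + W_hh @ h)      # hidden state depends on input + context
              outputs.append(sigmoid(W_ho @ h))     # e.g., predicted output segment
          return outputs

      stem = [rng.integers(0, 2, n_in).astype(float) for _ in range(4)]  # toy encoded stem
      print(len(srn_forward(stem)), "output vectors")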
  • Moscoso del Prado Martín, F., Bertram, R., Haikio, T., Schreuder, R., & Baayen, R. H. (2004). Morphological family size in a morphologically rich language: The case of Finnish compared to Dutch and Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 30(6), 1271-1278. doi:10.1037/0278-7393.30.6.1271.

    Abstract

    Finnish has a very productive morphology in which a stem can give rise to several thousand words. This study presents a visual lexical decision experiment addressing the processing consequences of the huge productivity of Finnish morphology. The authors observed that, in Finnish, words with larger morphological families elicited shorter response latencies. However, in contrast to Dutch and Hebrew, it is not the complete morphological family of a complex Finnish word that codetermines response latencies but only the subset of words directly derived from the complex word itself. Comparisons with parallel experiments using translation equivalents in Dutch and Hebrew showed substantial cross-language predictivity of family size between Finnish and Dutch but not between Finnish and Hebrew, reflecting the different ways in which the Hebrew and Finnish morphological systems contribute to the semantic organization of concepts in the mental lexicon.
  • Mulder, K., Dijkstra, T., & Baayen, R. H. (2015). Cross-language activation of morphological relatives in cognates: The role of orthographic overlap and task-related processing. Frontiers in Human Neuroscience, 9: 16. doi:10.3389/fnhum.2015.00016.

    Abstract

    We considered the role of orthography and task-related processing mechanisms in the activation of morphologically related complex words during bilingual word processing. So far, it has only been shown that such morphologically related words (i.e., morphological family members) are activated through the semantic and morphological overlap they share with the target word. In this study, we investigated family size effects in Dutch-English identical cognates (e.g., tent in both languages), non-identical cognates (e.g., pil and pill, in Dutch and English, respectively), and non-cognates (e.g., chicken in English). Because of their cross-linguistic overlap in orthography, reading a cognate can result in activation of family members in both languages. Cognates are therefore well-suited for studying mechanisms underlying bilingual activation of morphologically complex words. We investigated family size effects in an English lexical decision task and a Dutch-English language decision task, both performed by Dutch-English bilinguals. English lexical decision showed a facilitatory effect of English and Dutch family size on the processing of English-Dutch cognates relative to English non-cognates. These family size effects were not dependent on cognate type. In contrast, for language decision, in which a bilingual context is created, Dutch and English family size effects were inhibitory. Here, the combined family size of both languages turned out to better predict reaction time than the separate family size in Dutch or English. Moreover, the combined family size interacted with cognate type: the response to identical cognates was slowed by morphological family members in both languages. We conclude (1) that family size effects are sensitive to the task performed on the lexical items, and (2) that they depend on both semantic and formal aspects of bilingual word processing. We discuss various mechanisms that can explain the observed family size effects in a spreading activation framework.
  • Narasimhan, B., Sproat, R., & Kiraz, G. (2004). Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4), 319-333. doi:10.1023/B:IJST.0000037075.71599.62.

    Abstract

    We describe the phenomenon of schwa-deletion in Hindi and how it is handled in the pronunciation component of a multilingual concatenative text-to-speech system. Each of the consonants in written Hindi is associated with an “inherent” schwa vowel which is not represented in the orthography. For instance, the Hindi word pronounced as [namak] (’salt’) is represented in the orthography using the consonantal characters for [n], [m], and [k]. Two main factors complicate the issue of schwa pronunciation in Hindi. First, not every schwa following a consonant is pronounced within the word. Second, in multimorphemic words, the presence of a morpheme boundary can block schwa deletion where it might otherwise occur. We propose a model for schwa-deletion which combines a general purpose schwa-deletion rule proposed in the linguistics literature (Ohala, 1983), with additional morphological analysis necessitated by the high frequency of compounds in our database. The system is implemented in the framework of finite-state transducer technology.
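    A toy version of the deletion context can be sketched as follows (a simplified rule over an ad-hoc transcription, not the finite-state implementation described in the paper): a schwa is deleted when flanked by single consonants in a V C _ C V context, and a morpheme boundary in that context blocks deletion.

      # Toy V C _ C V schwa-deletion rule; transcription, rule details, and examples
      # are simplified assumptions for illustration only.
      VOWELS = set("aeiouAEIOU@")   # '@' stands for schwa; '+' marks a morpheme boundary

      def delete_schwas(segments):
          """Delete '@' in the context V C _ C V, unless a '+' boundary intervenes."""
          out = list(segments)
          for i, seg in enumerate(segments):
              if seg != "@":
                  continue
              left = segments[max(0, i - 2):i]
              right = segments[i + 1:i + 3]
              context_ok = (
                  len(left) == 2 and left[0] in VOWELS and left[1] not in VOWELS and
                  len(right) == 2 and right[0] not in VOWELS and right[1] in VOWELS
              )
              if context_ok and "+" not in left + right:
                  out[i] = ""
          return "".join(out)

      print(delete_schwas(list("nam@ki")))    # -> 'namki': medial schwa deleted in V C _ C V
      print(delete_schwas(list("nam@k+i")))   # -> 'nam@k+i': the boundary symbol blocks deletion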
  • Neger, T. M., Janse, E., & Rietveld, T. (2015). Correlates of older adults' discrimination of acoustic properties in speech. Speech, Language and Hearing, 18(2), 102-115. doi:10.1179/2050572814Y.0000000055.

    Abstract

    Auditory discrimination of speech stimuli is an essential tool in speech and language therapy, e.g., in dysarthria rehabilitation. It is unclear, however, which listener characteristics are associated with the ability to perceive differences between one's own utterance and target speech. Knowledge about such associations may help to support patients participating in speech and language therapy programs that involve auditory discrimination tasks.
    Discrimination performance was evaluated in 96 healthy participants over 60 years of age as individuals with dysarthria are typically in this age group. Participants compared meaningful words and sentences on the dimensions of loudness, pitch and speech rate. Auditory abilities were assessed using pure-tone audiometry, speech audiometry and speech understanding in noise. Cognitive measures included auditory short-term memory, working memory and processing speed. Linguistic functioning was assessed by means of vocabulary knowledge and language proficiency.
    Exploratory factor analyses showed that discrimination performance was primarily associated with cognitive and linguistic skills, rather than auditory abilities. Accordingly, older adults’ discrimination performance was mainly predicted by cognitive and linguistic skills. Discrimination accuracy was higher in older adults with better speech understanding in noise, faster processing speed, and better language proficiency, but accuracy decreased with age. This raises the question whether these associations generalize to clinical populations and, if so, whether patients with better cognitive or linguistic skills may benefit more from discrimination-based therapeutic approaches than patients with poorer cognitive or linguistic abilities.
