Publications

  • Köster, O., Hess, M. M., Schiller, N. O., & Künzel, H. J. (1998). The correlation between auditory speech sensitivity and speaker recognition ability. Forensic Linguistics: The International Journal of Speech, Language and the Law, 5, 22-32.

    Abstract

    In various applications of forensic phonetics the question arises as to how far aural-perceptual speaker recognition performance is reliable. Therefore, it is necessary to examine the relationship between speaker recognition results and human perception/production abilities like musicality or speech sensitivity. In this study, performance in a speaker recognition experiment and a speech sensitivity test are correlated. The results show a moderately significant positive correlation between the two tasks. Generally, performance in the speaker recognition task was better than in the speech sensitivity test. Professionals in speech and singing yielded a more homogeneous correlation than non-experts. Training in speech as well as choir-singing seems to have a positive effect on performance in speaker recognition. It may be concluded, firstly, that in cases where the reliability of voice line-up results or the credibility of a testimony have to be considered, the speech sensitivity test could be a useful indicator. Secondly, the speech sensitivity test might be integrated into the canon of possible procedures for the accreditation of forensic phoneticians. Both tests may also be used in combination.
  • Krämer, I. (1998). Children's interpretations of indefinite object noun phrases. Linguistics in the Netherlands, 1998, 163-174. doi:10.1075/avt.15.15kra.
  • Krott, A., Hagoort, P., & Baayen, R. H. (2004). Sublexical units and supralexical combinatories in the processing of interfixed Dutch compounds. Language and Cognitive Processes, 19(3), 453-471. doi:10.1080/769813936.

    Abstract

    This study addresses the supralexical inferential processes underlying well-formedness judgements and latencies for a specific sublexical unit that appears in Dutch compounds, the interfix. Production studies have shown that the selection of interfixes in novel Dutch compounds and the speed of this selection are primarily determined by the distribution of interfixes in existing compounds that share the left constituent with the target compound, i.e., the “left constituent family”. In this paper, we consider the question of whether constituent families also affect well-formedness decisions for novel as well as existing Dutch compounds in comprehension. We visually presented compounds containing interfixes that were either in line with the bias of the left constituent family or not. In the case of existing compounds, we also presented variants with replaced interfixes. As in production, the bias of the left constituent family emerged as a crucial predictor for both acceptance rates and response latencies. This result supports the hypothesis that, as in production, constituent families are (co-)activated in comprehension. We argue that this co-activation is part of a supralexical inferential process, and we discuss how our data might be interpreted within sublexical and supralexical theories of morphological processing.
  • Krott, A., Libben, G., Jarema, G., Dressler, W., Schreuder, R., & Baayen, R. H. (2004). Probability in the grammar of German and Dutch: Interfixation in triconstituent compounds. Language and Speech, 47(1), 83-106.

    Abstract

    This study addresses the possibility that interfixes in multiconstituent nominal compounds in German and Dutch are functional as markers of immediate constituent structure. We report a lexical statistical survey of interfixation in the lexicons of German and Dutch which shows that all interfixes of German and one interfix of Dutch are significantly more likely to appear at the major constituent boundary than expected under chance conditions. A series of experiments provides evidence that speakers of German and Dutch are sensitive to the probabilistic cues to constituent structure provided by the interfixes. Thus, our data provide evidence that probability is part and parcel of grammatical competence.
  • Kunert, R., & Slevc, L. R. (2015). A commentary on: “Neural overlap in processing music and speech”. Frontiers in Human Neuroscience, 9: 330. doi:10.3389/fnhum.2015.00330.
  • Kunert, R., Willems, R. M., Casasanto, D., Patel, A. D., & Hagoort, P. (2015). Music and language syntax interact in Broca’s Area: An fMRI study. PLoS One, 10(11): e0141069. doi:10.1371/journal.pone.0141069.

    Abstract

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
  • Kurt, S., Fisher, S. E., & Ehret, G. (2012). Foxp2 mutations impair auditory-motor-association learning. PLoS One, 7(3), e33130. doi:10.1371/journal.pone.0033130.

    Abstract

    Heterozygous mutations of the human FOXP2 transcription factor gene cause the best-described examples of monogenic speech and language disorders. Acquisition of proficient spoken language involves auditory-guided vocal learning, a specialized form of sensory-motor association learning. The impact of etiological Foxp2 mutations on learning of auditory-motor associations in mammals has not been determined yet. Here, we directly assess this type of learning using a newly developed conditioned avoidance paradigm in a shuttle-box for mice. We show striking deficits in mice heterozygous for either of two different Foxp2 mutations previously implicated in human speech disorders. Both mutations cause delays in acquiring new motor skills. The magnitude of impairments in association learning, however, depends on the nature of the mutation. Mice with a missense mutation in the DNA-binding domain are able to learn, but at a much slower rate than wild type animals, while mice carrying an early nonsense mutation learn very little. These results are consistent with expression of Foxp2 in distributed circuits of the cortex, striatum and cerebellum that are known to play key roles in acquisition of motor skills and sensory-motor association learning, and suggest differing in vivo effects for distinct variants of the Foxp2 protein. Given the importance of such networks for the acquisition of human spoken language, and the fact that similar mutations in human FOXP2 cause problems with speech development, this work opens up a new perspective on the use of mouse models for understanding pathways underlying speech and language disorders.
  • Ladd, D. R., Roberts, S. G., & Dediu, D. (2015). Correlational studies in typological and historical linguistics. Annual Review of Linguistics, 1, 221-241. doi:10.1146/annurev-linguist-030514-124819.

    Abstract

    We review a number of recent studies that have identified either correlations between different linguistic features (e.g., implicational universals) or correlations between linguistic features and nonlinguistic properties of speakers or their environment (e.g., effects of geography on vocabulary). We compare large-scale quantitative studies with more traditional theoretical and historical linguistic research and identify divergent assumptions and methods that have led linguists to be skeptical of correlational work. We also attempt to demystify statistical techniques and point out the importance of informed critiques of the validity of statistical approaches. Finally, we describe various methods used in recent correlational studies to deal with the fact that, because of contact and historical relatedness, individual languages in a sample rarely represent independent data points, and we show how these methods may allow us to explore linguistic prehistory to a greater time depth than is possible with orthodox comparative reconstruction.
  • Lai, V. T., Hagoort, P., & Casasanto, D. (2012). Affective primacy vs. cognitive primacy: Dissolving the debate. Frontiers in Psychology, 3, 243. doi:10.3389/fpsyg.2012.00243.

    Abstract

    When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information about its ontological category (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. Here we show that the relative speed with which affective and non-affective information gets activated by pictures and words depends upon the contexts in which stimuli are processed. Results illustrate that the question of whether affective information has processing priority over ontological information (or vice versa) is ill posed. Rather than seeking to resolve the debate over Cognitive vs. Affective Primacy in favor of one hypothesis or the other, a more productive goal may be to determine the factors that cause affective information to have processing priority in some circumstances and ontological information in others. Our findings support a view of the mind according to which words and pictures activate different neurocognitive representations every time they are processed, the specifics of which are co-determined by the stimuli themselves and the contexts in which they occur.
  • Lai, V. T., & Curran, T. (2015). Erratum to “ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors” [Brain Lang. 127 (3) (2013) 484–496]. Brain and Language, 149, 148-150. doi:10.1016/j.bandl.2014.11.001.
  • Lai, V. T., van Dam, W., Conant, L. L., Binder, J. R., & Desai, R. H. (2015). Familiarity differentially affects right hemisphere contributions to processing metaphors and literals. Frontiers in Human Neuroscience, 9: 44. doi:10.3389/fnhum.2015.00044.

    Abstract

    The role of the two hemispheres in processing metaphoric language is controversial. While some studies have reported a special role of the right hemisphere (RH) in processing metaphors, others indicate no difference in laterality relative to literal language. Some studies have found a role of the RH for novel/unfamiliar metaphors, but not conventional/familiar metaphors. It is not clear, however, whether the role of the RH is specific to metaphor novelty, or whether it reflects processing, reinterpretation or reanalysis of novel/unfamiliar language in general. Here we used functional magnetic resonance imaging (fMRI) to examine the effects of familiarity in both metaphoric and non-metaphoric sentences. A left-lateralized network containing the middle and inferior frontal gyri, posterior temporal regions in the left hemisphere (LH), and inferior frontal regions in the RH was engaged across both metaphoric and non-metaphoric sentences; engagement of this network decreased as familiarity decreased. No region was engaged selectively for greater metaphoric unfamiliarity. An analysis of laterality, however, showed that the contribution of the RH relative to that of the LH does increase in a metaphor-specific manner as familiarity decreases. These results show that RH regions, taken by themselves, including commonly reported regions such as the right inferior frontal gyrus (IFG), are responsive to increased cognitive demands of processing unfamiliar stimuli, rather than being metaphor-selective. The division of labor between the two hemispheres, however, does shift towards the right for metaphoric processing. The shift results not because the RH contributes more to metaphoric processing, but because, relative to its contribution for processing literals, the LH contributes less.
  • Lai, V. T., Willems, R. M., & Hagoort, P. (2015). Feel between the Lines: Implied emotion from combinatorial semantics. Journal of Cognitive Neuroscience, 27(8), 1528-1541. doi:10.1162/jocn_a_00798.

    Abstract

    This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.
  • Lam, K. J. Y., Dijkstra, T., & Rueschemeyer, S.-A. (2015). Feature activation during word recognition: action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6: 659. doi:10.3389/fpsyg.2015.00659.

    Abstract

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language.
  • Lammertink, I., Casillas, M., Benders, T., Post, B., & Fikkert, P. (2015). Dutch and English toddlers' use of linguistic cues in predicting upcoming turn transitions. Frontiers in Psychology, 6: 495. doi:10.3389/fpsyg.2015.00495.
  • De Lange, F. P., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Werf, S. P., Van der Meer, J. W. M., & Toni, I. (2004). Neural correlates of the chronic fatigue syndrome: An fMRI study. Brain, 127(9), 1948-1957. doi:10.1093/brain/awh225.

    Abstract

    Chronic fatigue syndrome (CFS) is characterized by a debilitating fatigue of unknown aetiology. Patients who suffer from CFS report a variety of physical complaints as well as neuropsychological complaints. Therefore, it is conceivable that the CNS plays a role in the pathophysiology of CFS. The purpose of this study was to investigate neural correlates of CFS, and specifically whether there exists a linkage between disturbances in the motor system and CFS. We measured behavioural performance and cerebral activity using rapid event-related functional MRI in 16 CFS patients and 16 matched healthy controls while they were engaged in a motor imagery task and a control visual imagery task. CFS patients were considerably slower on performance of both tasks, but the increase in reaction time with increasing task load was similar between the groups. Both groups used largely overlapping neural resources. However, during the motor imagery task, CFS patients evoked stronger responses in visually related structures. Furthermore, there was a marked between-groups difference during erroneous performance. In both groups, dorsal anterior cingulate cortex was specifically activated during error trials. Conversely, ventral anterior cingulate cortex was active when healthy controls made an error, but remained inactive when CFS patients made an error. Our results support the notion that CFS may be associated with dysfunctional motor planning. Furthermore, the between-groups differences observed during erroneous performance point to motivational disturbances as a crucial component of CFS.
  • Lartseva, A., Dijkstra, T., & Buitelaar, J. (2015). Emotional language processing in Autism Spectrum Disorders: A systematic review. Frontiers in Human Neuroscience, 8: 991. doi:10.3389/fnhum.2014.00991.

    Abstract

    In his first description of Autism Spectrum Disorders (ASD), Kanner emphasized emotional impairments by characterizing children with ASD as indifferent to other people, self-absorbed, emotionally cold, distanced, and retracted. Thereafter, emotional impairments became regarded as part of the social impairments of ASD, and research mostly focused on understanding how individuals with ASD recognize visual expressions of emotions from faces and body postures. However, it still remains unclear how emotions are processed outside of the visual domain. This systematic review aims to fill this gap by focusing on impairments of emotional language processing in ASD.
    We systematically searched PubMed for papers published between 1990 and 2013 using standardized search terms. Studies show that people with ASD are able to correctly classify emotional language stimuli as emotionally positive or negative. However, processing of emotional language stimuli in ASD is associated with atypical patterns of attention and memory performance, as well as abnormal physiological and neural activity. Particularly, younger children with ASD have difficulties in acquiring and developing emotional concepts, and avoid using these in discourse. These emotional language impairments were not consistently associated with age, IQ, or level of development of language skills.
    We discuss how emotional language impairments fit with existing cognitive theories of ASD, such as central coherence, executive dysfunction, and weak Theory of Mind. We conclude that emotional impairments in ASD may be broader than just a mere consequence of social impairments, and should receive more attention in future research.
  • Lattenkamp, E. Z., Hörpel, S. G., Mengede, J., & Firzlaff, U. (2021). A researcher’s guide to the comparison of vocal production learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200237. doi:10.1098/rstb.2020.0237.

    Abstract

    Vocal production learning (VPL) is the capacity to learn to produce new vocalizations, which is a rare ability in the animal kingdom and thus far has only been identified in a handful of mammalian taxa and three groups of birds. Over the last few decades, approaches to the demonstration of VPL have varied among taxa, sound production systems and functions. These discrepancies strongly impede direct comparisons between studies. In the light of the growing number of experimental studies reporting VPL, the need for comparability is becoming more and more pressing. The comparative evaluation of VPL across studies would be facilitated by unified and generalized reporting standards, which would allow a better positioning of species on any proposed VPL continuum. In this paper, we specifically highlight five factors influencing the comparability of VPL assessments: (i) comparison to an acoustic baseline, (ii) comprehensive reporting of acoustic parameters, (iii) extended reporting of training conditions and durations, (iv) investigating VPL function via behavioural, perception-based experiments and (v) validation of findings on a neuronal level. These guidelines emphasize the importance of comparability between studies in order to unify the field of vocal learning.
  • Lattenkamp, E. Z., Linnenschmidt, M., Mardus, E., Vernes, S. C., Wiegrebe, L., & Schutte, M. (2021). The vocal development of the pale spear-nosed bat is dependent on auditory feedback. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200253. doi:10.1098/rstb.2020.0253.

    Abstract

    Human vocal development and speech learning require acoustic feedback, and humans who are born deaf do not acquire a normal adult speech capacity. Most other mammals display a largely innate vocal repertoire. Like humans, bats are thought to be one of the few taxa capable of vocal learning as they can acquire new vocalizations by modifying vocalizations according to auditory experiences. We investigated the effect of acoustic deafening on the vocal development of the pale spear-nosed bat. Three juvenile pale spear-nosed bats were deafened, and their vocal development was studied in comparison with an age-matched, hearing control group. The results show that during development the deafened bats increased their vocal activity, and their vocalizations were substantially altered, being much shorter, higher in pitch, and more aperiodic than the vocalizations of the control animals. The pale spear-nosed bat relies on auditory feedback for vocal development and, in the absence of auditory input, species-atypical vocalizations are acquired. This work serves as a basis for further research using the pale spear-nosed bat as a mammalian model for vocal learning, and contributes to comparative studies on hearing impairment across species. This article is part of the theme issue ‘Vocal learning in animals and humans’.
  • Lattenkamp, E. Z., Nagy, M., Drexl, M., Vernes, S. C., Wiegrebe, L., & Knörnschild, M. (2021). Hearing sensitivity and amplitude coding in bats are differentially shaped by echolocation calls and social calls. Proceedings of the Royal Society B: Biological Sciences, 288(1942): 20202600. doi:10.1098/rspb.2020.2600.

    Abstract

    Differences in auditory perception between species are influenced by phylogenetic origin and the perceptual challenges imposed by the natural environment, such as detecting prey- or predator-generated sounds and communication signals. Bats are well suited for comparative studies on auditory perception since they predominantly rely on echolocation to perceive the world, while their social calls and most environmental sounds have low frequencies. We tested if hearing sensitivity and stimulus level coding in bats differ between high and low-frequency ranges by measuring auditory brainstem responses (ABRs) of 86 bats belonging to 11 species. In most species, auditory sensitivity was equally good at both high- and low-frequency ranges, while amplitude was more finely coded for higher frequency ranges. Additionally, we conducted a phylogenetic comparative analysis by combining our ABR data with published data on 27 species. Species-specific peaks in hearing sensitivity correlated with peak frequencies of echolocation calls and pup isolation calls, suggesting that changes in hearing sensitivity evolved in response to frequency changes of echolocation and social calls. Overall, our study provides the most comprehensive comparative assessment of bat hearing capacities to date and highlights the evolutionary pressures acting on their sensory perception.

    Additional information

    data
  • Law, R., & Pylkkänen, L. (2021). Lists with and without syntax: A new approach to measuring the neural processing of syntax. The Journal of Neuroscience, 41(10), 2186-2196. doi:10.1523/JNEUROSCI.1179-20.2021.

    Abstract

    In the neurobiology of language, a fundamental challenge is deconfounding syntax from semantics. Changes in syntactic structure usually correlate with changes in meaning. We approached this challenge from a new angle. We deployed word lists, which are usually the unstructured control in studies of syntax, as both the test and the control stimulus. Three-noun lists (lamps, dolls, guitars) were embedded in sentences (The eccentric man hoarded lamps, dolls, guitars…) and in longer lists (forks, pen, toilet, rodeo, graves, drums, mulch, lamps, dolls, guitars…). This allowed us to perfectly control both lexical characteristics and local combinatorics: the same words occurred in both conditions and in neither case did the list items locally compose into phrases (e.g. ‘lamps’ and ‘dolls’ do not form a phrase). But in one case, the list partakes in a syntactic tree, while in the other, it does not. Being embedded inside a syntactic tree increased source-localized MEG activity at ~250-300ms from word onset in the left inferior frontal cortex, at ~300-350ms in the left anterior temporal lobe and, most reliably, at ~330-400ms in left posterior temporal cortex. In contrast, effects of semantic association strength, which we also varied, localized in left temporo-parietal cortex, with high associations increasing activity at around 400ms. This dissociation offers a novel characterization of the structure vs. meaning contrast in the brain: The fronto-temporal network that is familiar from studies of sentence processing can be driven by the sheer presence of global sentence structure, while associative semantics has a more posterior neural signature.

    Additional information

    Link to Preprint on BioRxiv
  • Lee, S. A., Ferrari, A., Vallortigara, G., & Sovrano, V. A. (2015). Boundary primacy in spatial mapping: Evidence from zebrafish (Danio rerio). Behavioural Processes, 119, 116-122. doi:10.1016/j.beproc.2015.07.012.

    Abstract

    The ability to map locations in the surrounding environment is crucial for any navigating animal. Decades of research on mammalian spatial representations suggest that environmental boundaries play a major role in both navigation behavior and hippocampal place coding. Although the capacity for spatial mapping is shared among vertebrates, including birds and fish, it is not yet clear whether such similarities in competence reflect common underlying mechanisms. The present study tests cue specificity in spatial mapping in zebrafish, by probing their use of various visual cues to encode the location of a nearby conspecific. The results suggest that untrained zebrafish, like other vertebrates tested so far, rely primarily on environmental boundaries to compute spatial relationships and, at the same time, use other visible features such as surface markings and freestanding objects as local cues to goal locations. We propose that the pattern of specificity in spontaneous spatial mapping behavior across vertebrates reveals cross-species commonalities in its underlying neural representations.
  • Lehtonen, M., Hulten, A., Rodríguez-Fornells, A., Cunillera, T., Tuomainen, J., & Laine, M. (2012). Differences in word recognition between early bilinguals and monolinguals: Behavioral and ERP evidence. Neuropsychologia, 50, 1362-1371. doi:10.1016/j.neuropsychologia.2012.02.021.

    Abstract

    We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals’ nondominant vs. dominant language and in some studies also when compared to corresponding monolinguals. In ERPs, language processing differences between bilinguals and monolinguals have typically been found in the N400 component. In the present study, highly proficient Finnish-Swedish bilinguals who had acquired both languages during childhood were compared to Finnish monolinguals during a visual lexical decision task and simultaneous ERP recordings. Behaviorally, we found that the response latencies were overall longer in bilinguals than monolinguals, and that the effects for all three factors, frequency, morphology, and lexicality, were also larger in bilinguals even though they had acquired both languages early and were highly proficient in them. In line with this, the N400 effects induced by frequency, morphology, and lexicality were larger for bilinguals than monolinguals. Furthermore, the ERP results also suggest that while most inflected Finnish words are decomposed into stem and suffix, only monolinguals have encountered high-frequency inflected word forms often enough to develop full-form representations for them. The larger behavioral and neural effects in bilinguals for these factors likely reflect a lower amount of exposure to words compared to monolinguals, as the language input of bilinguals is divided between two languages.
  • Lemen, H., Lieven, E., & Theakston, A. (2021). A comparison of the pragmatic patterns in the spontaneous because- and if-sentences produced by children and their caregivers. Journal of Pragmatics, 185, 15-34. doi:10.1016/j.pragma.2021.07.016.

    Abstract

    Findings from corpus (e.g. Diessel, 2004) and comprehension (e.g. De Ruiter et al., 2018) studies show that children produce the adverbial connectives because and if long before they seem able to understand them. However, although children's comprehension is typically tested on sentences expressing the pragmatic relationship which Sweetser (1990) calls “Content”, children also hear and produce sentences expressing “Speech–Act” relationships (e.g. De Ruiter et al., 2021; Kyratzis et al., 1990). To better understand the possible influence of pragmatic variation on 2- to 4-year-old children's acquisition of these connectives, we coded the because and if Speech–Act sentences of 14 British English-speaking mother-child dyads for the type of illocutionary act they contained, as well as the phrasing following the connective. Analyses revealed that children's because Speech–Act sentences were primarily explanations of Statements/Claims, while their if Speech–Act sentences typically related to permission and politeness. While children's because-sentences showed a great deal of individuality, their if-sentences closely resembled their mothers’, containing a high proportion of recurring phrases which appear to be abstracted from input. We discuss how these patterns might help shape children's understanding of each connective and contribute to the children's overall difficulty with because and if.
  • Lemhöfer, K., & Broersma, M. (2012). Introducing LexTALE: A quick and valid Lexical Test for Advanced Learners of English. Behavior Research Methods, 44, 325-343. doi:10.3758/s13428-011-0146-0.

    Abstract

    The increasing number of experimental studies on second language (L2) processing, frequently with English as the L2, calls for a practical and valid measure of English vocabulary knowledge and proficiency. In a large-scale study with Dutch and Korean speakers of L2 English, we tested whether LexTALE, a 5-min vocabulary test, is a valid predictor of English vocabulary knowledge and, possibly, even of general English proficiency. Furthermore, the validity of LexTALE was compared with that of self-ratings of proficiency, a measure frequently used by L2 researchers. The results showed the following in both speaker groups: (1) LexTALE was a good predictor of English vocabulary knowledge; (2) it also correlated substantially with a measure of general English proficiency; and (3) LexTALE was generally superior to self-ratings in its predictions. LexTALE, but not self-ratings, also correlated highly with previous experimental data on two word recognition paradigms. The test can be carried out on or downloaded from www.lextale.com.
  • Lesage, E., Morgan, B. E., Olson, A. C., Meyer, A. S., & Miall, R. C. (2012). Cerebellar rTMS disrupts predictive language processing. Current Biology, 22, R794-R795. doi:10.1016/j.cub.2012.07.006.

    Abstract

    The human cerebellum plays an important role in language, amongst other cognitive and motor functions [1], but a unifying theoretical framework about cerebellar language function is lacking. In an established model of motor control, the cerebellum is seen as a predictive machine, making short-term estimations about the outcome of motor commands. This allows for flexible control, on-line correction, and coordination of movements [2]. The homogeneous cytoarchitecture of the cerebellar cortex suggests that similar computations occur throughout the structure, operating on different input signals and with different output targets [3]. Several authors have therefore argued that this ‘motor’ model may extend to cerebellar nonmotor functions [3], [4] and [5], and that the cerebellum may support prediction in language processing [6]. However, this hypothesis has never been directly tested. Here, we used the ‘Visual World’ paradigm [7], where on-line processing of spoken sentence content can be assessed by recording the latencies of listeners' eye movements towards objects mentioned. Repetitive transcranial magnetic stimulation (rTMS) was used to disrupt function in the right cerebellum, a region implicated in language [8]. After cerebellar rTMS, listeners showed delayed eye fixations to target objects predicted by sentence content, while there was no effect on eye fixations in sentences without predictable content. The prediction deficit was absent in two control groups. Our findings support the hypothesis that computational operations performed by the cerebellum may support prediction during both motor control and language processing.

    Additional information

    Lesage_Suppl_Information.pdf
  • Lev-Ari, S., & Keysar, B. (2012). Less detailed representation of non-native language: Why non-native speakers’ stories seem more vague. Discourse Processes, 49(7), 523-538. doi:10.1080/0163853X.2012.698493.

    Abstract

    The language of non-native speakers is less reliable than the language of native speakers in conveying the speaker’s intentions. We propose that listeners expect such reduced reliability and that this leads them to adjust the manner in which they process and represent non-native language by representing non-native language in less detail. Experiment 1 shows that when people listen to a story, they are less able to detect a word change with a non-native than with a native speaker. This suggests they represent the language of a non-native speaker with fewer details. Experiment 2 shows that, above a certain threshold, the higher participants’ working memory is, the less they are able to detect the change with a non-native speaker. This suggests that adjustment to non-native speakers depends on working memory. This research has implications for the role of interpersonal expectations in the way people process language.
  • Lev-Ari, S. (2015). Comprehending non-native speakers: Theory and evidence for adjustment in manner of processing. Frontiers in Psychology, 5: 1546. doi:10.3389/fpsyg.2014.01546.

    Abstract

    Non-native speakers have lower linguistic competence than native speakers, which renders their language less reliable in conveying their intentions. We suggest that expectations of lower competence lead listeners to adapt their manner of processing when they listen to non-native speakers. We propose that listeners use cognitive resources to adjust by increasing their reliance on top-down processes and extracting less information from the language of the non-native speaker. An eye-tracking study supports our proposal by showing that when following instructions by a non-native speaker, listeners make more contextually-induced interpretations. Those with relatively high working memory also increase their reliance on context to anticipate the speaker’s upcoming reference, and are less likely to notice lexical errors in the non-native speech, indicating that they take less information from the speaker’s language. These results contribute to our understanding of the flexibility in language processing and have implications for interactions between native and non-native speakers.

    Additional information

    Data Sheet 1.docx
  • Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.

    Abstract

    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thus accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
  • Levelt, W. J. M. (2004). Speech, gesture and the origins of language. European Review, 12(4), 543-549. doi:10.1017/S1062798704000468.

    Abstract

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in producing utterances and considered how these mechanisms could have evolved. Wundt assumes that articulatory movements were originally rather arbitrary concomitants of larger, meaningful expressive bodily gestures. The sounds such articulations happened to produce slowly acquired the meaning of the gesture as a whole, ultimately making the gesture superfluous. Over a century later, gestural theories of language origins still abound. I argue that such theories are unlikely and wasteful, given the biological, neurological and genetic evidence.
  • Levelt, W. J. M. (2004). Een huis voor kunst en wetenschap [A house for art and science]. Boekman: Tijdschrift voor Kunst, Cultuur en Beleid, 16(58/59), 212-215.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M., Richardson, G., & La Heij, W. (1985). Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133-164. doi:10.1016/0749-596X(85)90021-X.

    Abstract

    The present paper studies how, in deictic expressions, the temporal interdependency of speech and gesture is realized in the course of motor planning and execution. Two theoretical positions were compared. On the “interactive” view the temporal parameters of speech and gesture are claimed to be the result of feedback between the two systems throughout the phases of motor planning and execution. The alternative “ballistic” view, however, predicts that the two systems are independent during the phase of motor execution, the temporal parameters having been preestablished in the planning phase. In four experiments subjects were requested to indicate which of an array of referent lights was momentarily illuminated. This was done by pointing to the light and/or by using a deictic expression (this/that light). The temporal and spatial course of the pointing movement was automatically registered by means of a Selspot opto-electronic system. By analyzing the moments of gesture initiation and apex, and relating them to the moments of speech onset, it was possible to show that, for deictic expressions, the ballistic view is very nearly correct.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, “Where does language come from?” This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levinson, S. C. (2012). Authorship: Include all institutes in publishing index [Correspondence]. Nature, 485, 582. doi:10.1038/485582c.
  • Levinson, S. C. (2012). Kinship and human thought. Science, 336(6084), 988-989. doi:10.1126/science.1222691.

    Abstract

    Language and communication are central to shaping concepts such as kinship categories.
  • Levinson, S. C. (2015). John Joseph Gumperz (1922–2013) [Obituary]. American Anthropologist, 117(1), 212-224. doi:10.1111/aman.12185.
  • Levinson, S. C. (2015). Other-initiated repair in Yélî Dnye: Seeing eye-to-eye in the language of Rossel Island. Open Linguistics, 1(1), 386-410. doi:10.1515/opli-2015-0009.

    Abstract

    Other-initiated repair (OIR) is the fundamental back-up system that ensures the effectiveness of human communication in its primordial niche, conversation. This article describes the interactional and linguistic patterns involved in other-initiated repair in Yélî Dnye, the Papuan language of Rossel Island, Papua New Guinea. The structure of the article is based on the conceptual set of distinctions described in Chapters 1 and 2 of the special issue, and describes the major properties of the Rossel Island system, and the ways in which OIR in this language both conforms to familiar European patterns and deviates from those patterns. Rossel Island specialities include lack of a Wh-word open class repair initiator, and a heavy reliance on visual signals that makes it possible both to initiate repair and confirm it non-verbally. But the overall system conforms to universal expectations.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2012). The original sin of cognitive science. Topics in Cognitive Science, 4, 396-403. doi:10.1111/j.1756-8765.2012.01195.x.

    Abstract

    Classical cognitive science was launched on the premise that the architecture of human cognition is uniform and universal across the species. This premise is biologically impossible and is being actively undermined by, for example, imaging genomics. Anthropology (including archaeology, biological anthropology, linguistics, and cultural anthropology) is, in contrast, largely concerned with the diversification of human culture, language, and biology across time and space—it belongs fundamentally to the evolutionary sciences. The new cognitive sciences that will emerge from the interactions with the biological sciences will focus on variation and diversity, opening the door for rapprochement with anthropology.
  • Levinson, S. C., & Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6: 731. doi:10.3389/fpsyg.2015.00731.

    Abstract

    The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioural data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks, Schegloff and Jefferson (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or ‘project’ as SSJ have it) the end of the current speaker’s turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviourally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next.
  • Levinson, S. C., & Gray, R. D. (2012). Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences, 16(3), 167-173. doi:10.1016/j.tics.2012.01.007.

    Abstract

    Computational methods have revolutionized evolutionary biology. In this paper we explore the impact these methods are now having on our understanding of the forces that both affect the diversification of human languages and shape human cognition. We show how these methods can illuminate problems ranging from the nature of constraints on linguistic variation to the role that social processes play in determining the rate of linguistic change. Throughout the paper we argue that the cognitive sciences should move away from an idealized model of human cognition, to a more biologically realistic model where variation is central.
  • Levshina, N. (2021). Cross-linguistic trade-offs and causal relationships between cues to grammatical subject and object, and the problem of efficiency-related explanations. Frontiers in Psychology, 12: 648200. doi:10.3389/fpsyg.2021.648200.

    Abstract

    Cross-linguistic studies focus on inverse correlations (trade-offs) between linguistic variables that reflect different cues to linguistic meanings. For example, if a language has no case marking, it is likely to rely on word order as a cue for identification of grammatical roles. Such inverse correlations are interpreted as manifestations of language users’ tendency to use language efficiently. The present study argues that this interpretation is problematic. Linguistic variables, such as the presence of case, or flexibility of word order, are aggregate properties, which do not represent the use of linguistic cues in context directly. Still, such variables can be useful for circumscribing the potential role of communicative efficiency in language evolution, if we move from cross-linguistic trade-offs to multivariate causal networks. This idea is illustrated by a case study of linguistic variables related to four types of Subject and Object cues: case marking, rigid word order of Subject and Object, tight semantics and verb-medial order. The variables are obtained from online language corpora in thirty languages, annotated with the Universal Dependencies. The causal model suggests that the relationships between the variables can be explained predominantly by sociolinguistic factors, leaving little space for a potential impact of efficient linguistic behavior.
  • Levshina, N., & Moran, S. (2021). Efficiency in human languages: Corpus evidence for universal principles. Linguistics Vanguard, 7(s3): 20200081. doi:10.1515/lingvan-2020-0081.

    Abstract

    Over the last few years, there has been a growing interest in communicative efficiency. It has been argued that language users act efficiently, saving effort for processing and articulation, and that language structure and use reflect this tendency. The emergence of new corpus data has brought to life numerous studies on efficient language use in the lexicon, in morphosyntax, and in discourse and phonology in different languages. In this introductory paper, we discuss communicative efficiency in human languages, focusing on evidence of efficient language use found in multilingual corpora. The evidence suggests that efficiency is a universal feature of human language. We provide an overview of different manifestations of efficiency on different levels of language structure, and we discuss the major questions and findings so far, some of which are addressed for the first time in the contributions in this special collection.
  • Levshina, N., & Moran, S. (Eds.). (2021). Efficiency in human languages: Corpus evidence for universal principles [Special Issue]. Linguistics Vanguard, 7(s3).
  • Levshina, N. (2021). Communicative efficiency and differential case marking: A reverse-engineering approach. Linguistics Vanguard, 7(s3): 20190087. doi:10.1515/lingvan-2019-0087.
  • Lewis, A. G., & Bastiaansen, M. C. M. (2015). A predictive coding framework for rapid neural dynamics during sentence-level language comprehension. Cortex, 68, 155-168. doi:10.1016/j.cortex.2015.02.014.

    Abstract

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using EEG and/or MEG, and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally.
  • Lewis, A. G., Wang, L., & Bastiaansen, M. C. M. (2015). Fast oscillatory dynamics during language comprehension: Unification versus maintenance and prediction? Brain and Language, 148, 51-63. doi:10.1016/j.bandl.2015.01.003.

    Abstract

    The role of neuronal oscillations during language comprehension is not yet well understood. In this paper we review and reinterpret the functional roles of beta- and gamma-band oscillatory activity during language comprehension at the sentence and discourse level. We discuss the evidence in favor of a role for beta and gamma in unification (the unification hypothesis), and in light of mounting evidence that cannot be accounted for under this hypothesis, we explore an alternative proposal linking beta and gamma oscillations to maintenance and prediction (respectively) during language comprehension. Our maintenance/prediction hypothesis is able to account for most of the findings that are currently available relating beta and gamma oscillations to language comprehension, and is in good agreement with other proposals about the roles of beta and gamma in domain-general cognitive processing. In conclusion we discuss proposals for further testing and comparing the prediction and unification hypotheses.
  • Liebal, K., & Haun, D. B. M. (2012). The importance of comparative psychology for developmental science [Review Article]. International Journal of Developmental Science, 6, 21-23. doi:10.3233/DEV-2012-11088.

    Abstract

    The aim of this essay is to elucidate the relevance of cross-species comparisons for the investigation of human behavior and its development. The focus is on the comparison of human children and another group of primates, the non-human great apes, with special attention to their cognitive skills. Integrating a comparative and developmental perspective, we argue, can provide additional answers to central and elusive questions about human behavior in general and its development in particular: What are the heritable predispositions of the human mind? What cognitive traits are uniquely human? In this sense, Developmental Science would benefit from results of Comparative Psychology.
  • Lima, C. F., Lavan, N., Evans, S., Agnew, Z., Halpern, A. R., Shanmugalingam, P., Meekings, S., Boebinger, D., Ostarek, M., McGettigan, C., Warren, J. E., & Scott, S. K. (2015). Feel the Noise: Relating individual differences in auditory imagery to the structure and function of sensorimotor systems. Cerebral Cortex, 25, 4638-4650. doi:10.1093/cercor/bhv134.

    Abstract

    Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
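    Representational similarity analysis (RSA), one of the methods named in this abstract, compares the geometry of neural response patterns with a model of the stimulus space: pairwise dissimilarities between condition-specific activation patterns are assembled into a representational dissimilarity matrix (RDM) and correlated with a model RDM. The sketch below is a generic illustration of that comparison with assumed, made-up inputs (hypothetical pattern and model arrays), not the authors' analysis pipeline:

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      def rsa_score(patterns, model_rdm):
          # patterns: conditions x voxels array of response patterns (assumed input)
          # model_rdm: conditions x conditions model dissimilarity matrix (assumed input)
          neural_rdm = pdist(patterns, metric="correlation")               # 1 - Pearson r, condensed upper triangle
          model_vec = model_rdm[np.triu_indices(patterns.shape[0], k=1)]   # matching upper-triangle entries of the model
          rho, _ = spearmanr(neural_rdm, model_vec)                        # rank correlation of neural and model RDMs
          return rho

      # Toy example: 4 sound categories x 50 voxels; the model says categories 0/1
      # resemble each other and categories 2/3 resemble each other.
      rng = np.random.default_rng(0)
      patterns = rng.standard_normal((4, 50))
      model_rdm = np.array([[0, 1, 2, 2],
                            [1, 0, 2, 2],
                            [2, 2, 0, 1],
                            [2, 2, 1, 0]])
      print(rsa_score(patterns, model_rdm))

    A higher score indicates that the pattern geometry in a region is better captured by the model. The paper's own "representational specificity" measure is defined in the article itself; the sketch only illustrates the general RSA logic of comparing neural and model dissimilarity structure.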
  • Linkenauger, S. A., Lerner, M. D., Ramenzoni, V. C., & Proffitt, D. R. (2012). A perceptual-motor deficit predicts social and communicative impairments in individuals with autism spectrum disorders. Autism Research, 5, 352-362. doi:10.1002/aur.1248.

    Abstract

    Individuals with autism spectrum disorders (ASDs) have known impairments in social and motor skills. Identifying putative underlying mechanisms of these impairments could lead to improved understanding of the etiology of core social/communicative deficits in ASDs, and identification of novel intervention targets. The ability to perceptually integrate one's physical capacities with one's environment (affordance perception) may be such a mechanism. This ability has been theorized to be impaired in ASDs, but this question has never been directly tested. Crucially, affordance perception has been shown to be amenable to learning; thus, if it is implicated in deficits in ASDs, it may be a valuable unexplored intervention target. The present study compared affordance perception in adolescents and adults with ASDs to typically developing (TD) controls. Two groups of individuals (adolescents and adults) with ASDs and age-matched TD controls completed well-established action capability estimation tasks (reachability, graspability, and aperture passability). Their caregivers completed a measure of their lifetime social/communicative deficits. Compared with controls, individuals with ASDs showed unprecedented gross impairments in relating information about their bodies' action capabilities to visual information specifying the environment. The magnitude of these deficits strongly predicted the magnitude of social/communicative impairments in individuals with ASDs. Thus, social/communicative impairments in ASDs may derive, at least in part, from deficits in basic perceptual–motor processes (e.g. action capability estimation). Such deficits may impair the ability to maintain and calibrate the relationship between oneself and one's social and physical environments, and present a fruitful, novel, and unexplored target for intervention.
  • Liszkowski, U., Brown, P., Callaghan, T., Takada, A., & De Vos, C. (2012). A prelinguistic gestural universal of human communication. Cognitive Science, 36, 698-713. doi:10.1111/j.1551-6709.2011.01228.x.

    Abstract

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10–14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same prototypical morphology of the extended index finger. Infants’ pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers’ and infants’ pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication.
  • Liszkowski, U., & Ramenzoni, V. C. (2015). Pointing to nothing? Empty places prime infants' attention to absent objects. Infancy, 20, 433-444. doi:10.1111/infa.12080.

    Abstract

    People routinely point to empty space when referring to absent entities. These points to "nothing" are meaningful because they direct attention to places that stand in for specific entities. Typically, the meaning of places in terms of absent referents is established through preceding discourse and accompanying language. However, it is unknown whether nonlinguistic actions can establish locations as meaningful places, and whether infants have the capacity to represent a place as standing in for an object. In a novel eye-tracking paradigm, 18-month-olds watched objects being placed in specific locations. Then, the objects disappeared and a point directed infants' attention to an emptied place. The point to the empty place primed infants in a subsequent scene (in which the objects appeared at novel locations) to look more to the object belonging to the indicated place than to a distracter referent. The place-object expectations were strong enough to interfere when reversing the place-object associations. Findings show that infants comprehend nonlinguistic reference to absent entities, which reveals an ontogenetically early, nonverbal understanding of places as representations of absent objects.
  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioural, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6: 1246. doi:10.3389/fpsyg.2015.01246.

    Abstract

    This review covers experimental approaches to sound-symbolism—from infants to adults, and from Sapir’s foundational studies to twenty-first century product naming. It synthesizes recent behavioral, developmental, and neuroimaging work into a systematic overview of the cross-modal correspondences that underpin iconic links between form and meaning. It also identifies open questions and opportunities, showing how the future course of experimental iconicity research can benefit from an integrated interdisciplinary perspective. Combining insights from psychology and neuroscience with evidence from natural languages provides us with opportunities for the experimental investigation of the role of sound-symbolism in language learning, language processing, and communication. The review finishes by describing how hypothesis-testing and model-building will help contribute to a cumulative science of sound-symbolism in human language.
  • Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6: 933. doi:10.3389/fpsyg.2015.00933.

    Abstract

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response and a sustained late positive complex in comparison to arbitrary adverbs. These results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of ideophones in comparison to arbitrary words. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds, and that these effects are detectable in natural language.
  • Long, M., Moore, I., Mollica, F., & Rubio-Fernandez, P. (2021). Contrast perception as a visual heuristic in the formulation of referential expressions. Cognition, 217: 104879. doi:10.1016/j.cognition.2021.104879.

    Abstract

    We hypothesize that contrast perception works as a visual heuristic, such that when speakers perceive a significant degree of contrast in a visual context, they tend to produce the corresponding adjective to describe a referent. The contrast perception heuristic supports efficient audience design, allowing speakers to produce referential expressions with minimum expenditure of cognitive resources, while facilitating the listener's visual search for the referent. We tested the perceptual contrast hypothesis in three language-production experiments. Experiment 1 revealed that speakers overspecify color adjectives in polychrome displays, whereas in monochrome displays they overspecified other properties that were contrastive. Further support for the contrast perception hypothesis comes from a re-analysis of previous work, which confirmed that color contrast elicits color overspecification when detected in a given display, but not when detected across monochrome trials. Experiment 2 revealed that even atypical colors (which are often overspecified) are only mentioned if there is color contrast. In Experiment 3, participants named a target color faster in monochrome than in polychrome displays, suggesting that the effect of color contrast is not analogous to ease of production. We conclude that the tendency to overspecify color in polychrome displays is not a bottom-up effect driven by the visual salience of color as a property, but possibly a learned communicative strategy. We discuss the implications of our account for pragmatic theories of referential communication and models of audience design, challenging the view that overspecification is a form of egocentric behavior.

    Additional information

    supplementary data
  • Long, M., Shukla, V., & Rubio-Fernandez, P. (2021). The development of simile comprehension: From similarity to scalar implicature. Child Development, 92(4), 1439-1457. doi:10.1111/cdev.13507.

    Abstract

    Similes require two different pragmatic skills: appreciating the intended similarity and deriving a scalar implicature (e.g., “Lucy is like a parrot” normally implies that Lucy is not a parrot), but previous studies overlooked this second skill. In Experiment 1, preschoolers (N = 48; ages 3–5) understood “X is like a Y” as an expression of similarity. In Experiment 2 (N = 99; ages 3–6, 13) and Experiment 3 (N = 201; ages 3–5 and adults), participants received metaphors (“Lucy is a parrot”) or similes (“Lucy is like a parrot”) as clues to select one of three images (a parrot, a girl or a parrot-looking girl). An early developmental trend revealed that 3-year-olds started deriving the implicature “X is not a Y,” whereas 5-year-olds performed like adults.
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample: one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
  • Lopopolo, A., Van den Bosch, A., Petersson, K. M., & Willems, R. M. (2021). Distinguishing syntactic operations in the brain: Dependency and phrase-structure parsing. Neurobiology of Language, 2(1), 152-175. doi:10.1162/nol_a_00029.

    Abstract

    Finding the structure of a sentence — the way its words hold together to convey meaning — is a fundamental step in language comprehension. Several brain regions, including the left inferior frontal gyrus, the left posterior superior temporal gyrus, and the left anterior temporal pole, are supposed to support this operation. The exact role of these areas is nonetheless still debated. In this paper we investigate the hypothesis that different brain regions could be sensitive to different kinds of syntactic computations. We compare the fit of phrase-structure and dependency structure descriptors to activity in brain areas using fMRI. Our results show a division between areas with regard to the type of structure computed, with the left ATP and left IFG favouring dependency structures and left pSTG favouring phrase structures.
  • Love, B. C., Kopeć, Ł., & Guest, O. (2015). Optimism bias in fans and sports reporters. PLoS One, 10(9): e0137685. doi:10.1371/journal.pone.0137685.

    Abstract

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).

    Additional information

    raw data
  • Lowndes, R., Molz, B., Warriner, L., Herbik, A., De Best, P. B., Raz, N., Gouws, A., Ahmadi, K., McLean, R. J., Gottlob, I., Kohl, S., Choritz, L., Maguire, J., Kanowski, M., Käsmann-Kellner, B., Wieland, I., Banin, E., Levin, N., Hoffmann, M. B., Morland, A. B., & Baseler, H. A. (2021). Structural differences across multiple visual cortical regions in the absence of cone function in congenital achromatopsia. Frontiers in Neuroscience, 15: 718958. doi:10.3389/fnins.2021.718958.

    Abstract

    Most individuals with congenital achromatopsia (ACHM) carry mutations that affect the retinal phototransduction pathway of cone photoreceptors, fundamental to both high acuity vision and colour perception. As the central fovea is occupied solely by cones, achromats have an absence of retinal input to the visual cortex and a small central area of blindness. Additionally, those with complete ACHM have no colour perception, and colour processing regions of the ventral cortex also lack typical chromatic signals from the cones. This study examined the cortical morphology (grey matter volume, cortical thickness, and cortical surface area) of multiple visual cortical regions in ACHM (n = 15) compared to normally sighted controls (n = 42) to determine the cortical changes that are associated with the retinal characteristics of ACHM. Surface-based morphometry was applied to T1-weighted MRI in atlas-defined early, ventral and dorsal visual regions of interest. Reduced grey matter volume in V1, V2, V3, and V4 was found in ACHM compared to controls, driven by a reduction in cortical surface area as there was no significant reduction in cortical thickness. Cortical surface area (but not thickness) was reduced in a wide range of areas (V1, V2, V3, TO1, V4, and LO1). Reduction in early visual areas with large foveal representations (V1, V2, and V3) suggests that the lack of foveal input to the visual cortex was a major driving factor in morphological changes in ACHM. However, the significant reduction in ventral area V4 coupled with the lack of difference in dorsal areas V3a and V3b suggest that deprivation of chromatic signals to visual cortex in ACHM may also contribute to changes in cortical morphology. This research shows that the congenital lack of cone input to the visual cortex can lead to widespread structural changes across multiple visual areas.

    Additional information

    table S1
  • Lozano, R., Vino, A., Lozano, C., Fisher, S. E., & Deriziotis, P. (2015). A de novo FOXP1 variant in a patient with autism, intellectual disability and severe speech and language impairment. European Journal of Human Genetics, 23, 1702-1707. doi:10.1038/ejhg.2015.66.

    Abstract

    FOXP1 (forkhead box protein P1) is a transcription factor involved in the development of several tissues, including the brain. An emerging phenotype of patients with protein-disrupting FOXP1 variants includes global developmental delay, intellectual disability and mild to severe speech/language deficits. We report on a female child with a history of severe hypotonia, autism spectrum disorder and mild intellectual disability with severe speech/language impairment. Clinical exome sequencing identified a heterozygous de novo FOXP1 variant c.1267_1268delGT (p.V423Hfs*37). Functional analyses using cellular models show that the variant disrupts multiple aspects of FOXP1 activity, including subcellular localization and transcriptional repression properties. Our findings highlight the importance of performing functional characterization to help uncover the biological significance of variants identified by genomics approaches, thereby providing insight into pathways underlying complex neurodevelopmental disorders. Moreover, our data support the hypothesis that de novo variants represent significant causal factors in severe sporadic disorders and extend the phenotype seen in individuals with FOXP1 haploinsufficiency.
  • Ludwig, A., Vernesi, C., Lieckfeldt, D., Lattenkamp, E. Z., Wiethölter, A., & Lutz, W. (2012). Origin and patterns of genetic diversity of German fallow deer as inferred from mitochondrial DNA. European Journal of Wildlife Research, 58(2), 495-501. doi:10.1007/s10344-011-0571-5.

    Abstract

    Although not native to Germany, fallow deer (Dama dama) are commonly found today, but their origin as well as the genetic structure of the founding members is still unclear. In order to address these aspects, we sequenced ~400 bp of the mitochondrial d-loop of 365 animals from 22 locations in nine German Federal States. Nine new haplotypes were detected and archived in GenBank. Our data produced evidence for a Turkish origin of the German founders. However, German fallow deer populations have complex patterns of mtDNA variation. In particular, three distinct clusters were identified: Schleswig-Holstein, Brandenburg/Hesse/Rhineland and Saxony/lower Saxony/Mecklenburg/Westphalia/Anhalt. Signatures of recent demographic expansions were found for the latter two. An overall pattern of reduced genetic variation was therefore accompanied by a relatively strong genetic structure, as highlighted by an overall ΦCT value of 0.74 (P < 0.001).
  • Lum, J. A., & Kidd, E. (2012). An examination of the associations among multiple memory systems, past tense, and vocabulary in typically developing 5-year-old children. Journal of Speech, Language, and Hearing Research, 55(4), 989-1006. doi:10.1044/1092-4388(2011/10-0137).
  • Lutzenberger, H., De Vos, C., Crasborn, O., & Fikkert, P. (2021). Formal variation in the Kata Kolok lexicon. Glossa: a journal of general linguistics, 6. doi:10.16995/glossa.5880.

    Abstract

    Sign language lexicons incorporate phonological specifications. Evidence from emerging sign languages suggests that phonological structure emerges gradually in a new language. In this study, we investigate variation in the form of signs across 20 deaf adult signers of Kata Kolok, a sign language that emerged spontaneously in a Balinese village community. Combining methods previously used for sign comparisons, we introduce a new numeric measure of variation. Our nuanced yet comprehensive approach to form variation integrates three levels (iconic motivation, surface realisation, feature differences) and allows for refinement through weighting the variation score by token and signer frequency. We demonstrate that variation in the form of signs appears in different degrees at different levels. Token frequency in a given dataset greatly affects how much variation can surface, suggesting caution in interpreting previous findings. Different sign variants have different scopes of use among the signing population, with some more widely used than others. Both frequency weightings (token and signer) identify dominant sign variants, i.e., sign forms that are produced frequently or by many signers. We argue that variation does not equal the absence of conventionalisation. Indeed, especially in micro-community sign languages, variation may be key to understanding patterns of language emergence.
  • MacLean, E. L., Matthews, L. J., Hare, B. A., Nunn, C. L., Anderson, R. C., Aureli, F., Brannon, E. M., Call, J., Drea, C. M., Emery, N. J., Haun, D. B. M., Herrmann, E., Jacobs, L. F., Platt, M. L., Rosati, A. G., Sandel, A. A., Schroepfer, K. K., Seed, A. M., Tan, J., Van Schaik, C. P., & Wobber, V. (2012). How does cognition evolve? Phylogenetic comparative psychology. Animal Cognition, 15, 223-238. doi:10.1007/s10071-011-0448-8.

    Abstract

    Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution.
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Book review]. Magyar Pszichológiai Szemle, 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Language and Evolution] (Lilla Magyari); A historical reader on the brain [Charles G. Gross: Brain, Vision, Memory: Tales in the History of Neuroscience] (Edit Anna Garab); Art or science [Tihamér Margitay: The Craft of Argumentation: The Analysis, Evaluation and Critique of Arguments] (Gábor Zemplén); Are we really rational? [Herbert Simon: The Role of Reason in Human Life] (Péter Kardos); Sex differences in cognition [Doreen Kimura: Female Brain, Male Brain] (Noémi Hahn).
  • Magyari, L., & De Ruiter, J. P. (2012). Prediction of turn-ends based on anticipation of upcoming words. Frontiers in Psychology, 3, 376. doi:10.3389/fpsyg.2012.00376.

    Abstract

    During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor’s turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker’s turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends, because they know how it ends. We conducted a gating study to examine if better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn’s end was estimated better in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on anticipation of words and syntactic frames in comprehension.
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A., & Van Staden, M. (2015). Can nomenclature for the body be explained by embodiment theories? Topics in Cognitive Science, 7(4), 570-594. doi:10.1111/tops.12159.

    Abstract

    According to widespread opinion, the meaning of body part terms is determined by salient discontinuities in the visual image; such that hands, feet, arms, and legs, are natural parts. If so, one would expect these parts to have distinct names which correspond in meaning across languages. To test this proposal, we compared three unrelated languages—Dutch, Japanese, and Indonesian—and found both naming systems and boundaries of even basic body part terms display variation across languages. Bottom-up cues alone cannot explain natural language semantic systems; there simply is not a one-to-one mapping of the body semantic system to the body structural description. Although body parts are flexibly construed across languages, body parts semantics are, nevertheless, constrained by non-linguistic representations in the body structural description, suggesting these are necessary, although not sufficient, in accounting for aspects of the body lexicon.
  • Majid, A. (2015). Cultural factors shape olfactory language. Trends in Cognitive Sciences, 19(11), 629-630. doi:10.1016/j.tics.2015.06.009.
  • Majid, A. (2012). Current emotion research in the language sciences. Emotion Review, 4, 432-443. doi:10.1177/1754073912445827.

    Abstract

    When researchers think about the interaction between language and emotion, they typically focus on descriptive emotion words. This review demonstrates that emotion can interact with language at many levels of structure, from the sound patterns of a language to its lexicon and grammar, and beyond to how it appears in conversation and discourse. Findings are considered from diverse subfields across the language sciences, including cognitive linguistics, psycholinguistics, linguistic anthropology, and conversation analysis. Taken together, it is clear that emotional expression is finely tuned to language-specific structures. Future emotion research can better exploit cross-linguistic variation to unravel possible universal principles operating between language and emotion.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies crossculturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language in Society, 33(3), 429-433.
  • Majid, A., Jordan, F., & Dunn, M. (Eds.). (2015). Semantic systems in closely related languages [Special Issue]. Language Sciences, 49.
  • Majid, A., Jordan, F., & Dunn, M. (2015). Semantic systems in closely related languages. Language Sciences, 49, 1-18. doi:10.1016/j.langsci.2014.11.002.

    Abstract

    In each semantic domain studied to date, there is considerable variation in how meanings are expressed across languages. But are some semantic domains more likely to show variation than others? Is the domain of space more or less variable in its expression than other semantic domains, such as containers, body parts, or colours? According to many linguists, the meanings expressed in grammaticised expressions, such as (spatial) adpositions, are more likely to be similar across languages than meanings expressed in open class lexical items. On the other hand, some psychologists predict there ought to be more variation across languages in the meanings of adpositions, than in the meanings of nouns. This is because relational categories, such as those expressed as adpositions, are said to be constructed by language; whereas object categories expressed as nouns are predicted to be “given by the world”. We tested these hypotheses by comparing the semantic systems of closely related languages. Previous cross-linguistic studies emphasise the importance of studying diverse languages, but we argue that a focus on closely related languages is advantageous because domains can be compared in a culturally- and historically-informed manner. Thus we collected data from 12 Germanic languages. Naming data were collected from at least 20 speakers of each language for containers, body-parts, colours, and spatial relations. We found the semantic domains of colour and body-parts were the most similar across languages. Containers showed some variation, but spatial relations expressed in adpositions showed the most variation. The results are inconsistent with the view expressed by most linguists. Instead, we find meanings expressed in grammaticised forms are more variable than meanings in open class lexical items.
  • Majid, A. (2012). The role of language in a science of emotion [Comment]. Emotion review, 4, 380-381. doi:10.1177/1754073912445819.

    Abstract

    Emotion scientists often take an ambivalent stance concerning the role of language in a science of emotion. However, it is important for emotion researchers to contemplate some of the consequences of current practices for their theory building. There is a danger of an overreliance on the English language as a transparent window into emotion categories. More consideration has to be given to cross-linguistic comparison in the future so that models of language acquisition and of the language–cognition interface fit better the extant variation found in today’s peoples.
  • Majid, A., Boroditsky, L., & Gaby, A. (Eds.). (2012). Time in terms of space [Research topic] [Special Issue]. Frontiers in cultural psychology. Retrieved from http://www.frontiersin.org/cultural_psychology/researchtopics/Time_in_terms_of_space/755.

    Abstract

    This Research Topic explores the question: what is the relationship between representations of time and space in cultures around the world? This question touches on the broader issue of how humans come to represent and reason about abstract entities – things we cannot see or touch. Time is a particularly opportune domain to investigate this topic. Across cultures, people use spatial representations for time, for example in graphs, time-lines, clocks, sundials, hourglasses, and calendars. In language, time is also heavily related to space, with spatial terms often used to describe the order and duration of events. In English, for example, we might move a meeting forward, push a deadline back, attend a long concert or go on a short break. People also make consistent spatial gestures when talking about time, and appear to spontaneously invoke spatial representations when processing temporal language. A large body of evidence suggests a close correspondence between temporal and spatial language and thought. However, the ways that people spatialize time can differ dramatically across languages and cultures. This research topic identifies and explores some of the sources of this variation, including patterns in spatial thinking, patterns in metaphor, gesture and other cultural systems. This Research Topic explores how speakers of different languages talk about time and space and how they think about these domains, outside of language. The Research Topic invites papers exploring the following issues: 1. Do the linguistic representations of space and time share the same lexical and morphosyntactic resources? 2. To what extent does the conceptualization of time follow the conceptualization of space?
  • Mak, M., & Willems, R. M. (2021). Eyelit: Eye movement and reader response data during literary reading. Journal of open humanities data, 7: 25. doi:10.5334/johd.49.

    Abstract

    An eye-tracking data set is described of 102 participants reading three Dutch literary short stories each (7790 words in total per participant). The pre-processed data set includes (1) Fixation report, (2) Saccade report, (3) Interest Area report, (4) Trial report (aggregated data for each page), (5) Sample report (sampling rate = 500 Hz), (6) Questionnaire data on reading experiences and participant characteristics, and (7) word characteristics for all words (with the potential of calculating additional word characteristics). It is stored on DANS, and can be used to study word characteristics or literary reading and all facets of eye movements.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.

    Abstract

    Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.

    Additional information

    supplementary materials
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.

    Abstract

    Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
  • Manrique, E., & Enfield, N. J. (2015). Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language. Frontiers in Psychology, 6: 1326. doi:10.3389/fpsyg.2015.01326.

    Abstract

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look," which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known on-record practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

    Additional information

    Manrique_Enfield_2015_supp.pdf
  • Martin, J.-R., Kösem, A., & van Wassenhove, V. (2015). Hysteresis in Audiovisual Synchrony Perception. PLoS One, 10(3): e0119365. doi:10.1371/journal.pone.0119365.

    Abstract

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic condition, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective condition, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications regarding the comparative study of hysteresis and adaptation phenomena.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2012). Event-related brain potentials index cue-based retrieval interference during sentence comprehension. NeuroImage, 59(2), 1859-1869. doi:10.1016/j.neuroimage.2011.08.057.

    Abstract

    Successful language use requires access to products of past processing within an evolving discourse. A central issue for any neurocognitive theory of language then concerns the role of memory variables during language processing. Under a cue-based retrieval account of language comprehension, linguistic dependency resolution (e.g., retrieving antecedents) is subject to interference from other information in the sentence, especially information that occurs between the words that form the dependency (e.g., between the antecedent and the retrieval site). Retrieval interference may then shape processing complexity as a function of the match of the information at retrieval with the antecedent versus other recent or similar items in memory. To address these issues, we studied the online processing of ellipsis in Castilian Spanish, a language with morphological gender agreement. We recorded event-related brain potentials while participants read sentences containing noun-phrase ellipsis indicated by the determiner otro/a (‘another’). These determiners had a grammatically correct or incorrect gender with respect to their antecedent nouns that occurred earlier in the sentence. Moreover, between each antecedent and determiner, another noun phrase occurred that was structurally unavailable as an antecedent and that matched or mismatched the gender of the antecedent (i.e., a local agreement attractor). In contrast to extant P600 results on agreement violation processing, and inconsistent with predictions from neurocognitive models of sentence processing, grammatically incorrect determiners evoked a sustained, broadly distributed negativity compared to correct ones between 400 and 1000 ms after word onset, possibly related to sustained negativities as observed for referential processing difficulties. Crucially, this effect was modulated by the attractor: an increased negativity was observed for grammatically correct determiners that did not match the gender of the attractor, suggesting that structurally unavailable noun phrases were at least temporarily considered for grammatically correct ellipsis. These results constitute the first ERP evidence for cue-based retrieval interference during comprehension of grammatical sentences.
  • Matić, D., & Odé, C. (2015). On prosodic signalling of focus in Tundra Yukaghir. Acta Linguistica Petropolitana, 11(2), 627-644.
  • Matić, D. (2012). Review of: Assertion by Mark Jary, Palgrave Macmillan, 2010 [Web Post]. The LINGUIST List. Retrieved from http://linguistlist.org/pubs/reviews/get-review.cfm?SubID=4547242.

    Abstract

    Even though assertion has held centre stage in much philosophical and linguistic theorising on language, Mark Jary’s ‘Assertion’ represents the first book-length treatment of the topic. The content of the book is aptly described by the author himself: ''This book has two aims. One is to bring together and discuss in a systematic way a range of perspectives on assertion: philosophical, linguistic and psychological. [...] The other is to present a view of the pragmatics of assertion, with particular emphasis on the contribution of the declarative mood to the process of utterance interpretation.'' (p. 1). The promise contained in this introductory note is to a large extent fulfilled: the first seven chapters of the book discuss many of the relevant philosophical and linguistic approaches to assertion and at the same time provide the background for the presentation of Jary's own view on the pragmatics of declaratives, presented in the last (and longest) chapter.
  • McConnell, K., & Blumenthal-Dramé, A. (2021). Usage-Based Individual Differences in the Probabilistic Processing of Multi-Word Sequences. Frontiers in Communication, 6: 703351. doi:10.3389/fcomm.2021.703351.

    Abstract

    While it is widely acknowledged that both predictive expectations and retrodictive integration influence language processing, the individual differences that affect these two processes and the best metrics for observing them have yet to be fully described. The present study aims to contribute to the debate by investigating the extent to which experience-based variables modulate the processing of word pairs (bigrams). Specifically, we investigate how age and reading experience correlate with lexical anticipation and integration, and how this effect can be captured by the metrics of forward and backward transition probability (TP). Participants read more and less strongly associated bigrams, paired to control for known lexical covariates such as bigram frequency and meaning (i.e., absolute control, total control, absolute silence, total silence) in a self-paced reading (SPR) task. They additionally completed assessments of exposure to print text (Author Recognition Test, Shipley vocabulary assessment, Words that Go Together task) and provided their age. Results show that both older age and lesser reading experience individually correlate with stronger TP effects. Moreover, TP effects differ across the spillover region (the two words following the noun in the bigram).
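    The two metrics central to this study, forward and backward transition probability (TP), are standard corpus statistics: for a bigram w1 w2, forward TP is the conditional probability P(w2 | w1) = count(w1 w2) / count(w1), and backward TP is P(w1 | w2) = count(w1 w2) / count(w2). A minimal sketch of how both can be estimated from raw counts (the toy corpus below is invented for illustration and is not the authors' material or reference corpus):

      from collections import Counter

      def transition_probabilities(tokens):
          # Map each attested bigram to its forward and backward transition probability.
          unigrams = Counter(tokens)
          bigrams = Counter(zip(tokens, tokens[1:]))
          forward = {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}   # P(w2 | w1)
          backward = {(w1, w2): c / unigrams[w2] for (w1, w2), c in bigrams.items()}  # P(w1 | w2)
          return forward, backward

      tokens = "she waited in absolute silence then sat in total silence".split()
      fwd, bwd = transition_probabilities(tokens)
      print(fwd[("absolute", "silence")])  # 1.0: every "absolute" is followed by "silence" in this toy corpus
      print(bwd[("absolute", "silence")])  # 0.5: only half of the "silence" tokens are preceded by "absolute"

    In the study itself such probabilities would be estimated from a large reference corpus; the sketch only makes explicit the direction of the two conditional probabilities, which the abstract links to anticipation (forward TP) and integration (backward TP).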
  • McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.

    Abstract

    An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes.
  • McQueen, J. M., Tyler, M., & Cutler, A. (2012). Lexical retuning of children’s speech perception: Evidence for knowledge about words’ component sounds. Language Learning and Development, 8, 317-339. doi:10.1080/15475441.2011.641887.

    Abstract

    Children hear new words from many different talkers; to learn words most efficiently, they should be able to represent them independently of talker-specific pronunciation detail. However, do children know what the component sounds of words should be, and can they use that knowledge to deal with different talkers' phonetic realizations? Experiment 1 replicated prior studies on lexically guided retuning of speech perception in adults, with a picture-verification methodology suitable for children. One participant group heard an ambiguous fricative ([s/f]) replacing /f/ (e.g., in words like giraffe); another group heard [s/f] replacing /s/ (e.g., in platypus). The first group subsequently identified more tokens on a Simpie-[s/f]impie-Fimpie toy-name continuum as Fimpie. Experiments 2 and 3 found equivalent lexically guided retuning effects in 12- and 6-year-olds. Children aged 6 have all that is needed for adjusting to talker variation in speech: detailed and abstract phonological representations and the ability to apply them during spoken-word recognition.

  • Meekings, S., Boebinger, D., Evans, S., Lima, C. F., Chen, S., Ostarek, M., & Scott, S. K. (2015). Do we know what we’re saying? The roles of attention and sensory information during speech production. Psychological Science, 26(12), 1975-1977. doi:10.1177/0956797614563766.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say ‘‘quarter to three’’) requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target in one or two hours and in five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-min difference between prime and target, but the difference in hour had no effect. Moreover, the distance in minutes had only an effect for half past the hour and the coming hour, but not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
  • Meira, S., & Drude, S. (2015). A summary reconstruction of Proto-Maweti-Guarani segmental phonology. Boletim do Museu Paraense Emilio Goeldi:Ciencias Humanas, 10, 275-296. doi: 10.1590/1981-81222015000200005.

    Abstract

    This paper presents a succinct reconstruction of the segmental phonology of Proto-Maweti-Guarani, the hypothetical protolanguage from which modern Mawe, Aweti and the Tupi-Guarani branches of the Tupi linguistic family have evolved. Based on about 300 cognate sets from the authors' field data (for Mawe and Aweti) and from Mello's reconstruction (2000) for Proto-Tupi-Guarani (with additional information from other works; and with a few changes concerning certain doubtful features, such as the status of stem-final lenis consonants ∗r and ∗β, and the distinction of ∗c and ∗č), the consonants and vowels of Proto-Maweti-Guarani were reconstructed with the help of the traditional historical-comparative method. The development of the reconstructed segments is then traced from the protolanguage to each of the modern branches. A comparison with other claims made about Proto-Maweti-Guarani is given in the conclusion.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
  • Mellem, M. S., Bastiaansen, M. C. M., Pilgrim, L. K., Medvedev, A. V., & Friedman, R. B. (2012). Word class and context affect alpha-band oscillatory dynamics in an older population. Frontiers in Psychology, 3, 97. doi:10.3389/fpsyg.2012.00097.

    Abstract

    Differences in the oscillatory EEG dynamics of reading open class (OC) and closed class (CC) words have previously been found (Bastiaansen et al., 2005) and are thought to reflect differences in lexical-semantic content between these word classes. In particular, the theta-band (4–7 Hz) seems to play a prominent role in lexical-semantic retrieval. We tested whether this theta effect is robust in an older population of subjects. Additionally, we examined how the context of a word can modulate the oscillatory dynamics underlying retrieval for the two different classes of words. Older participants (mean age 55) read words presented in either syntactically correct sentences or in a scrambled order (“scrambled sentence”) while their EEG was recorded. We performed time–frequency analysis to examine how power varied based on the context or class of the word. We observed larger power decreases in the alpha (8–12 Hz) band between 200–700 ms for the OC compared to CC words, but this was true only for the scrambled sentence context. We did not observe differences in theta power between these conditions. Context exerted an effect on the alpha and low beta (13–18 Hz) bands between 0 and 700 ms. These results suggest that the previously observed word class effects on theta power changes in a younger participant sample do not seem to be a robust effect in this older population. Though this is an indirect comparison between studies, it may suggest the existence of aging effects on word retrieval dynamics for different populations. Additionally, the interaction between word class and context suggests that word retrieval mechanisms interact with sentence-level comprehension mechanisms in the alpha-band.
