Publications

  • Klein, W., & Meibauer, J. (2011). Einleitung. LiLi, Zeitschrift für Literaturwissenschaft und Linguistik, 41(162), 5-8.

    Abstract

    When the adults named some object and turned towards it as they did so, I perceived this, and I grasped that the object was signified by the sounds they uttered, since they wished to point it out. This I gathered from their gestures, the natural language of all peoples, the language which, through the play of face and eyes, through the movements of the limbs and the tone of the voice, indicates the affections of the soul when it desires, or holds on to, or rejects, or flees from something. Thus I gradually learned to understand which things the words signified that I heard uttered again and again, in their proper places in different sentences. And when my mouth had grown accustomed to these signs, I used them to express my wishes. (Augustine, Confessions I, 8) This is the quotation of a quotation: at the beginning of the Philosophical Investigations, Ludwig Wittgenstein cites this passage from Augustine's Confessions, in which Augustine describes how, as he remembers it, he learned his mother tongue (Wittgenstein gives the Latin text and then his own translation; only the latter is quoted here). The passage forms the starting point for Wittgenstein's famous reflections on how human language works and for his idea of the language game. Now, we do not know how accurately Augustine really remembers, or whether he merely constructed all of this, like so much that has since been said and written about language acquisition, in the belief that this is how it must have been. But unlike so much that has since been said and written about language acquisition, it is wonderfully formulated, and it contains two points that in scientific research to this day, if not disputed outright, have often not been seen, and, where they have been seen, have not really been taken seriously: A. We learn language in everyday communication with our social environment. B.
    To learn a language, it is not enough to hear that language; rather, we need a wealth of accompanying information, such as, in this passage, the adults' gestures and facial expressions. One would actually like to take both points for granted. Herodotus tells the famous story of the pharaoh Psammetichus, who wanted to know what the first and true language of mankind was and ordered two newborns to be raised without anyone speaking to them; the first word they uttered sounded, so Herodotus reports, like the Phrygian word for bread, and so it was assumed that the original language of mankind was Phrygian. In this conception of language acquisition, input from the social environment plays a role only insofar as the true language, present from birth, can be displaced by another: children who grow up in an English-speaking environment do not speak the original language. This theory is considered obsolete today. Yet in its assessment of the relative weight of the linguistic knowledge that is present from the start and of what must be taken from the social environment, it is not so far removed from some more recent theories of language acquisition: in Chomsky's idea of Universal Grammar, the theoretical foundation of a substantial part of modern language acquisition research, "language" is mainly something innate, to that extent the same for all humans and independent of the particular input. What the child, or, in second language acquisition, the adult learner encounters of language in the environment is not used to derive certain regularities and make them one's own; the input functions rather as a kind of external trigger for knowledge that is already latently present. For learning the vocabulary this certainly does not hold. It cannot be innate that the moon is called luna. For other domains of language, however, the extent of what is innate is very much disputed.
    On this way of thinking, what was said under A does not hold. Most modern language acquisition researchers assign considerably greater weight to the input: we copy the characteristic properties of a particular linguistic system by analyzing the input so as to derive the regularities that underlie it. The input reaches us in the form of sound sequences (or gestures, and later written signs) that are used for communicative purposes by others who have mastered the system. Learners must break these sound sequences down into smaller units, assign meanings to them, and probe them for the regularities according to which they combine into more complex expressions. This, and much else, is what the language faculty innate to human beings accomplishes; no other species can do it (you can play as much Chinese to a horse as you like, it will not learn the language). But we could not do it either if all we had were the sound. If, in a variation of Psammetichus's experiment, one locked someone in a room, played Chinese to them day in, day out, and otherwise looked after them well, they would not learn it, whether as a child or as an adult. Perhaps they would discover some structural properties of the stream of sound; but even after years they would know no Chinese. One needs the stream of sound as the perceptible expression of the underlying language, and one needs all the information that can be drawn from the particular speech situation or from one's already available knowledge of other kinds. Augustine radically simplified both; but in principle he is right, and one should therefore expect language acquisition research to take this into account. It rarely does.
    Insofar as it steps out of the shell of theory at all and looks at the actual course of language acquisition, it concentrates largely on what the children themselves say (extensive corpora serve this purpose), or it investigates in experimental settings how children understand, or fail to understand, particular words or structures. When done well, this is highly informative. But the actual processing of the input in its double sense, sound waves and parallel information, is rarely placed at the center of interest. This leads to peculiar distortions. Language acquisition research, for instance, looks above all at declarative main clauses. A not insubstantial part of what children hear, however, consists of imperatives ("Do that!", "Don't do that!"). Such imperatives normally have no subject. An intelligent child must therefore conclude that German, in a not insubstantial part of its grammatical structures, is a "pro-drop language", that is, a language in which the subject can be omitted. No linguist would arrive at this idea; but it matches the actual facts, and these are reflected in the input the child has to process. This issue is devoted to a language acquisition situation in which, unlike, say, a conversation at the breakfast table, the input in its double form is easy to survey, without the situation being unnatural and remote from the normal learning environment, as a controlled experiment would be: looking at, reading aloud from, and reading children's books.
    One can picture such a situation as a natural extension of what Augustine describes: the children hear what the adults say, and their attention is directed to particular things while they listen and look; only here it is not a matter of single words but of complex expressions and of complex yet still surveyable accompanying information. Children's books have certainly played a role in language acquisition research. There, however, whether as a pure sequence of pictures, as pictures with text, or as text alone, they mostly serve only as a kind of template for the children's own language production: the children are to derive a story from the template and tell it in their own words. The best-known, though by no means the only, example are the "frog stories" initiated by Michael Bamberg, Ruth Berman, and Dan Slobin in the 1980s: retellings of a simple picture story that are now available in numerous languages and have yielded many insights into the most varied aspects of developing language proficiency, from inflectional morphology to text structure. That is good and sensible; but one really ought to go a step further and observe, as if through a microscope, how children derive their regularities from the interaction. This would substantially enrich our ideas about the course of language acquisition and the laws by which it proceeds, and perhaps put them on an entirely new footing. The contributions to this issue provide a number of examples, of which only one small but particularly striking one shall be mentioned here.
    There are numerous analyses based on picture stories that investigate how children refer to a particular person or thing in ongoing discourse: whether, for instance, they can correctly use definite and indefinite nominal expressions (a boy – the boy), lexical or pronominal noun phrases (the boy – he), or even empty elements (the boy wakes up and 0 looks for his dog). The picture that research currently offers of this essential part of language proficiency is anything but uniform. Views on when the definite-indefinite distinction is mastered span most of childhood, depending on which studies one consults. The paper by Katrin Dammann-Thedens makes clear that children at a certain age are often not at all aware that a particular person or thing shown in successive pictures is one and the same, even if it looks similar, and on closer inspection that is no trivial question. These observations cast an entirely new light on the idea of referential continuity in discourse and on its expression through nominal expressions such as those just mentioned. Perhaps we have quite mistaken ideas about how children understand the accompanying information, here supplied by the pictures of a story, and thus process it for language acquisition. Such observations are at first isolated points: not answers, but pointers to things that must be considered. But their analysis and, more generally, a closer look at what actually happens when children look at children's books may lead us to a much deeper understanding of what actually happens when a language is acquired.
  • Klein, W. (1991). Geile Binsenbüschel, sehr intime Gespielen: Ein paar Anmerkungen über Arno Schmidt als Übersetzer. Zeitschrift für Literaturwissenschaft und Linguistik, 84, 124-129.
  • Klein, W. (1991). Raumausdrücke. Linguistische Berichte, 132, 77-114.
  • Klein, W., & Von Stutterheim, C. (1991). Text structure and referential movement. Arbeitsberichte des Forschungsprogramms S&P: Sprache und Pragmatik, 22.
  • Klein, W. (1991). Seven trivia of language acquisition. In L. Eubank (Ed.), Point counterpoint: Universal grammar in the second language (pp. 49-70). Amsterdam: Benjamins.
  • Klein, W. (1991). SLA theory: Prolegomena to a theory of language acquisition and implications for Theoretical Linguistics. In T. Huebner, & C. Ferguson (Eds.), Crosscurrents in second language acquisition and linguistic theories (pp. 169-194). Amsterdam: Benjamins.
  • Klein, W. (1991). Was kann sich die Übersetzungswissenschaft von der Linguistik erwarten? Zeitschrift für Literaturwissenschaft und Linguistik, 84, 104-123.
  • Koenigs, M., Acheson, D. J., Barbey, A. K., Soloman, J., Postle, B. R., & Grafman, J. (2011). Areas of left perisylvian cortex mediate auditory-verbal short-term memory. Neuropsychologia, 49(13), 3612-3619. doi:10.1016/j.neuropsychologia.2011.09.013.

    Abstract

    A contentious issue in memory research is whether verbal short-term memory (STM) depends on a neural system specifically dedicated to the temporary maintenance of information, or instead relies on the same brain areas subserving the comprehension and production of language. In this study, we examined a large sample of adults with acquired brain lesions to identify the critical neural substrates underlying verbal STM and the relationship between verbal STM and language processing abilities. We found that patients with damage to selective regions of left perisylvian cortex – specifically the inferior frontal and posterior temporal sectors – were impaired on auditory–verbal STM performance (digit span), as well as on tests requiring the production and/or comprehension of language. These results support the conclusion that verbal STM and language processing are mediated by the same areas of left perisylvian cortex.

  • Kokal, I., Engel, A., Kirschner, S., & Keysers, C. (2011). Synchronized drumming enhances activity in the caudate and facilitates prosocial commitment - If the rhythm comes easily. PLoS One, 6(11), e27272. doi:10.1371/journal.pone.0027272.

    Abstract

    Why does chanting, drumming or dancing together make people feel united? Here we investigate the neural mechanisms underlying interpersonal synchrony and its subsequent effects on prosocial behavior among synchronized individuals. We hypothesized that areas of the brain associated with the processing of reward would be active when individuals experience synchrony during drumming, and that these reward signals would increase prosocial behavior toward this synchronous drum partner. Eighteen female non-musicians were scanned with functional magnetic resonance imaging while they drummed a rhythm, in alternating blocks, with two different experimenters: one drumming in-synchrony and the other out-of-synchrony relative to the participant. In the last scanning part, which served as the experimental manipulation for the following prosocial behavioral test, one of the experimenters drummed with one half of the participants in-synchrony and with the other out-of-synchrony. After scanning, this experimenter "accidentally" dropped eight pencils, and the number of pencils collected by the participants was used as a measure of prosocial commitment. Results revealed that participants who mastered the novel rhythm easily before scanning showed increased activity in the caudate during synchronous drumming. The same area also responded to monetary reward in a localizer task with the same participants. The activity in the caudate during experiencing synchronous drumming also predicted the number of pencils the participants later collected to help the synchronous experimenter of the manipulation run. In addition, participants collected more pencils to help the experimenter when she had drummed in-synchrony than out-of-synchrony during the manipulation run. By showing an overlap in activated areas during synchronized drumming and monetary reward, our findings suggest that interpersonal synchrony is related to the brain's reward system.
  • Kornfeld, L., & Rossi, G. (2023). Enforcing rules during play: Knowledge, agency, and the design of instructions and reminders. Research on Language and Social Interaction, 56(1), 42-64. doi:10.1080/08351813.2023.2170637.

    Abstract

    Rules of behavior are fundamental to human sociality. Whether on the road, at the dinner table, or during a game, people monitor one another’s behavior for conformity to rules and may take action to rectify violations. In this study, we examine two ways in which rules are enforced during games: instructions and reminders. Building on prior research, we identify instructions as actions produced to rectify violations based on another’s lack of knowledge of the relevant rule; knowledge that the instruction is designed to impart. In contrast to this, the actions we refer to as reminders are designed to enforce rules presupposing the transgressor’s competence and treating the violation as the result of forgetfulness or oversight. We show that instructing and reminding actions differ in turn design, sequential development, the epistemic stances taken by transgressors and enforcers, and in how the action affects the progressivity of the interaction. Data are in German and Italian from the Parallel European Corpus of Informal Interaction (PECII).
  • Kösem, A., Dai, B., McQueen, J. M., & Hagoort, P. (2023). Neural envelope tracking of speech does not unequivocally reflect intelligibility. NeuroImage, 272: 120040. doi:10.1016/j.neuroimage.2023.120040.

    Abstract

    During listening, brain activity tracks the rhythmic structure of speech signals. Here, we directly dissociated the contribution of neural envelope tracking to the processing of speech acoustic cues from its contribution to linguistic processing. We examined the neural changes associated with the comprehension of Noise-Vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended, (2) training, with exposure to the original clear version of each speech stimulus, and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech) but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in either the theta or the delta range, in both auditory regions of interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution is needed when choosing control signals for speech-brain tracking analyses, since a slight change in acoustic parameters can have strong effects on the neural tracking response.
  • Kucera, K. S., Reddy, T. E., Pauli, F., Gertz, J., Logan, J. E., Myers, R. M., & Willard, H. F. (2011). Allele-specific distribution of RNA polymerase II on female X chromosomes. Human Molecular Genetics, 20, 3964-3973. doi:10.1093/hmg/ddr315.

    Abstract

    While the distribution of RNA polymerase II (PolII) in a variety of complex genomes is correlated with gene expression, the presence of PolII at a gene does not necessarily indicate active expression. Various patterns of PolII binding have been described genome wide; however, whether or not PolII binds at transcriptionally inactive sites remains uncertain. The two X chromosomes in female cells in mammals present an opportunity to examine each of the two alleles of a given locus in both active and inactive states, depending on which X chromosome is silenced by X chromosome inactivation. Here, we investigated PolII occupancy and expression of the associated genes across the active (Xa) and inactive (Xi) X chromosomes in human female cells to elucidate the relationship of gene expression and PolII binding. We find that, while PolII in the pseudoautosomal region occupies both chromosomes at similar levels, it is significantly biased toward the Xa throughout the rest of the chromosome. The general paucity of PolII on the Xi notwithstanding, detectable (albeit significantly reduced) binding can be observed, especially on the evolutionarily younger short arm of the X. PolII levels at genes that escape inactivation correlate with the levels of their expression; however, additional PolII sites can be found at apparently silenced regions, suggesting the possibility of a subset of genes on the Xi that are poised for expression. Consistent with this hypothesis, we show that a high proportion of genes associated with PolII-accessible sites, while silenced in GM12878, are expressed in other female cell lines.
  • Kuzla, C., & Ernestus, M. (2011). Prosodic conditioning of phonetic detail in German plosives. Journal of Phonetics, 39, 143-155. doi:10.1016/j.wocn.2011.01.001.

    Abstract

    This study investigates the prosodic conditioning of phonetic details which are candidate cues to phonological contrasts. German /b, d, g, p, t, k/ were examined in three prosodic positions. Lenis plosives /b, d, g/ were produced with less glottal vibration at larger prosodic boundaries, whereas their VOT showed no effect of prosody. VOT of fortis plosives /p, t, k/ decreased at larger boundaries, as did their burst intensity maximum. Vowels (when measured from consonantal release) following fortis plosives and lenis velars were shorter after larger boundaries. Closure duration, which did not contribute to the fortis/lenis contrast, was heavily affected by prosody. These results support neither of the hitherto proposed accounts of prosodic strengthening (Uniform Strengthening and Feature Enhancement). We propose a different account, stating that the phonological identity of speech sounds remains stable not only within, but also across prosodic positions (contrast-over-prosody hypothesis). Domain-initial strengthening hardly diminishes the contrast between prosodically weak fortis and strong lenis plosives.
  • Laaksonen, H., Kujala, J., Hultén, A., Liljeström, M., & Salmelin, R. (2011). MEG evoked responses and rhythmic activity provide spatiotemporally complementary measures of neural activity in language production. NeuroImage, 60, 29-36.

    Abstract

    Phase-locked evoked responses and event-related modulations of spontaneous rhythmic activity are the two main approaches used to quantify stimulus- or task-related changes in electrophysiological measures. The relationship between the two has been widely theorized upon, but empirical research has been limited to the primary visual and sensorimotor cortex. However, both evoked responses and rhythms have been used as markers of neural activity in paradigms ranging from simple sensory to complex cognitive tasks. While some spatial agreement between the two phenomena has been observed, typically only one of the measures has been used in any given study, thus disallowing a direct evaluation of their exact spatiotemporal relationship. In this study, we sought to systematically clarify the connection between evoked responses and rhythmic activity. Using both measures, we identified the spatiotemporal patterns of task effects in three magnetoencephalography (MEG) data sets, all variants of a picture naming task. Evoked responses and rhythmic modulation yielded largely separate networks, with spatial overlap mainly in the sensorimotor and primary visual areas. Moreover, in the cortical regions that were identified with both measures, the experimental effects they conveyed differed in terms of timing and function. Our results suggest that the two phenomena are largely detached and that both measures are needed for an accurate portrayal of brain activity.
  • Lacan, M., Keyser, C., Ricaut, F.-X., Brucato, N., Duranthon, F., Guilaine, J., Crubézy, E., & Ludes, B. (2011). Ancient DNA reveals male diffusion through the Neolithic Mediterranean route. Proceedings of the National Academy of Sciences of the United States of America, 108, 9788-9791. doi:10.1073/pnas.1100723108.

    Abstract

    The Neolithic is a key period in the history of the European settlement. Although archaeological and present-day genetic data suggest several hypotheses regarding the human migration patterns at this period, validation of these hypotheses with the use of ancient genetic data has been limited. In this context, we studied DNA extracted from 53 individuals buried in a necropolis used by a French local community 5,000 y ago. The relatively good DNA preservation of the samples allowed us to obtain autosomal, Y-chromosomal, and/or mtDNA data for 29 of the 53 samples studied. From these datasets, we established close parental relationships within the necropolis and determined maternal and paternal lineages as well as the absence of an allele associated with lactase persistence, probably carried by Neolithic cultures of central Europe. Our study provides an integrative view of the genetic past in southern France at the end of the Neolithic period. Furthermore, the Y-haplotype lineages characterized and the study of their current distribution in European populations confirm a greater influence of the Mediterranean than the Central European route in the peopling of southern Europe during the Neolithic transition.
  • Lacan, M., Keyser, C., Ricaut, F.-X., Brucato, N., Tarrús, J., Bosch, A., Guilaine, J., Crubézy, E., & Ludes, B. (2011). Ancient DNA suggests the leading role played by men in the Neolithic dissemination. Proceedings of the National Academy of Sciences of the United States of America, 108, 18255-18259. doi:10.1073/pnas.1113061108.

    Abstract

    The impact of the Neolithic dispersal on the western European populations is subject to continuing debate. To trace and date genetic lineages potentially brought during this transition and so understand the origin of the gene pool of current populations, we studied DNA extracted from human remains excavated in a Spanish funeral cave dating from the beginning of the fifth millennium B.C. Thanks to a “multimarkers” approach based on the analysis of mitochondrial and nuclear DNA (autosomes and Y-chromosome), we obtained information on the early Neolithic funeral practices and on the biogeographical origin of the inhumed individuals. No close kinship was detected. Maternal haplogroups found are consistent with pre-Neolithic settlement, whereas the Y-chromosomal analyses permitted confirmation of the existence in Spain approximately 7,000 y ago of two haplogroups previously associated with the Neolithic transition: G2a and E1b1b1a1b. These results are highly consistent with those previously found in French Late Neolithic individuals, indicating a surprising temporal genetic homogeneity in these groups. The high frequency of G2a in Neolithic samples in western Europe could suggest, furthermore, that the role of men during Neolithic dispersal could be greater than currently estimated.

  • Lai, J., & Poletiek, F. H. (2011). The impact of adjacent-dependencies and staged-input on the learnability of center-embedded hierarchical structures. Cognition, 118(2), 265-273. doi:10.1016/j.cognition.2010.11.011.

    Abstract

    A theoretical debate in artificial grammar learning (AGL) concerns the learnability of hierarchical structures. Recent studies using an AnBn grammar draw conflicting conclusions (Bahlmann & Friederici, 2006; De Vries et al., 2008). We argue that two conditions crucially affect learning AnBn structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and a staged input. In two AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results might help us understand how natural complex structures are learned from exemplars.
  • Lai, J., Chan, A., & Kidd, E. (2023). Relative clause comprehension in Cantonese-speaking children with and without developmental language disorder. PLoS One, 18: e0288021. doi:10.1371/journal.pone.0288021.

    Abstract

    Developmental Language Disorder (DLD), present in 2 out of every 30 children, primarily affects oral language abilities and development in the absence of associated biomedical conditions. We report the first experimental study that examines relative clause (RC) comprehension accuracy and processing (via looking preference) in Cantonese-speaking children with and without DLD, testing the predictions of competing domain-specific versus domain-general theoretical accounts. We compared children with DLD (N = 22) with age-matched typically-developing (AM-TD) children (N = 23) aged 6;6–9;7 and language-matched (and younger) TD children (YTD, N = 21) aged 4;7–7;6, using a referent selection task. Within-subject factors were RC type (subject-RCs (SRCs) versus object-RCs (ORCs)) and relativizer (classifier (CL) versus relative marker ge3 RCs). Accuracy measures and looking preference to the target were analyzed using generalized linear mixed effects models. Results indicated that Cantonese children with DLD scored significantly lower than their AM-TD peers in accuracy and processed RCs significantly more slowly than AM-TDs, but did not differ from the YTDs on either measure. Overall, while the results revealed evidence of an SRC advantage in the accuracy data, there was no indication of additional difficulty associated with ORCs in the eye-tracking data. All children showed a processing advantage for the frequent CL relativizer over the less frequent ge3 relativizer. These findings pose challenges to domain-specific representational deficit accounts of DLD, which explain the disorder primarily as a syntactic deficit, and are better explained by domain-general accounts that treat acquisition and processing as emergent properties of multiple converging linguistic and non-linguistic processes.

  • Leckband, D. E., Menon, S., Rosenberg, K., Graham, S. A., Taylor, M. E., & Drickamer, K. (2011). Geometry and adhesion of extracellular domains of DC-SIGNR neck length variants analyzed by force-distance measurements. Biochemistry, 50, 6125-6132. doi:10.1021/bi2003444.

    Abstract

    Force-distance measurements have been used to examine differences in the interaction of the dendritic cell glycan-binding receptor DC-SIGN and the closely related endothelial cell receptor DC-SIGNR (L-SIGN) with membranes bearing glycan ligands. The results demonstrate that upon binding to membrane-anchored ligand, DC-SIGNR undergoes a conformational change similar to that previously observed for DC-SIGN. The results also validate a model for the extracellular domain of DC-SIGNR derived from crystallographic studies. Force measurements were performed with DC-SIGNR variants that differ in the length of the neck that result from genetic polymorphisms, which encode different numbers of the 23-amino acid repeat sequences that constitute the neck. The findings are consistent with an elongated, relatively rigid structure of the neck repeat observed in crystals. In addition, differences in the lengths of DC-SIGN and DC-SIGNR extracellular domains with equivalent numbers of neck repeats support a model in which the different dispositions of the carbohydrate-recognition domains in DC-SIGN and DC-SIGNR result from variations in the sequences of the necks.
  • Lee, C., Jessop, A., Bidgood, A., Peter, M. S., Pine, J. M., Rowland, C. F., & Durrant, S. (2023). How executive functioning, sentence processing, and vocabulary are related at 3 years of age. Journal of Experimental Child Psychology, 233: 105693. doi:10.1016/j.jecp.2023.105693.

    Abstract

    There is a wealth of evidence demonstrating that executive function (EF) abilities are positively associated with language development during the preschool years, such that children with good executive functions also have larger vocabularies. However, why this is the case remains to be discovered. In this study, we focused on the hypothesis that sentence processing abilities mediate the association between EF skills and receptive vocabulary knowledge, in that the speed of language acquisition is at least partially dependent on a child’s processing ability, which is itself dependent on executive control. We tested this hypothesis in longitudinal data from a cohort of 3- and 4-year-old children at three age points (37, 43, and 49 months). We found evidence, consistent with previous research, for a significant association between three EF skills (cognitive flexibility, working memory [as measured by the Backward Digit Span], and inhibition) and receptive vocabulary knowledge across this age range. However, only one of the tested sentence processing abilities (the ability to maintain multiple possible referents in mind) significantly mediated this relationship and only for one of the tested EFs (inhibition). The results suggest that children who are better able to inhibit incorrect responses are also better able to maintain multiple possible referents in mind while a sentence unfolds, a sophisticated sentence processing ability that may facilitate vocabulary learning from complex input.

    Additional information

    table S1 code and data
  • Lehecka, T. (2023). Normative ratings for 111 Swedish nouns and corresponding picture stimuli. Nordic Journal of Linguistics, 46(1), 20-45. doi:10.1017/S0332586521000123.

    Abstract

    Normative ratings are a means to control for the effects of confounding variables in psycholinguistic experiments. This paper introduces a new dataset of normative ratings for Swedish encompassing 111 concrete nouns and the corresponding picture stimuli in the MultiPic database (Duñabeitia et al. 2017). The norms for name agreement, category typicality, age of acquisition and subjective frequency were collected using online surveys among native speakers of the Finland-Swedish variety of Swedish. The paper discusses the inter-correlations between these variables and compares them against available ratings for other languages. In doing so, the paper argues that ratings for age of acquisition and subjective frequency collected for other languages may be applied to psycholinguistic studies on Finland-Swedish, at least with respect to concrete and highly imageable nouns. In contrast, norms for name agreement should be collected from speakers of the same language variety as represented by the subjects in the actual experiments.
  • Lei, A., Willems, R. M., & Eekhof, L. S. (2023). Emotions, fast and slow: Processing of emotion words is affected by individual differences in need for affect and narrative absorption. Cognition and Emotion, 37(5), 997-1005. doi:10.1080/02699931.2023.2216445.

    Abstract

    Emotional words have consistently been shown to be processed differently than neutral words. However, few studies have examined individual variability in emotion word processing with longer, ecologically valid stimuli (beyond isolated words, sentences, or paragraphs). In the current study, we re-analysed eye-tracking data collected during story reading to reveal how individual differences in need for affect and narrative absorption impact the speed of emotion word reading. Word emotionality was indexed by affective-aesthetic potentials (AAP) calculated by a sentiment analysis tool. We found that individuals with higher levels of need for affect and narrative absorption read positive words more slowly. On the other hand, these individual differences did not influence the reading time of more negative words, suggesting that high need for affect and narrative absorption are characterised by a positivity bias only. In general, unlike most previous studies using more isolated emotion word stimuli, we observed a quadratic (U-shaped) effect of word emotionality on reading speed, such that both positive and negative words were processed more slowly than neutral words. Taken together, this study emphasises the importance of taking into account individual differences and task context when studying emotion word processing.
  • Lemaitre, H., Le Guen, Y., Tilot, A. K., Stein, J. L., Philippe, C., Mangin, J.-F., Fisher, S. E., & Frouin, V. (2023). Genetic variations within human gained enhancer elements affect human brain sulcal morphology. NeuroImage, 265: 119773. doi:10.1016/j.neuroimage.2022.119773.

    Abstract

    The expansion of the cerebral cortex is one of the most distinctive changes in the evolution of the human brain. Cortical expansion and related increases in cortical folding may have contributed to emergence of our capacities for high-order cognitive abilities. Molecular analysis of humans, archaic hominins, and non-human primates has allowed identification of chromosomal regions showing evolutionary changes at different points of our phylogenetic history. In this study, we assessed the contributions of genomic annotations spanning 30 million years to human sulcal morphology measured via MRI in more than 18,000 participants from the UK Biobank. We found that variation within brain-expressed human gained enhancers, regulatory genetic elements that emerged since our last common ancestor with Old World monkeys, explained more trait heritability than expected for the left and right calloso-marginal posterior fissures and the right central sulcus. Intriguingly, these are sulci that have been previously linked to the evolution of locomotion in primates and later on bipedalism in our hominin ancestors.

    Additional information

    tables
  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M. (1984). Geesteswetenschappelijke theorie als kompas voor de gangbare mening. In S. Dresden, & D. Van de Kaa (Eds.), Wetenschap ten goede en ten kwade (pp. 42-52). Amsterdam: North Holland.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) adduced any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, W. J. M. (1984). Some perceptual limitations on talking about space. In A. J. Van Doorn, W. A. Van de Grind, & J. J. Koenderink (Eds.), Limits in perception (pp. 323-358). Utrecht: VNU Science Press.
  • Levelt, W. J. M. (1984). Sprache und Raum. Texten und Schreiben, 20, 18-21.
  • Levelt, W. J. M., Schriefer, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levinson, S. C. (1991). Deixis. In W. Bright (Ed.), Oxford international encyclopedia of linguistics (pp. 343-344). Oxford University Press.
  • Levinson, S. C. (2011). Deixis [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 163-185). London: Routledge.

    Abstract

    Reproduced with permission of Blackwell Publishing from: Levinson, S. C. (2004) 'Deixis'. In: Horn, L.R. and Ward, G. (Eds.) The Handbook of Pragmatics. Oxford: Blackwell Publishing, pp. 100-121
  • Levinson, S. C. (2011). Foreword. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. ix-x). Amsterdam: John Benjamins.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C. (2011). Presumptive meanings [Reprint]. In D. Archer, & P. Grundy (Eds.), The pragmatics reader (pp. 86-98). London: Routledge.

    Abstract

    Reprinted with permission of The MIT Press from Levinson (2000) Presumptive meanings: The theory of generalized conversational implicature, pp. 112-118, 116-167, 170-173, 177-180. MIT Press
  • Levinson, S. C. (2011). Reciprocals in Yélî Dnye, the Papuan language of Rossel Island. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 177-194). Amsterdam: Benjamins.

    Abstract

    Yélî Dnye has two discernable dedicated constructions for reciprocal marking. The first and main construction uses a dedicated reciprocal pronoun numo, somewhat like English each other. We can recognise two subconstructions. First, the ‘numo-construction’, where the reciprocal pronoun is a patient of the verb, and where the invariant pronoun numo is obligatorily incorporated, triggering intransitivisation (e.g. A-NPs become absolutive). This subconstruction has complexities, for example in the punctual aspect only, the verb is inflected like a transitive, but with enclitics mismatching actual person/number. In the second variant or subconstruction, the ‘noko-construction’, the same reciprocal pronoun (sometimes case-marked as noko) occurs but now in oblique positions with either transitive or intransitive verbs. The reciprocal element here has some peculiar binding properties. Finally, the second independent construction is a dedicated periphrastic (or woni…woni) construction, glossing ‘the one did X to the other, and the other did X to the one’. It is one of the rare cross-serial dependencies that show that natural languages cannot be modelled by context-free phrase-structure grammars. Finally, the usage of these two distinct constructions is discussed.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (2011). Pojmowanie przestrzeni w różnych kulturach [Polish translation of Levinson, S. C. 1998. Studying spatial conceptualization across cultures]. Autoportret, 33, 16-23.

    Abstract

    Polish translation of Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Levinson, S. C. (2011). Three levels of meaning: Essays in honor of Sir John Lyons [Reprint]. In A. Kasher (Ed.), Pragmatics II. London: Routledge.

    Abstract

    Reprint from Stephen C. Levinson, ‘Three Levels of Meaning’, in Frank Palmer (ed.), Grammar and Meaning: Essays in Honor of Sir John Lyons (Cambridge University Press, 1995), pp. 90–115
  • Levinson, S. C., Greenhill, S. J., Gray, R. D., & Dunn, M. (2011). Universal typological dependencies should be detectable in the history of language families. Linguistic Typology, 15, 509-534. doi:10.1515/LITY.2011.034.

    Abstract

    1. Introduction We claim that making sense of the typological diversity of languages demands a historical/evolutionary approach. We are pleased that the target paper (Dunn et al. 2011a) has served to bring discussion of this claim into prominence, and are grateful that leading typologists have taken the time to respond (commentaries denoted by boldface). It is unfortunate though that a number of the commentaries in this issue of LT show significant misunderstandings of our paper. Donohue thinks we were out to show the stability of typological features, but that was not our target at all (although related methods can be used to do that: see, e.g., Greenhill et al. 2010a, Dediu 2011a). Plank seems to think we were arguing against universals of any type, but our target was in fact just the implicational universals of word order that have been the bread and butter of typology. He also seems to think we ignore diachrony, whereas in fact the method introduces diachrony centrally into typological reasoning, thereby potentially revolutionising typology (see Cysouw’s commentary). Levy & Daumé think we were testing for lineage-specificity, whereas that was in fact an outcome (the main finding) of our testing for correlated evolution. Dryer thinks we must account for the distribution of language types around the world, but that was not our aim: our aim was to test the causal connection between linguistic variables by taking the perspective of language evolution (diversification and change). Longobardi & Roberts seem to think we set out to extract family trees from syntactic features, but our goal was in fact to use trees based on lexical cognates and hang reconstructed syntactic states on each node of these trees, thereby reconstructing the processes of language change.
  • Levinson, S. C. (2011). Universals in pragmatics. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 654-657). New York: Cambridge University Press.

    Abstract

    Changing Prospects for Universals in Pragmatics
    The term PRAGMATICS has come to denote the study of general principles of language use. It is usually understood to contrast with SEMANTICS, the study of encoded meaning, and also, by some authors, to contrast with SOCIOLINGUISTICS and the ethnography of speaking, which are more concerned with local sociocultural practices. Given that pragmaticists come from disciplines as varied as philosophy, sociology, linguistics, communication studies, psychology, and anthropology, it is not surprising that definitions of pragmatics vary. Nevertheless, most authors agree on a list of topics that come under the rubric, including DEIXIS, PRESUPPOSITION, implicature (see CONVERSATIONAL IMPLICATURE), SPEECH-ACTS, and conversational organization (see CONVERSATIONAL ANALYSIS). Here, we can use this extensional definition as a starting point (Levinson 1988; Huang 2007).
  • Levinson, S. C. (2023). On cognitive artifacts. In R. Feldhay (Ed.), The evolution of knowledge: A scientific meeting in honor of Jürgen Renn (pp. 59-78). Berlin: Max Planck Institute for the History of Science.

    Abstract

    Wearing the hat of a cognitive anthropologist rather than an historian, I will try to amplify the ideas of Renn’s cited above. I argue that a particular subclass of material objects, namely “cognitive artifacts,” involves a close coupling of mind and artifact that acts like a brain prosthesis. Simple cognitive artifacts are external objects that act as aids to internal computation, and not all cultures have extended inventories of these. Cognitive artifacts in this sense (e.g., calculating or measuring devices) have clearly played a central role in the history of science. But the notion can be widened to take in less material externalizations of cognition, like writing and language itself. A critical question here is how and why this close coupling of internal computation and external device actually works, a rather neglected question to which I’ll suggest some answers.

    Additional information

    link to book
  • Levinson, S. C. (2023). Gesture, spatial cognition and the evolution of language. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210481. doi:10.1098/rstb.2021.0481.

    Abstract

    Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar.
  • Levshina, N., Namboodiripad, S., Allassonnière-Tang, M., Kramer, M., Talamo, L., Verkerk, A., Wilmoth, S., Garrido Rodriguez, G., Gupton, T. M., Kidd, E., Liu, Z., Naccarato, C., Nordlinger, R., Panova, A., & Stoynova, N. (2023). Why we need a gradient approach to word order. Linguistics, 61(4), 825-883. doi:10.1515/ling-2021-0098.

    Abstract

    This article argues for a gradient approach to word order, which treats word order preferences, both within and across languages, as a continuous variable. Word order variability should be regarded as a basic assumption, rather than as something exceptional. Although this approach follows naturally from the emergentist usage-based view of language, we argue that it can be beneficial for all frameworks and linguistic domains, including language acquisition, processing, typology, language contact, language evolution and change, and formal approaches. Gradient approaches have been very fruitful in some domains, such as language processing, but their potential is not fully realized yet. This may be due to practical reasons. We discuss the most pressing methodological challenges in corpus-based and experimental research of word order and propose some practical solutions.
  • Levshina, N. (2023). Word classes in corpus linguistics. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 833-850). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198852889.013.34.

    Abstract

    Word classes play a central role in corpus linguistics under the name of parts of speech (POS). Many popular corpora are provided with POS tags. This chapter gives examples of popular tagsets and discusses the methods of automatic tagging. It also considers bottom-up approaches to POS induction, which are particularly important for the ‘poverty of stimulus’ debate in language acquisition research. The choice of optimal POS tagging involves many difficult decisions, which are related to the level of granularity, redundancy at different levels of corpus annotation, cross-linguistic applicability, language-specific descriptive adequacy, and dealing with fuzzy boundaries between POS. The chapter also discusses the problem of flexible word classes and demonstrates how corpus data with POS tags and syntactic dependencies can be used to quantify the level of flexibility in a language.
  • Lewis, A. G., Schoffelen, J.-M., Bastiaansen, M., & Schriefers, H. (2023). Is beta in agreement with the relatives? Using relative clause sentences to investigate MEG beta power dynamics during sentence comprehension. Psychophysiology, 60(10): e14332. doi:10.1111/psyp.14332.

    Abstract

    There remains some debate about whether beta power effects observed during sentence comprehension reflect ongoing syntactic unification operations (beta-syntax hypothesis), or instead reflect maintenance or updating of the sentence-level representation (beta-maintenance hypothesis). In this study, we used magnetoencephalography to investigate beta power neural dynamics while participants read relative clause sentences that were initially ambiguous between a subject- or an object-relative reading. An additional condition included a grammatical violation at the disambiguation point in the relative clause sentences. The beta-maintenance hypothesis predicts a decrease in beta power at the disambiguation point for unexpected (and less preferred) object-relative clause sentences and grammatical violations, as both signal a need to update the sentence-level representation. While the beta-syntax hypothesis also predicts a beta power decrease for grammatical violations due to a disruption of syntactic unification operations, it instead predicts an increase in beta power for the object-relative clause condition because syntactic unification at the point of disambiguation becomes more demanding. We observed decreased beta power for both the agreement violation and object-relative clause conditions in typical left hemisphere language regions, which provides compelling support for the beta-maintenance hypothesis. Mid-frontal theta power effects were also present for grammatical violations and object-relative clause sentences, suggesting that violations and unexpected sentence interpretations are registered as conflicts by the brain's domain-general error detection system.

    Additional information

    data
  • Lindell, A. K., & Kidd, E. (2011). Why right-brain teaching is half-witted: A critique of the misapplication of neuroscience to education. Mind, Brain and Education, 5(3), 121-127. doi:10.1111/j.1751-228X.2011.01120.x.

    Abstract

    Educational tools claiming to use “right-brain techniques” are increasingly shaping school curricula. By implying a strong scientific basis, such approaches appeal to educators who rightly believe that knowledge of the brain should guide curriculum development. However, the notion of hemisphericity (idea that people are “left-brained” or “right-brained”) is a neuromyth that was debunked in the scientific literature 25 years ago. This article challenges the validity of “right-brain” teaching, highlighting the fact that neuroscientific research does not support its claims. Providing teachers with a basic understanding of neuroscience research as part of teacher training would enable more effective evaluation of brain-based claims and facilitate the adoption of tools validated by rigorous independent research rather than programs based on pseudoscience.
  • Lingwood, J., Lampropoulou, S., De Bezena, C., Billington, J., & Rowland, C. F. (2023). Children’s engagement and caregivers’ use of language-boosting strategies during shared book reading: A mixed methods approach. Journal of Child Language, 50(6), 1436-1458. doi:10.1017/S0305000922000290.

    Abstract

    For shared book reading to be effective for language development, the adult and child need to be highly engaged. The current paper adopted a mixed-methods approach to investigate caregivers’ language-boosting behaviours and children’s engagement during shared book reading. The results revealed there were more instances of joint attention and caregivers’ use of prompts during moments of higher engagement. However, instances of most language-boosting behaviours were similar across episodes of higher and lower engagement. Qualitative analysis assessing the link between children’s engagement and caregivers’ use of speech acts revealed that speech acts do seem to contribute to high engagement, in combination with other aspects of the interaction.
  • Liszkowski, U., & Tomasello, M. (2011). Individual differences in social, cognitive, and morphological aspects of infant pointing. Cognitive Development, 26, 16-29. doi:10.1016/j.cogdev.2010.10.001.

    Abstract

    Little is known about the origins of the pointing gesture. We sought to gain insight into its emergence by investigating individual differences in the pointing of 12-month-old infants in two ways. First, we looked at differences in the communicative and interactional uses of pointing and asked how different hand shapes relate to point frequency, accompanying vocalizations, and mothers’ pointing. Second, we looked at differences in social-cognitive skills of point comprehension and imitation and tested whether these were related to infants’ own pointing. Infants’ and mothers’ spontaneous pointing correlated with one another, as did infants’ point production and comprehension. In particular, infants’ index-finger pointing had a profile different from simple whole-hand pointing. It was more frequent, it was more often accompanied by vocalizations, and it correlated more strongly with comprehension of pointing (especially to occluded referents). We conclude that whole-hand and index-finger pointing differ qualitatively and suggest that it is index-finger pointing that first embodies infants’ understanding of communicative intentions.
  • Liszkowski, U. (2011). Three lines in the emergence of prelinguistic communication and social cognition. Journal of cognitive education and psychology, 10(1), 32-43. doi:10.1891/1945-8959.10.1.32.

    Abstract

    Sociocultural theories of development posit that higher cognitive functions emerge through socially mediated processes, in particular through language. However, theories of human communication posit that language itself is based on higher social cognitive skills and cooperative motivations. Prelinguistic communication is a test case to this puzzle. In the current review, I first present recent and new findings of a research program on prelinguistic infants’ communication skills. This research provides empirical evidence for a rich social cognitive and motivational basis of human communication before language. Next, I discuss the emergence of these foundational skills. By considering all three lines of development, and by drawing on new findings from phylogenetic and cross-cultural comparisons, this article discusses the possibility that the cognitive foundations of prelinguistic communication are, in turn, mediated by social interactional input and shared experiences.
  • Lumaca, M., Bonetti, L., Brattico, E., Baggio, G., Ravignani, A., & Vuust, P. (2023). High-fidelity transmission of auditory symbolic material is associated with reduced right–left neuroanatomical asymmetry between primary auditory regions. Cerebral Cortex, 33(11), 6902-6919. doi:10.1093/cercor/bhad009.

    Abstract

    The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left–right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer’s surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl’s sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
  • Mace, R., & Jordan, F. (2011). Macro-evolutionary studies of cultural diversity: A review of empirical studies of cultural transmission and cultural adaptation. Philosophical Transactions of the Royal Society of London B, Biological Sciences, 366, 402-411. doi:10.1098/rstb.2010.0238.

    Abstract

    A growing body of theoretical and empirical research has examined cultural transmission and adaptive cultural behaviour at the individual, within-group level. However, relatively few studies have tried to examine proximate transmission or test ultimate adaptive hypotheses about behavioural or cultural diversity at a between-societies macro-level. In both the history of anthropology and in present-day work, a common approach to examining adaptive behaviour at the macro-level has been through correlating various cultural traits with features of ecology. We discuss some difficulties with simple ecological associations, and then review cultural phylogenetic studies that have attempted to go beyond correlations to understand the underlying cultural evolutionary processes. We conclude with an example of a phylogenetically controlled approach to understanding proximate transmission pathways in Austronesian cultural diversity.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The semantics of reciprocal constructions across languages: An extensional approach. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 29-60). Amsterdam: Benjamins.

    Abstract

    How similar are reciprocal constructions in the semantic parameters they encode? We investigate this question by using an extensional approach, which examines similarity of meaning by examining how constructions are applied over a set of 64 videoclips depicting reciprocal events (Evans et al. 2004). We apply statistical modelling to descriptions from speakers of 20 languages elicited using the videoclips. We show that there are substantial differences in meaning between constructions of different languages.

  • Majid, A., & Levinson, S. C. (2011). The senses in language and culture. The Senses & Society, 6(1), 5-18. doi:10.2752/174589311X12893982233551.

    Abstract

    Multiple social science disciplines have converged on the senses in recent years, where formerly the domain of perception was the preserve of psychology. Linguistics, or Language, however, seems to have an ambivalent role in this undertaking. On the one hand, Language with a capital L (language as a general human capacity) is part of the problem. It was the prior focus on language (text) that led to the disregard of the senses. On the other hand, it is language (with a small "l", a particular tongue) that offers key insights into how other peoples conceptualize the senses. In this article, we argue that a systematic cross-cultural approach can reveal fundamental truths about the precise connections between language and the senses. Recurring failures to adequately describe the sensorium across specific languages reveal the intrinsic limits of Language. But the converse does not hold. Failures of expressibility in one language need not hold any implications for the Language faculty per se, and indeed can enlighten us about the possible experiential worlds available to human experience.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The grammar of exchange: A comparative study of reciprocal constructions across languages. Frontiers in Psychology, 2: 34. doi:10.3389/fpsyg.2011.00034.

    Abstract

    Cultures are built on social exchange. Most languages have dedicated grammatical machinery for expressing this. To demonstrate that statistical methods can also be applied to grammatical meaning, we here ask whether the underlying meanings of these grammatical constructions are based on shared common concepts. To explore this, we designed video stimuli of reciprocated actions (e.g. ‘giving to each other’) and symmetrical states (e.g. ‘sitting next to each other’), and with the help of a team of linguists collected responses from 20 languages around the world. Statistical analyses revealed that many languages do, in fact, share a common conceptual core for reciprocal meanings but that this is not a universally expressed concept. The recurrent pattern of conceptual packaging found across languages is compatible with the view that there is a shared non-linguistic understanding of reciprocation. But, nevertheless, there are considerable differences between languages in the exact extensional patterns, highlighting that even in the domain of grammar semantics is highly language-specific.
  • Mak, M., Faber, M., & Willems, R. M. (2023). Different kinds of simulation during literary reading: Insights from a combined fMRI and eye-tracking study. Cortex, 162, 115-135. doi:10.1016/j.cortex.2023.01.014.

    Abstract

    Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current study, we investigated the existence of a common neural locus for these different kinds of simulation. We additionally investigated whether individual differences during reading, as indexed by the eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

    Additional information

    figures localizer tasks appendix C1
  • Mamus, E., Speed, L. J., Rissman, L., Majid, A., & Özyürek, A. (2023). Lack of visual experience affects multimodal language production: Evidence from congenitally blind and sighted people. Cognitive Science, 47(1): e13228. doi:10.1111/cogs.13228.

    Abstract

    The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for claims that language processes are deeply rooted in our sensory experiences.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2023). The effect of input sensory modality on the multimodal encoding of motion events. Language, Cognition and Neuroscience, 38(5), 711-723. doi:10.1080/23273798.2022.2141282.

    Abstract

    Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and its subsequent multimodal language production—an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting path and manner of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more path descriptions and fewer manner descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.

    Additional information

    Supplemental material
  • Manhardt, F., Brouwer, S., Van Wijk, E., & Özyürek, A. (2023). Word order preference in sign influences speech in hearing bimodal bilinguals but not vice versa: Evidence from behavior and eye-gaze. Bilingualism: Language and Cognition, 26(1), 48-61. doi:10.1017/S1366728922000311.

    Abstract

    We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and their consequences on visual attention during message preparation using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals expressed more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence of word order preference and visual attention in hearing bimodal bilinguals appears to be one-directional, modulated by modality-driven differences.
  • Marcus, G., & Fisher, S. E. (2011). Genes and language. In P. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 341-344). New York: Cambridge University Press.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (2011). Landscape in language: An introduction. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. 1-24). Amsterdam: John Benjamins.
  • Martin, A. E., & McElree, B. (2011). Direct-access retrieval during sentence comprehension: Evidence from Sluicing. Journal of Memory and Language, 64(4), 327-343. doi:10.1016/j.jml.2010.12.006.

    Abstract

    Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin & McElree, 2008, 2009). Sluicing, or inflectional-phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb–phrase ellipsis. We present speed–accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables.
  • Maskalenka, K., Alagöz, G., Krueger, F., Wright, J., Rostovskaya, M., Nakhuda, A., Bendall, A., Krueger, C., Walker, S., Scally, A., & Rugg-Gunn, P. J. (2023). NANOGP1, a tandem duplicate of NANOG, exhibits partial functional conservation in human naïve pluripotent stem cells. Development, 150(2): dev201155. doi:10.1242/dev.201155.

    Abstract

    Gene duplication events can drive evolution by providing genetic material for new gene functions, and they create opportunities for diverse developmental strategies to emerge between species. To study the contribution of duplicated genes to human early development, we examined the evolution and function of NANOGP1, a tandem duplicate of the transcription factor NANOG. We found that NANOGP1 and NANOG have overlapping but distinct expression profiles, with high NANOGP1 expression restricted to early epiblast cells and naïve-state pluripotent stem cells. Sequence analysis and epitope-tagging revealed that NANOGP1 is protein coding with an intact homeobox domain. The duplication that created NANOGP1 occurred earlier in primate evolution than previously thought and has been retained only in great apes, whereas Old World monkeys have disabled the gene in different ways, including homeodomain point mutations. NANOGP1 is a strong inducer of naïve pluripotency; however, unlike NANOG, it is not required to maintain the undifferentiated status of human naïve pluripotent cells. By retaining expression, sequence and partial functional conservation with its ancestral copy, NANOGP1 exemplifies how gene duplication and subfunctionalisation can contribute to transcription factor activity in human pluripotency and development.
  • Matthews, L. J., Tehrani, J. J., Jordan, F., Collard, M., & Nunn, C. (2011). Testing for divergent transmission histories among cultural characters: A study using Bayesian phylogenetic methods and Iranian tribal textile data. Plos One, 6(4), e14810. doi:10.1371/journal.pone.0014810.

    Abstract

    Background: Archaeologists and anthropologists have long recognized that different cultural complexes may have distinct descent histories, but they have lacked analytical techniques capable of easily identifying such incongruence. Here, we show how Bayesian phylogenetic analysis can be used to identify incongruent cultural histories. We employ the approach to investigate Iranian tribal textile traditions. Methods: We used Bayes factor comparisons in a phylogenetic framework to test two models of cultural evolution: the hierarchically integrated system hypothesis and the multiple coherent units hypothesis. In the hierarchically integrated system hypothesis, a core tradition of characters evolves through descent with modification and characters peripheral to the core are exchanged among contemporaneous populations. In the multiple coherent units hypothesis, a core tradition does not exist. Rather, there are several cultural units consisting of sets of characters that have different histories of descent. Results: For the Iranian textiles, the Bayesian phylogenetic analyses supported the multiple coherent units hypothesis over the hierarchically integrated system hypothesis. Our analyses suggest that pile-weave designs represent a distinct cultural unit that has a different phylogenetic history compared to other textile characters. Conclusions: The results from the Iranian textiles are consistent with the available ethnographic evidence, which suggests that the commercial rug market has influenced pile-rug designs but not the techniques or designs incorporated in the other textiles produced by the tribes. We anticipate that Bayesian phylogenetic tests for inferring cultural units will be of great value for researchers interested in studying the evolution of cultural traits including language, behavior, and material culture.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • McConnell, K. (2023). Individual Differences in Holistic and Compositional Language Processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
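    The backward transition probability used in this study is a simple corpus statistic. A minimal sketch, assuming BTP is computed as count(w1 w2) / count(w2), i.e., how predictable the first word is given the second; the function name and toy corpus are illustrative, not taken from the study's materials:

    ```python
    from collections import Counter

    def backward_transition_probability(tokens, bigram):
        """BTP of (w1, w2): count of the bigram divided by the count of w2."""
        w1, w2 = bigram
        bigram_counts = Counter(zip(tokens, tokens[1:]))
        unigram_counts = Counter(tokens)
        if unigram_counts[w2] == 0:
            return 0.0
        return bigram_counts[(w1, w2)] / unigram_counts[w2]

    # Toy corpus: "silence" occurs three times, twice preceded by "absolute".
    toy = "absolute silence fell and absolute silence reigned in silence".split()
    print(backward_transition_probability(toy, ("absolute", "silence")))  # 2/3
    ```

    A high BTP for a bigram like absolute silence means the modifier is strongly predicted by the noun, which is the sense in which the bigram is prominent "as a whole" relative to its parts.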
  • McGettigan, C., Warren, J. E., Eisner, F., Marshall, C. R., Shanmugalingam, P., & Scott, S. K. (2011). Neural correlates of sublexical processing in phonological working memory. Journal of Cognitive Neuroscience, 23, 961-977. doi:10.1162/jocn.2010.21491.

    Abstract

    This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with adding consonant clusters. An individual-differences analysis revealed a significant positive relationship between activity in the angular gyrus and the hippocampus, and accuracy on pseudoword repetition. As models of pWM stipulate that its neural correlates should be activated during both perception and production/rehearsal [Buchsbaum, B. R., & D'Esposito, M. The search for the phonological store: From loop to convolution. Journal of Cognitive Neuroscience, 20, 762-778, 2008; Jacquemot, C., & Scott, S. K. What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10, 480-486, 2006; Baddeley, A. D., & Hitch, G. Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press, 1974], we further assessed the effects of the two factors in a separate passive listening experiment (Experiment 2). In this experiment, the effect of the number of syllables was concentrated in posterior-medial regions of the supratemporal plane bilaterally, although there was no evidence of a significant response to added clusters. Taken together, the results identify the planum temporale as a key region in pWM; within this region, representations are likely to take the form of auditory or audiomotor ‘templates’ or ‘chunks’ at the level of the syllable [Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. From phonemes to articulatory codes: an fMRI study of the role of Broca's area in speech production. Cerebral Cortex, 19, 2156-2165, 2009; Warren, J. E., Wise, R. J. S., & Warren, J. D. Sounds do-able: auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences, 28, 636-643, 2005; Griffiths, T. D., & Warren, J. D. The planum temporale as a computational hub. Trends in Neurosciences, 25, 348-353, 2002], whereas more lateral structures on the STG may deal with phonetic analysis of the auditory input [Hickok, G. The functional neuroanatomy of language. Physics of Life Reviews, 6, 121-143, 2009].
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

    Additional information

    supplementary materials
  • Menenti, L., Gierhan, S., Segaert, K., & Hagoort, P. (2011). Shared language: Overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22, 1173-1182. doi:10.1177/0956797611418347.

    Abstract

    Whether the brain’s speech-production system is also involved in speech comprehension is a topic of much debate. Research has focused on whether motor areas are involved in listening, but overlap between speaking and listening might occur not only at primary sensory and motor levels, but also at linguistic levels (where semantic, lexical, and syntactic processes occur). Using functional MRI adaptation during speech comprehension and production, we found that the brain areas involved in semantic, lexical, and syntactic processing are mostly the same for speaking and for listening. Effects of primary processing load (indicative of sensory and motor processes) overlapped in auditory cortex and left inferior frontal cortex, but not in motor cortex, where processing load affected activity only in speaking. These results indicate that the linguistic parts of the language system are used for both speaking and listening, but that the motor system does not seem to provide a crucial contribution to listening.
  • Mester, J. L., Tilot, A. K., Rybicki, L. A., Frazier, T. W., & Eng, C. (2011). Analysis of prevalence and degree of macrocephaly in patients with germline PTEN mutations and of brain weight in Pten knock-in murine model. European Journal of Human Genetics, 19(7), 763-768. doi:10.1038/ejhg.2011.20.

    Abstract

    PTEN Hamartoma Tumour Syndrome (PHTS) includes Cowden syndrome (CS), Bannayan-Riley-Ruvalcaba syndrome (BRRS), and other conditions resulting from germline mutation of the PTEN tumour suppressor gene. Although macrocephaly, presumably due to megencephaly, is found in both CS and BRRS, the prevalence and degree have not been formally assessed in PHTS. We evaluated head size in a prospective nested series of 181 patients found to have pathogenic germline PTEN mutations. Clinical data including occipital-frontal circumference (OFC) measurement were requested for all participants. Macrocephaly was present in 94% of 161 evaluable PHTS individuals. In patients ≤18 years, mean OFC was +4.89 standard deviations (SD) above the population mean with no difference between genders (P=0.7). Among patients >18 years, average OFC was 60.0 cm in females and 62.8 cm in males (P<0.0001). To systematically determine whether macrocephaly was due to megencephaly, we examined PtenM3M4 missense mutant mice generated and maintained on mixed backgrounds. Mice were killed at various ages, brains were dissected out and weighed. Average brain weight for PtenM3M4 homozygous mice (N=15) was 1.02 g compared with 0.57 g for heterozygous mice (N=29) and 0.49 g for wild-type littermates (N=24) (P<0.0001). Macrocephaly, secondary to megencephaly, is an important component of PHTS and more prevalent than previously appreciated. Patients with PHTS have increased risks for breast and thyroid cancers, and early diagnosis is key to initiating timely screening to reduce patient morbidity and mortality. Clinicians should consider germline PTEN testing at an early point in the diagnostic work-up for patients with extreme macrocephaly.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30(1), 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Minagawa-Kawai, Y., Cristia, A., Vendelin, I., Cabrol, D., & Dupoux, E. (2011). Assessing signal-driven mechanisms in neonates: Brain responses to temporally and spectrally different sounds. Frontiers in Psychology, 2, 135. doi:10.3389/fpsyg.2011.00135.

    Abstract

    Past studies have found that, in adults, the acoustic properties of sound signals (such as fast versus slow temporal features) differentially activate the left and right hemispheres, and some have hypothesized that left-lateralization for speech processing may follow from left-lateralization to rapidly changing signals. Here, we tested whether newborns’ brains show some evidence of signal-specific lateralization responses using near-infrared spectroscopy (NIRS) and auditory stimuli that elicit lateralized responses in adults, composed of segments that vary in duration and spectral diversity. We found significantly greater bilateral responses of oxygenated hemoglobin (oxy-Hb) in the temporal areas for stimuli with a minimum segment duration of 21 ms than for stimuli with a minimum segment duration of 667 ms. However, we found no evidence for hemispheric asymmetries dependent on the stimulus characteristics. We hypothesize that acoustic-based functional brain asymmetries may develop throughout early infancy, and discuss their possible relationship with brain asymmetries for language.
  • Minagawa-Kawai, Y., Cristia, A., & Dupoux, E. (2011). Cerebral lateralization and early speech acquisition: A developmental scenario. Developmental Cognitive Neuroscience, 1, 217-232. doi:10.1016/j.dcn.2011.03.005.

    Abstract

    During the past ten years, research using Near-infrared Spectroscopy (NIRS) to study the developing brain has provided groundbreaking evidence of brain functions in infants. This paper presents a theoretically oriented review of this wealth of evidence, summarizing recent NIRS data on language processing, without neglecting other neuroimaging or behavioral studies in infancy and adulthood. We review three competing classes of hypotheses (i.e. signal-driven, domain-driven, and learning biases hypotheses) regarding the causes of hemispheric specialization for speech processing. We assess the fit between each of these hypotheses and neuroimaging evidence in speech perception and show that none of the three hypotheses can account for the entire set of observations on its own. However, we argue that they provide a good fit when combined within a developmental perspective. According to our proposed scenario, lateralization for language emerges out of the interaction between pre-existing left–right biases in generic auditory processing (signal-driven hypothesis), and a left-hemisphere predominance of particular learning mechanisms (learning-biases hypothesis). As a result of this completed developmental process, the native language is represented predominantly in the left hemisphere. The integrated scenario enables us to link infant and adult data, and points to many empirical avenues that need to be explored more systematically.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. A robot expressing congruent model-driven facial expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Mitterer, H., Chen, Y., & Zhou, X. (2011). Phonological abstraction in processing lexical-tone variation: Evidence from a learning paradigm. Cognitive Science, 35, 184-197. doi:10.1111/j.1551-6709.2010.01140.x.

    Abstract

    There is a growing consensus that the mental lexicon contains both abstract and word-specific acoustic information. To investigate their relative importance for word recognition, we tested to what extent perceptual learning is word specific or generalizable to other words. In an exposure phase, participants were divided into two groups; each group was semantically biased to interpret an ambiguous Mandarin tone contour as either tone1 or tone2. In a subsequent test phase, the perception of ambiguous contours was dependent on the exposure phase: Participants who heard ambiguous contours as tone1 during exposure were more likely to perceive ambiguous contours as tone1 than participants who heard ambiguous contours as tone2 during exposure. This learning effect was only slightly larger for previously encountered than for not previously encountered words. The results speak for an architecture with prelexical analysis of phonological categories to achieve both lexical access and episodic storage of exemplars.
  • Mitterer, H. (2011). Recognizing reduced forms: Different processing mechanisms for similar reductions. Journal of Phonetics, 39, 298-303. doi:10.1016/j.wocn.2010.11.009.

    Abstract

    Recognizing phonetically reduced forms is a huge challenge for spoken-word recognition. Phonetic reductions not only occur often, but also come in a variety of forms. The paper investigates how two similar forms of reductions – /t/-reduction and nasal place assimilation in Dutch – can eventually be recognized, focusing on the role of following phonological context. Previous research indicated that listeners take the following phonological context into account when compensating for /t/-reduction and nasal place assimilation. The current paper shows that these context effects arise in early perceptual processes for the perception of assimilated forms, but at a later stage of processing for the perception of /t/-reduced forms. This shows first that the recognition of apparently similarly reduced words may rely on different processing mechanisms and, second, that searching for dissociations over tasks is a promising research strategy to investigate how reduced forms are recognized.
  • Mitterer, H. (2011). The mental lexicon is fully specified: Evidence from eye-tracking. Journal of Experimental Psychology: Human Perception and Performance, 37(2), 496-513. doi:10.1037/a0020989.

    Abstract

    Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input ("pin") activates lexical entries with underspecified coronal stops ('tin'), but lexical entries with specified labial stops ('pin') are not activated by mismatching input ("tin"). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs ("tin"- "pin") and in Experiments 2 and 3 with words with an onset overlap ("peacock" - "teacake"). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as predicted by an optimal perception account.
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning, both to segment the language and to generalise its structure. At 54 months, children were tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that generalisation of the artificial language at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

    Additional information

    supplementary data
  • Mooijman, S., Schoonen, R., Ruiter, M. B., & Roelofs, A. (2023). Voluntary and cued language switching in late bilingual speakers. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728923000755.

    Abstract

    Previous research examining the factors that determine language choice and voluntary switching mainly involved early bilinguals. Here, using picture naming, we investigated language choice and switching in late Dutch–English bilinguals. We found that naming was overall slower in cued than in voluntary switching, but switch costs occurred in both types of switching. The magnitude of switch costs differed depending on the task and language, and was moderated by L2 proficiency. Self-rated rather than objectively assessed proficiency predicted voluntary switching and ease of lexical access was associated with language choice. Between-language and within-language switch costs were not correlated. These results highlight self-rated proficiency as a reliable predictor of voluntary switching, with language modulating switch costs. As in early bilinguals, ease of lexical access was related to word-level language choice of late bilinguals.
  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause of severe speech disorder, childhood apraxia of speech (CAS), yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 to 62 years). Health and development (cognitive, motor, and social domains) were examined, including speech and language outcomes, with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entrypoint for examining the neurobiological bases of speech disorder.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Mulder, K., & Hulstijn, J. H. (2011). Linguistic skills of adult native speakers, as a function of age and level of education. Applied Linguistics, 32, 475-494. doi:10.1093/applin/amr016.

    Abstract

    This study assessed, in a sample of 98 adult native speakers of Dutch, how their lexical skills and their speaking proficiency varied as a function of their age and level of education and profession (EP). Participants, categorized in terms of their age (18–35, 36–50, and 51–76 years old) and the level of their EP (low versus high), were tested on their lexical knowledge, lexical fluency, and lexical memory, and they performed four speaking tasks, differing in genre and formality. Speaking performance was rated in terms of communicative adequacy and in terms of number of words, number of T-units, words per T-unit, content words per T-unit, hesitations per T-unit, and grammatical errors per T-unit. Increasing age affected lexical knowledge positively but lexical fluency and memory negatively. High EP positively affected lexical knowledge and memory, but EP did not affect lexical fluency. Communicative adequacy of the responses in the speaking tasks was positively affected by high EP but was not affected by age. It is concluded that, given the large variability in native speakers’ language knowledge and skills, studies investigating the question of whether second-language learners can reach native levels of proficiency should take native-speaker variability into account.

    Additional information

    Mulder_2011_Supplementary Data.doc
  • Munafò, M. R., Freathy, R. M., Ring, S. M., St Pourcain, B., & Smith, G. D. (2011). Association of COMT Val108/158Met Genotype and Cigarette Smoking in Pregnant Women. Nicotine & Tobacco Research, 13(2), 55-63. doi:10.1093/ntr/ntq209.

    Abstract

    Introduction
    Smoking behaviors, including heaviness of smoking and smoking cessation, are known to be under a degree of genetic influence. The enzyme catechol O-methyltransferase (COMT) is of relevance in studies of smoking behavior and smoking cessation due to its presence in dopaminergic brain regions. While the COMT gene is therefore one of the more promising candidate genes for smoking behavior, some inconsistencies have begun to emerge.

    Methods
    We explored whether the rs4680 A (Met) allele of the COMT gene predicts increased heaviness of smoking and reduced likelihood of smoking cessation in a large population-based cohort of pregnant women. We further conducted a meta-analysis of published data from community samples investigating the association of this polymorphism with heaviness of smoking and smoking status.

    Results
    In our primary sample, the A (Met) allele was associated with increased heaviness of smoking before pregnancy but not with the odds of continuing to smoke in pregnancy, either in the first trimester or in the third trimester. Meta-analysis also indicated modest evidence of association of the A (Met) allele with increased heaviness of smoking but not with persistent smoking.

    Conclusions
    Our data suggest a weak association between COMT genotype and heaviness of smoking, which is supported by our meta-analysis. However, it should be noted that the strength of evidence for this association was modest. Neither our primary data nor our meta-analysis support an association between COMT genotype and smoking cessation. Therefore, COMT remains a plausible candidate gene for smoking behavior phenotypes, in particular heaviness of smoking.
  • Narasimhan, B., & Gullberg, M. (2011). The role of input frequency and semantic transparency in the acquisition of verb meaning: Evidence from placement verbs in Tamil and Dutch. Journal of Child Language, 38, 504-532. doi:10.1017/S0305000910000164.

    Abstract

    We investigate how Tamil- and Dutch-speaking adults and 4- to 5-year-old children use caused posture verbs (‘lay/stand a bottle on a table’) to label placement events in which objects are oriented vertically or horizontally. Tamil caused posture verbs consist of morphemes that individually label the causal and result subevents (nikka veyyii ‘make stand’; paDka veyyii ‘make lie’), occurring in situational and discourse contexts where object orientation is at issue. Dutch caused posture verbs are less semantically transparent: they are monomorphemic (zetten ‘set/stand’; leggen ‘lay’), often occurring in contexts where factors other than object orientation determine use. Caused posture verbs occur rarely in corpora of Tamil input, whereas in Dutch input, they are used frequently. Elicited production data reveal that Tamil four-year-olds use infrequent placement verbs appropriately whereas Dutch children use high-frequency placement verbs inappropriately even at age five. Semantic transparency exerts a stronger influence than input frequency in constraining children’s verb meaning acquisition.
  • Nas, G., Kempen, G., & Hudson, P. (1984). De rol van spelling en klank bij woordherkenning tijdens het lezen. In A. Thomassen, L. Noordman, & P. Elling (Eds.), Het leesproces. Lisse: Swets & Zeitlinger.
  • Noble, C. H., Rowland, C. F., & Pine, J. M. (2011). Comprehension of argument structure and semantic roles: Evidence from English-learning children and the forced-choice pointing paradigm. Cognitive Science, 35(5), 963-982. doi:10.1111/j.1551-6709.2011.01175.x.

    Abstract

    Research using the intermodal preferential looking paradigm (IPLP) has consistently shown that English-learning children aged 2 can associate transitive argument structure with causal events. However, studies using the same methodology investigating 2-year-old children’s knowledge of the conjoined agent intransitive and semantic role assignment have reported inconsistent findings. The aim of the present study was to establish at what age English-learning children have verb-general knowledge of both transitive and intransitive argument structure using a new method: the forced-choice pointing paradigm. The results suggest that young 2-year-olds can associate transitive structures with causal (or externally caused) events and can use transitive structure to assign agent and patient roles correctly. However, the children were unable to associate the conjoined agent intransitive with noncausal events until aged 3;4. The results confirm the pattern from previous IPLP studies and indicate that children may develop the ability to comprehend different aspects of argument structure at different ages. The implications for theories of language acquisition and the nature of the language acquisition mechanism are discussed.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2011). The grammar of perception. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 1-10). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions was analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

    Additional information

    supporting information
  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the “speech act,” such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Additional information

    link to preprint
  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

    Additional information

    supplementary movies and figures
  • Omar, R., Henley, S. M., Bartlett, J. W., Hailstone, J. C., Gordon, E., Sauter, D., Frost, C., Scott, S. K., & Warren, J. D. (2011). The structural neuroanatomy of music emotion recognition: Evidence from frontotemporal lobar degeneration. Neuroimage, 56, 1814-1821. doi:10.1016/j.neuroimage.2011.03.002.

    Abstract

    Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions.
