Publications

  • Majid, A. (2016). The content of minds: Asifa Majid talks to Jon Sutton about language and thought. The Psychologist, 29, 554-556.
  • Majid, A. (2016). Was wir von anderen Kulturen über den Geruchsinn lernen können. In Museum Tinguely (Ed.), Belle Haleine – Der Duft der Kunst. Interdisziplinäres Symposium (pp. 73-79). Heidelberg: Kehrer.
  • Majid, A. (2016). What other cultures can tell us about the sense of smell. In Museum Tinguely (Ed.), Belle haleine - the scent of art: interdisciplinary symposium (pp. 72-77). Heidelberg: Kehrer.
  • Mak, M., De Vries, C., & Willems, R. M. (2020). The influence of mental imagery instructions and personality characteristics on reading experiences. Collabra: Psychology, 6(1): 43. doi:10.1525/collabra.281.

    Abstract

    It is well established that readers form mental images when reading a narrative. However, the consequences of mental imagery (i.e. the influence of mental imagery on the way people experience stories) are still unclear. Here we manipulated the amount of mental imagery that participants engaged in while reading short literary stories in two experiments. Participants received pre-reading instructions aimed at encouraging or discouraging mental imagery. After reading, participants answered questions about their reading experiences. We also measured individual trait differences that are relevant for literary reading experiences. The results from the first experiment suggest an important role of mental imagery in determining reading experiences. However, the results from the second experiment show that mental imagery is only a weak predictor of reading experiences compared to individual (trait) differences in how imaginative participants were. Moreover, the influence of mental imagery instructions did not extend to reading experiences unrelated to mental imagery. The implications of these results for the relationship between mental imagery and reading experiences are discussed.
  • Mandal, S., Best, C. T., Shaw, J., & Cutler, A. (2020). Bilingual phonology in dichotic perception: A case study of Malayalam and English voicing. Glossa: A Journal of General Linguistics, 5(1): 73. doi:10.5334/gjgl.853.

    Abstract

    Listeners often experience cocktail-party situations, encountering multiple ongoing conversations while tracking just one. Capturing the words spoken under such conditions requires selective attention and processing, which involves using phonetic details to discern phonological structure. How do bilinguals accomplish this in L1-L2 competition? We addressed that question using a dichotic listening task with fluent Malayalam-English bilinguals, in which they were presented with synchronized nonce words, one in each language in separate ears, with competing onsets of a labial stop (Malayalam) and a labial fricative (English), both voiced or both voiceless. They were required to attend to the Malayalam or the English item, in separate blocks, and report the initial consonant they heard. We found that perceptual intrusions from the unattended to the attended language were influenced by voicing, with more intrusions on voiced than voiceless trials. This result supports our proposal for the feature specification of consonants in Malayalam-English bilinguals, which makes use of privative features, underspecification and the “standard approach” to laryngeal features, as against “laryngeal realism”. Given this representational account, we observe that intrusions result from phonetic properties in the unattended signal being assimilated to the closest matching phonological category in the attended language, and are more likely for segments with a greater number of phonological feature specifications.
  • Manhardt, F., Ozyurek, A., Sumer, B., Mulder, K., Karadöller, D. Z., & Brouwer, S. (2020). Iconicity in spatial language guides visual attention: A comparison between signers’ and speakers’ eye gaze during message preparation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(9), 1735-1753. doi:10.1037/xlm0000843.

    Abstract

    To talk about space, spoken languages rely on arbitrary and categorical forms (e.g., left, right). In sign languages, however, the visual–spatial modality allows for iconic encodings (motivated form-meaning mappings) of space in which form and location of the hands bear resemblance to the objects and spatial relations depicted. We assessed whether the iconic encodings in sign languages guide visual attention to spatial relations differently than spatial encodings in spoken languages during message preparation at the sentence level. Using a visual world production eye-tracking paradigm, we compared 20 deaf native signers of Sign-Language-of-the-Netherlands and 20 Dutch speakers’ visual attention to describe left versus right configurations of objects (e.g., “pen is to the left/right of cup”). Participants viewed 4-picture displays in which each picture contained the same 2 objects but in different spatial relations (lateral [left/right], sagittal [front/behind], topological [in/on]) to each other. They described the target picture (left/right) highlighted by an arrow. During message preparation, signers, but not speakers, experienced increasing eye-gaze competition from other spatial configurations. This effect was absent during picture viewing prior to message preparation of relational encoding. Moreover, signers’ visual attention to lateral and/or sagittal relations was predicted by the type of iconicity (i.e., object and space resemblance vs. space resemblance only) in their spatial descriptions. Findings are discussed in relation to how “thinking for speaking” differs from “thinking for signing” and how iconicity can mediate the link between language and human experience and guides signers’ but not speakers’ attention to visual aspects of the world.

    Additional information

    Supplementary materials
  • Mani, N., Daum, M., & Huettig, F. (2016). “Pro-active” in many ways: Developmental evidence for a dynamic pluralistic approach to prediction. Quarterly Journal of Experimental Psychology, 69(11), 2189-2201. doi:10.1080/17470218.2015.1111395.

    Abstract

    The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners use the production system covertly to anticipate what the other person might be likely to say) are two potential factors, which have been proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence “The boy reads a”, based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children’s prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context-dependent. We conclude that multiple factors simultaneously influence listeners’ anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners’ prowess at predictive language processing.
  • Manrique, E. (2016). Other-initiated repair in Argentine Sign Language. Open Linguistics, 2, 1-34. doi:10.1515/opli-2016-0001.

    Abstract

    Other-initiated repair is an essential interactional practice to secure mutual understanding in everyday interaction. This article presents evidence from a large conversational corpus of a sign language, showing that signers of Argentine Sign Language (Lengua de Señas Argentina or ‘LSA’), like users of spoken languages, use a systematic set of linguistic formats and practices to indicate troubles of signing, seeing and understanding. The general aim of this article is to provide a general overview of the different visual-gestural linguistic patterns of other-initiated repair sequences in LSA. It also describes the quantitative distribution of other-initiated repair formats based on a collection of 213 cases. It describes the multimodal components of open and restricted types of repair initiators, and reports a previously undescribed implicit practice to initiate repair in LSA in comparison to explicitly produced formats. Part of a special issue presenting repair systems across a range of languages, this article contributes to a better understanding of the phenomenon of other-initiated repair in terms of visual and gestural practices in human interaction in both signed and spoken languages.
  • The ManyBabies Consortium (2020). Quantifying sources of variability in infancy research using the infant-directed speech preference. Advances in Methods and Practices in Psychological Science, 3(1), 24-52. doi:10.1177/2515245919900809.

    Abstract

    Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.

    Additional information

    Open Practices Disclosure Open Data OSF
  • Marecka, M., Fosker, T., Szewczyk, J., Kałamała, P., & Wodniecka, Z. (2020). An ear for language. Studies in Second Language Acquisition, 42, 987-1014. doi:10.1017/S0272263120000157.

    Abstract

    This study tested whether individual sensitivity to an auditory perceptual cue called amplitude rise time (ART) facilitates novel word learning. Forty adult native speakers of Polish performed a perceptual task testing their sensitivity to ART, learned associations between nonwords and pictures of common objects, and were subsequently tested on their knowledge with a picture recognition (PR) task. In the PR task participants heard each nonword, followed either by a congruent or incongruent picture, and had to assess if the picture matched the nonword. Word learning efficiency was measured by accuracy and reaction time on the PR task and modulation of the N300 ERP. As predicted, participants with greater sensitivity to ART showed better performance in PR suggesting that auditory sensitivity indeed facilitates learning of novel words. Contrary to expectations, the N300 was not modulated by sensitivity to ART suggesting that the behavioral and ERP measures reflect different underlying processes.
  • Martin, A. E. (2020). A compositional neural architecture for language. Journal of Cognitive Neuroscience, 32(8), 1407-1427. doi:10.1162/jocn_a_01552.

    Abstract

    Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de) compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
  • Martin, A. E. (2016). Language processing as cue integration: Grounding the psychology of language in perception and neurophysiology. Frontiers in Psychology, 7: 120. doi:10.3389/fpsyg.2016.00120.

    Abstract

    I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2020). Eye-tracking the time course of distal and global speech rate effects. Journal of Experimental Psychology: Human Perception and Performance, 46(10), 1148-1163. doi:10.1037/xhp0000838.

    Abstract

    To comprehend speech sounds, listeners tune in to speech rate information in the proximal (immediately adjacent), distal (non-adjacent), and global context (further removed preceding and following sentences). Effects of global contextual speech rate cues on speech perception have been shown to follow constraints not found for proximal and distal speech rate. Therefore, listeners may process such global cues at distinct time points during word recognition. We conducted a printed-word eye-tracking experiment to compare the time courses of distal and global rate effects. Results indicated that the distal rate effect emerged immediately after target sound presentation, in line with a general-auditory account. The global rate effect, however, arose more than 200 ms later than the distal rate effect, indicating that distal and global context effects involve distinct processing mechanisms. Results are interpreted in a two-stage model of acoustic context effects. This model posits that distal context effects involve very early perceptual processes, while global context effects arise at a later stage, involving cognitive adjustments conditioned by higher-level information.
  • Matić, D., Hammond, J., & Van Putten, S. (2016). Left-dislocation, sentences and clauses in Avatime, Tundra Yukaghir and Whitesands. In J. Fleischhauer, A. Latrouite, & R. Osswald (Eds.), Exploring the Syntax-Semantics Interface. Festschrift for Robert D. Van Valin, Jr. (pp. 339-367). Düsseldorf: Düsseldorf University Press.
  • Matić, D. (2016). Tag questions and focus markers: Evidence from the Tompo dialect of Even. In M. M. J. Fernandez-Vest, & R. D. Van Valin Jr. (Eds.), Information structure and spoken language in a cross-linguistic perspective (pp. 167-190). Berlin: Mouton de Gruyter.
  • McCollum, A. G., Baković, E., Mai, A., & Meinhardt, E. (2020). Unbounded circumambient patterns in segmental phonology. Phonology, 37, 215-255. doi:10.1017/S095267572000010X.

    Abstract

    We present an empirical challenge to Jardine's (2016) assertion that only tonal spreading patterns can be unbounded circumambient, meaning that the determination of a phonological value may depend on information that is an unbounded distance away on both sides. We focus on a demonstration that the ATR harmony pattern found in Tutrugbu is unbounded circumambient, and we also cite several other segmental spreading processes with the same general character. We discuss implications for the complexity of phonology and for the relationship between the explanation of typology and the evaluation of phonological theories.

    Additional information

    Supporting Information
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., Eisner, F., & Norris, D. (2016). When brain regions talk to each other during speech processing, what are they talking about? Commentary on Gow and Olson (2015). Language, Cognition and Neuroscience, 31(7), 860-863. doi:10.1080/23273798.2016.1154975.

    Abstract

    This commentary on Gow and Olson [2015. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data. Language, Cognition and Neuroscience. doi:10.1080/23273798.2015.1029498] questions in three ways their conclusion that speech perception is based on interactive processing. First, it is not clear that the data presented by Gow and Olson reflect normal speech recognition. Second, Gow and Olson's conclusion depends on still-debated assumptions about the functions performed by specific brain regions. Third, the results are compatible with feedforward models of speech perception and appear inconsistent with models in which there are online interactions about phonological content. We suggest that progress in the neuroscience of speech perception requires the generation of testable hypotheses about the function(s) performed by inter-regional connections.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Dilley, L. C. (2020). Prosody and spoken-word recognition. In C. Gussenhoven, & A. Chen (Eds.), The Oxford handbook of language prosody (pp. 509-521). Oxford: Oxford University Press.

    Abstract

    This chapter outlines a Bayesian model of spoken-word recognition and reviews how prosody is part of that model. The review focuses on the information that assists the listener in recognizing the prosodic structure of an utterance and on how spoken-word recognition is also constrained by prior knowledge about prosodic structure. Recognition is argued to be a process of perceptual inference that ensures that listening is robust to variability in the speech signal. In essence, the listener makes inferences about the segmental content of each utterance, about its prosodic structure (simultaneously at different levels in the prosodic hierarchy), and about the words it contains, and uses these inferences to form an utterance interpretation. Four characteristics of the proposed prosody-enriched recognition model are discussed: parallel uptake of different information types, high contextual dependency, adaptive processing, and phonological abstraction. The next steps that should be taken to develop the model are also discussed.
  • McQueen, J. M., Eisner, F., Burgering, M. A., & Vroomen, J. (2020). Specialized memory systems for learning spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(1), 189-199. doi:10.1037/xlm0000704.

    Abstract

    Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound–picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in “speech mode” was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information.

    Additional information

    Supplemental material
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Mengede, J., Devanna, P., Hörpel, S. G., Firzla, U., & Vernes, S. C. (2020). Studying the genetic bases of vocal learning in bats. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 280-282). Nijmegen: The Evolution of Language Conferences.
  • Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016). Same, different, or closely related: What is the relationship between language production and comprehension? Journal of Memory and Language, 89, 1-7. doi:10.1016/j.jml.2016.03.002.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.
  • Meyer, L., Sun, Y., & Martin, A. E. (2020). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language, Cognition and Neuroscience, 35(9), 1089-1099. doi:10.1080/23273798.2019.1693050.

    Abstract

    Research on speech processing is often focused on a phenomenon termed “entrainment”, whereby the cortex shadows rhythmic acoustic information with oscillatory activity. Entrainment has been observed to a range of rhythms present in speech; in addition, synchronicity with abstract information (e.g. syntactic structures) has been observed. Entrainment accounts face two challenges: First, speech is not exactly rhythmic; second, synchronicity with representations that lack a clear acoustic counterpart has been described. We propose that apparent entrainment does not always result from acoustic information. Rather, internal rhythms may have functionalities in the generation of abstract representations and predictions. While acoustics may often provide punctate opportunities for entrainment, internal rhythms may also live a life of their own to infer and predict information, leading to intrinsic synchronicity – not to be counted as entrainment. This possibility may open up new research avenues in the psycho– and neurolinguistic study of language processing and language development.
  • Meyer, L., Sun, Y., & Martin, A. E. (2020). “Entraining” to speech, generating language? Language, Cognition and Neuroscience, 35(9), 1138-1148. doi:10.1080/23273798.2020.1827155.

    Abstract

    Could meaning be read from acoustics, or from the refraction rate of pyramidal cells innervated by the cochlea, everyone would be an omniglot. Speech does not contain sufficient acoustic cues to identify linguistic units such as morphemes, words, and phrases without prior knowledge. Our target article (Meyer, L., Sun, Y., & Martin, A. E. (2019). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language, Cognition and Neuroscience, 1–11. https://doi.org/10.1080/23273798.2019.1693050) thus questioned the concept of “entrainment” of neural oscillations to such units. We suggested that synchronicity with these points to the existence of endogenous functional “oscillators”—or population rhythmic activity in Giraud’s (2020) terms—that underlie the inference, generation, and prediction of linguistic units. Here, we address a series of inspirational commentaries by our colleagues. As apparent from these, some issues raised by our target article have already been raised in the literature. Psycho– and neurolinguists might still benefit from our reply, as “oscillations are an old concept in vision and motor functions, but a new one in linguistics” (Giraud, A.-L. 2020. Oscillations for all A commentary on Meyer, Sun & Martin (2020). Language, Cognition and Neuroscience, 1–8).
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Michalareas, G., Vezoli, J., Van Pelt, S., Schoffelen, J.-M., Kennedy, H., & Fries, P. (2016). Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron, 89(2), 384-397. doi:10.1016/j.neuron.2015.12.018.

    Abstract

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and we correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral- and dorsal-stream visual areas are differentially affected by inter-areal influences in the alpha-beta band.
  • Micheli, C., Schepers, I., Ozker, M., Yoshor, D., Beauchamp, M., & Rieger, J. (2020). Electrocorticography reveals continuous auditory and visual speech tracking in temporal and occipital cortex. European Journal of Neuroscience, 51(5), 1364-1376. doi:10.1111/ejn.13992.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2020). Between-language competition as a driving force in foreign language attrition. Cognition, 198: 104218. doi:10.1016/j.cognition.2020.104218.

    Abstract

    Research in the domain of memory suggests that forgetting is primarily driven by interference and competition from other, related memories. Here we ask whether similar dynamics are at play in foreign language (FL) attrition. We tested whether interference from translation equivalents in other, more recently used languages causes subsequent retrieval failure in L3. In Experiment 1, we investigated whether interference from the native language (L1) and/or from another foreign language (L2) affected L3 vocabulary retention. On day 1, Dutch native speakers learned 40 new Spanish (L3) words. On day 2, they performed a number of retrieval tasks in either Dutch (L1) or English (L2) on half of these words, and then memory for all items was tested again in L3 Spanish. Recall in Spanish was slower and less complete for words that received interference than for words that did not. In naming speed, this effect was larger for L2 compared to L1 interference. Experiment 2 replicated the interference effect and asked if the language difference can be explained by frequency of use differences between native- and non-native languages. Overall, these findings suggest that competition from more recently used languages, and especially other foreign languages, is a driving force behind FL attrition.

    Additional information

    Supplementary data
  • Mickan, A., & Lemhöfer, K. (2020). Tracking syntactic conflict between languages over the course of L2 acquisition: A cross-sectional event-related potential study. Journal of Cognitive Neuroscience, 32(5), 822-846. doi:10.1162/jocn_a_01528.

    Abstract

    One challenge of learning a foreign language (L2) in adulthood is the mastery of syntactic structures that are implemented differently in L2 and one's native language (L1). Here, we asked how L2 speakers learn to process syntactic constructions that are in direct conflict between L1 and L2, in comparison to structures without such a conflict. To do so, we measured EEG during sentence reading in three groups of German learners of Dutch with different degrees of L2 experience (from 3 to more than 18 months of L2 immersion) as well as a control group of Dutch native speakers. They read grammatical and ungrammatical Dutch sentences that, in the conflict condition, contained a structure with opposing word orders in Dutch and German (sentence-final double infinitives) and, in the no-conflict condition, a structure for which word order is identical in Dutch and German (subordinate clause inversion). Results showed, first, that beginning learners showed N400-like signatures instead of the expected P600 for both types of violations, suggesting that, in the very early stages of learning, different neurocognitive processes are employed compared with native speakers, regardless of L1–L2 similarity. In contrast, both advanced and intermediate learners already showed native-like P600 signatures for the no-conflict sentences. However, their P600 signatures were significantly delayed in processing the conflicting structure, even though behavioral performance was on a native level for both these groups and structures. These findings suggest that L1–L2 word order conflicts clearly remain an obstacle to native-like processing, even for advanced L2 learners.
  • Micklos, A., & Walker, B. (2020). Are people sensitive to problems in communication? Cognitive Science, 44(2): e12816. doi:10.1111/cogs.12816.

    Abstract

    Recent research indicates that interpersonal communication is noisy, and that people exhibit considerable insensitivity to problems in communication. Using a dyadic referential communication task, the goal of which is accurate information transfer, this study examined the extent to which interlocutors are sensitive to problems in communication and use other‐initiated repairs (OIRs) to address them. Participants were randomly assigned to dyads (N = 88 participants, or 44 dyads) and tried to communicate a series of recurring abstract geometric shapes to a partner across a text–chat interface. Participants alternated between directing (describing shapes) and matching (interpreting shape descriptions) roles across 72 trials of the task. Replicating prior research, over repeated social interactions communication success improved and the shape descriptions became increasingly efficient. In addition, confidence in having successfully communicated the different shapes increased over trials. Importantly, matchers were less confident on trials in which communication was unsuccessful, communication success was lower on trials that contained an OIR compared to those that did not contain an OIR, and OIR trials were associated with lower Director Confidence. This pattern of results demonstrates that (a) interlocutors exhibit (a degree of) sensitivity to problems in communication, (b) they appropriately use OIRs to address problems in communication, and (c) OIRs signal problems in communication.

    Additional information

    Open Data OSF
  • Micklos, A. (2016). Interaction for facilitating conventionalization: Negotiating the silent gesture communication of noun-verb pairs. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/143.html.

    Abstract

    This study demonstrates how interaction – specifically negotiation and repair – facilitates the emergence, evolution, and conventionalization of a silent gesture communication system. In a modified iterated learning paradigm, partners communicated noun-verb meanings using only silent gesture. The need to disambiguate similar noun-verb pairs drove these "new" language users to develop a morphology that allowed for quicker processing, easier transmission, and improved accuracy. The specific morphological system that emerged came about through a process of negotiation within the dyad, namely by means of repair. By applying a discourse analytic approach to the use of repair in an experimental methodology for language evolution, we are able to determine not only if interaction facilitates the emergence and learnability of a new communication system, but also how interaction affects such a system.
  • Middeldorp, C. M., Hammerschlag, A. R., Ouwens, K. G., Groen-Blokhuis, M. M., St Pourcain, B., Greven, C. U., Pappa, I., Tiesler, C. M. T., Ang, W., Nolte, I. M., Vilor-Tejedor, N., Bacelis, J., Ebejer, J. L., Zhao, H., Davies, G. E., Ehli, E. A., Evans, D. M., Fedko, I. O., Guxens, M., Hottenga, J.-J., Hudziak, J. J., Jugessur, A., Kemp, J. P., Krapohl, E., Martin, N. G., Murcia, M., Myhre, R., Ormel, J., Ring, S. M., Standl, M., Stergiakouli, E., Stoltenberg, C., Thiering, E., Timpson, N. J., Trzaskowski, M., van der Most, P. J., Wang, C., EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, Psychiatric Genomics Consortium ADHD Working Group, Nyholt, D. R., Medland, S. E., Neale, B., Jacobsson, B., Sunyer, J., Hartman, C. A., Whitehouse, A. J. O., Pennell, C. E., Heinrich, J., Plomin, R., Smith, G. D., Tiemeier, H., Posthuma, D., & Boomsma, D. I. (2016). A Genome-Wide Association Meta-Analysis of Attention-Deficit/Hyperactivity Disorder Symptoms in Population-Based Paediatric Cohorts. Journal of the American Academy of Child & Adolescent Psychiatry, 55(10), 896-905. doi:10.1016/j.jaac.2016.05.025.

    Abstract

    Objective: To elucidate the influence of common genetic variants on childhood attention-deficit/hyperactivity disorder (ADHD) symptoms, to identify genetic variants that explain its high heritability, and to investigate the genetic overlap of ADHD symptom scores with ADHD diagnosis.
    Method: Within the EArly Genetics and Lifecourse Epidemiology (EAGLE) consortium, genome-wide single nucleotide polymorphisms (SNPs) and ADHD symptom scores were available for 17,666 children (<13 years) from nine population-based cohorts. SNP-based heritability was estimated in data from the three largest cohorts. Meta-analysis based on genome-wide association (GWA) analyses with SNPs was followed by gene-based association tests, and the overlap in results with a meta-analysis in the Psychiatric Genomics Consortium (PGC) case-control ADHD study was investigated.
    Results: SNP-based heritability ranged from 5% to 34%, indicating that variation in common genetic variants influences ADHD symptom scores. The meta-analysis did not detect genome-wide significant SNPs, but three genes, lying close to each other with SNPs in high linkage disequilibrium (LD), showed a gene-wide significant association (p values between 1.46×10⁻⁶ and 2.66×10⁻⁶). One gene, WASL, is involved in neuronal development. Both SNP- and gene-based analyses indicated overlap with the PGC meta-analysis results, with the genetic correlation estimated at 0.96.
    Conclusion: The SNP-based heritability for ADHD symptom scores indicates a polygenic architecture, and genes involved in neurite outgrowth are possibly involved. Continuous and dichotomous measures of ADHD appear to assess a genetically common phenotype. A next step is to combine data from population-based and case-control cohorts in genetic association studies to increase sample size and improve statistical power for identifying genetic variants.
  • Milham, M., Petkov, C. I., Margulies, D. S., Schroeder, C. E., Basso, M. A., Belin, P., Fair, D. A., Fox, A., Kastner, S., Mars, R. B., Messinger, A., Poirier, C., Vanduffel, W., Van Essen, D. C., Alvand, A., Becker, Y., Ben Hamed, S., Benn, A., Bodin, C., Boretius, S., Cagna, B., Coulon, O., El-Gohary, S. H., Evrard, H., Forkel, S. J., Friedrich, P., Froudist-Walsh, S., Garza-Villarreal, E. A., Gao, Y., Gozzi, A., Grigis, A., Hartig, R., Hayashi, T., Heuer, K., Howells, H., Ardesch, D. J., Jarraya, B., Jarrett, W., Jedema, H. P., Kagan, I., Kelly, C., Kennedy, H., Klink, P. C., Kwok, S. C., Leech, R., Liu, X., Madan, C., Madushanka, W., Majka, P., Mallon, A.-M., Marche, K., Meguerditchian, A., Menon, R. S., Merchant, H., Mitchell, A., Nenning, K.-H., Nikolaidis, A., Ortiz-Rios, M., Pagani, M., Pareek, V., Prescott, M., Procyk, E., Rajimehr, R., Rautu, I.-S., Raz, A., Roe, A. W., Rossi-Pool, R., Roumazeilles, L., Sakai, T., Sallet, J., García-Saldivar, P., Sato, C., Sawiak, S., Schiffer, M., Schwiedrzik, C. M., Seidlitz, J., Sein, J., Shen, Z.-m., Shmuel, A., Silva, A. C., Simone, L., Sirmpilatze, N., Sliwa, J., Smallwood, J., Tasserie, J., Thiebaut de Schotten, M., Toro, R., Trapeau, R., Uhrig, L., Vezoli, J., Wang, Z., Wells, S., Williams, B., Xu, T., Xu, A. G., Yacoub, E., Zhan, M., Ai, L., Amiez, C., Balezeau, F., Baxter, M. G., Blezer, E. L., Brochier, T., Chen, A., Croxson, P. L., Damatac, C. G., Dehaene, S., Everling, S., Fleysher, L., Freiwald, W., Griffiths, T. D., Guedj, C., Hadj-Bouziane, F., Harel, N., Hiba, B., Jung, B., Koo, B., Laland, K. N., Leopold, D. A., Lindenfors, P., Meunier, M., Mok, K., Morrison, J. H., Nacef, J., Nagy, J., Pinsk, M., Reader, S. M., Roelfsema, P. R., Rudko, D. A., Rushworth, M. F., Russ, B. E., Schmid, M. C., Sullivan, E. L., Thiele, A., Todorov, O. S., Tsao, D., Ungerleider, L., Wilson, C. R., Ye, F. Q., Zarco, W., & Zhou, Y.-d. (2020). Accelerating the Evolution of Nonhuman Primate Neuroimaging. Neuron, 105(4), 600-603. doi:10.1016/j.neuron.2019.12.023.

    Abstract

    Nonhuman primate neuroimaging is on the cusp of a transformation, much in the same way its human counterpart was in 2010, when the Human Connectome Project was launched to accelerate progress. Inspired by an open data-sharing initiative, the global community recently met and, in this article, breaks through obstacles to define its ambitions.

    Additional information

    supplementary information
  • Misersky, J., & Redl, T. (2020). A psycholinguistic view on stereotypical and grammatical gender: The effects and remedies. In C. D. J. Bulten, C. F. Perquin-Deelen, M. H. Sinninghe Damsté, & K. J. Bakker (Eds.), Diversiteit. Een multidisciplinaire terreinverkenning (pp. 237-255). Deventer: Wolters Kluwer.
  • Mongelli, V. (2020). The role of neural feedback in language unification: How awareness affects combinatorial processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Montero-Melis, G., & Jaeger, T. F. (2020). Changing expectations mediate adaptation in L2 production. Bilingualism: Language and Cognition, 23(3), 602-617. doi:10.1017/S1366728919000506.

    Abstract

    Native language (L1) processing draws on implicit expectations. An open question is whether non-native learners of a second language (L2) similarly draw on expectations, and whether these expectations are based on learners’ L1 or L2 knowledge. We approach this question by studying inverse preference effects on lexical encoding. L1 and L2 speakers of Spanish described motion events, while they were either primed to express path, manner, or neither. In line with other work, we find that L1 speakers adapted more strongly after primes that are unexpected in their L1. For L2 speakers, adaptation depended on their L2 proficiency: The least proficient speakers exhibited the inverse preference effect on adaptation based on what was unexpected in their L1; but the more proficient speakers were, the more they exhibited inverse preference effects based on what was unexpected in the L2. We discuss implications for L1 transfer and L2 acquisition.
  • Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.

    Abstract

    In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.

    Additional information

    Supplementary data and scripts
  • Montero-Melis, G., Jaeger, T. F., & Bylund, E. (2016). Thinking is modulated by recent linguistic experience: Second language priming affects perceived event similarity. Language Learning, 66(3), 636-665. doi:10.1111/lang.12172.

    Abstract

    Can recent second language (L2) exposure affect what we judge to be similar events? Using a priming paradigm, we manipulated whether native Swedish adult learners of L2 Spanish were primed to use path or manner during L2 descriptions of scenes depicting caused motion events (encoding phase). Subsequently, participants engaged in a nonverbal task, arranging events on the screen according to similarity (test phase). Path versus manner priming affected how participants judged event similarity during the test phase. The effects we find support the hypotheses that (a) speakers create or select ad hoc conceptual categories that are based on linguistic knowledge to carry out nonverbal tasks, and that (b) short-term, recent L2 experience can affect this ad hoc process. These findings further suggest that cognition can flexibly draw on linguistic categories that have been implicitly highlighted during recent exposure.
  • Morgan, A., Fisher, S. E., Scheffer, I., & Hildebrand, M. (2016). FOXP2-related speech and language disorders. In R. A. Pagon, M. P. Adam, H. H. Ardinger, S. E. Wallace, A. Amemiya, L. J. Bean, T. D. Bird, C.-T. Fong, H. C. Mefford, R. J. Smith, & K. Stephens (Eds.), GeneReviews® [internet]. Seattle (WA): University of Washington, Seattle. Retrieved from http://www.ncbi.nlm.nih.gov/books/NBK368474/.
  • Li, S., Morley, M., Lu, M., Zhou, S., Stewart, K., French, C. A., Tucker, H. O., Fisher, S. E., & Morrisey, E. E. (2016). Foxp transcription factors suppress a non-pulmonary gene expression program to permit proper lung development. Developmental Biology, 416(2), 338-346. doi:10.1016/j.ydbio.2016.06.020.

    Abstract

    The inhibitory mechanisms that prevent gene expression programs from one tissue to be expressed in another are poorly understood. Foxp1/2/4 are forkhead transcription factors that repress gene expression and are individually important for endoderm development. We show that combined loss of all three Foxp1/2/4 family members in the developing anterior foregut endoderm leads to a loss of lung endoderm lineage commitment and subsequent development. Foxp1/2/4 deficient lungs express high levels of transcriptional regulators not normally expressed in the developing lung, including Pax2, Pax8, Pax9 and the Hoxa9-13 cluster. Ectopic expression of these transcriptional regulators is accompanied by decreased expression of lung restricted transcription factors including Nkx2-1, Sox2, and Sox9. Foxp1 binds to conserved forkhead DNA binding sites within the Hoxa9-13 cluster, indicating a direct repression mechanism. Thus, Foxp1/2/4 are essential for promoting lung endoderm development by repressing expression of non-pulmonary transcription factors
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). The effect of sociolinguistic factors on variation in the Kata Kolok lexicon. Asia-Pacific Language Variation, 6(1), 53-88. doi:10.1075/aplv.19009.mud.

    Abstract

    Sign languages can be categorized as shared sign languages or deaf community sign languages, depending on the context in which they emerge. It has been suggested that shared sign languages exhibit more variation in the expression of everyday concepts than deaf community sign languages (Meir, Israel, Sandler, Padden, & Aronoff, 2012). For deaf community sign languages, it has been shown that various sociolinguistic factors condition this variation. This study presents one of the first in-depth investigations of how sociolinguistic factors (deaf status, age, clan, gender and having a deaf family member) affect lexical variation in a shared sign language, using a picture description task in Kata Kolok. To study lexical variation in Kata Kolok, two methodologies are devised: the identification of signs by underlying iconic motivation and mapping, and a way to compare individual repertoires of signs by calculating the lexical distances between participants. Alongside presenting novel methodologies to study this type of sign language, we present preliminary evidence of sociolinguistic factors that may influence variation in the Kata Kolok lexicon.
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). How does social structure shape language variation? A case study of the Kata Kolok lexicon. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 302-304). Nijmegen: The Evolution of Language Conferences.
  • Muhinyi, A., Hesketh, A., Stewart, A. J., & Rowland, C. F. (2020). Story choice matters for caregiver extra-textual talk during shared reading with preschoolers. Journal of Child Language, 47(3), 633-654. doi:10.1017/S0305000919000783.

    Abstract

    This study aimed to examine the influence of the complexity of the story-book on caregiver extra-textual talk (i.e., interactions beyond text reading) during shared reading with preschool-age children. Fifty-three mother–child dyads (3;00–4;11) were video-recorded sharing two ostensibly similar picture-books: a simple story (containing no false belief) and a complex story (containing a false belief central to the plot, which provided content that was more challenging for preschoolers to understand). Book-reading interactions were transcribed and coded. Results showed that the complex stories facilitated more extra-textual talk from mothers, and a higher quality of extra-textual talk (as indexed by linguistic richness and level of abstraction). Although the type of story did not affect the number of questions mothers posed, more elaborative follow-ups on children's responses were provided by mothers when sharing complex stories. Complex stories may facilitate more and linguistically richer caregiver extra-textual talk, having implications for preschoolers’ developing language abilities.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2016). Comparing different methods for analyzing ERP signals. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 1373-1377). doi:10.21437/Interspeech.2016-967.
  • Muntendam, A., & Torreira, F. (2016). Focus and prosody in Spanish and Quechua: Insights from an interactive task. In M. E. Armstrong, N. Hendriksen, & M. Del Mar Vanrell (Eds.), Intonational Grammar in Ibero-Romance: Approaches across linguistic subfields (pp. 69-90). Amsterdam: Benjamins.

    Abstract

    This paper reports the results of a study on the prosodic marking of broad and contrastive focus in three language varieties of which two are in contact: bilingual Peruvian Spanish, Quechua and Peninsular Spanish. An interactive communicative task revealed that the prosodic marking of contrastive focus was limited in all three language varieties. No systematic correspondence was observed between specific contour/accent types and focus, and the phonetic marking of contrastive focus was weak and restricted to phrase-final position. Interestingly, we identified two contours for bilingual Peruvian Spanish that were present in Quechua, but not in Peninsular Spanish, providing evidence for a prosodic transfer from Quechua to Spanish in Quechua-Spanish bilinguals.
  • Murakami, S., Verdonschot, R. G., Kataoka, M., Kakimoto, N., Shimamoto, H., & Kreiborg, S. (2016). A standardized evaluation of artefacts from metallic compounds during fast MR imaging. Dentomaxillofacial Radiology, 45(8): 20160094. doi:10.1259/dmfr.20160094.

    Abstract

    Objectives: Metallic compounds present in the oral and maxillofacial regions (OMRs) cause large artefacts during MR scanning. We quantitatively assessed these artefacts embedded within a phantom according to standards set by the American Society for Testing and Materials (ASTM).
    Methods: Seven metallic dental materials (each of which was a 10-mm³ cube embedded within a phantom) were scanned [i.e. aluminium (Al), silver alloy (Ag), type IV gold alloy (Au), gold-palladium-silver alloy (Au-Pd-Ag), titanium (Ti), nickel-chromium alloy (NC) and cobalt-chromium alloy (CC)] and compared with a reference image. Sequences included gradient echo (GRE), fast spin echo (FSE), gradient recalled acquisition in steady state (GRASS), a spoiled GRASS (SPGR), a fast SPGR (FSPGR), fast imaging employing steady state (FIESTA) and echo planar imaging (EPI; axial/sagittal planes). Artefact areas were determined according to the ASTM-F2119 standard, and artefact volumes were assessed using OsiriX MD software (Pixmeo, Geneva, Switzerland).
    Results: Tukey-Kramer post hoc tests were used for statistical comparisons. For most materials, scanning sequences elicited artefact volumes in the following (ascending) order: FSE-T1/FSE-T2 < FSPGR/SPGR < GRASS/GRE < FIESTA < EPI. For all scanning sequences, artefact volumes for Au, Al, Ag and Au-Pd-Ag were significantly smaller than for the other materials (for which artefact volume increased in the order Ti < NC < CC). The artefact-specific shape (elicited by the cubic sample) depended on the scanning plane (i.e. a circular pattern for the axial plane and a "clover-like" pattern for the sagittal plane).
    Conclusions: The availability of standardized information on artefact size and configuration during MRI will enhance diagnosis when faced with metallic compounds in the OMR.
  • Murakami, S., Verdonschot, R. G., Kakimoto, N., Sumida, I., Fujiwara, M., Ogawa, K., & Furukawa, S. (2016). Preventing complications from high-dose rate brachytherapy when treating mobile tongue cancer via the application of a modular lead-lined spacer. PLoS One, 11(4): e0154226. doi:10.1371/journal.pone.0154226.

    Abstract

    Purpose
    To point out the advantages and drawbacks of high-dose rate brachytherapy in the treatment of mobile tongue cancer and indicate the clinical importance of modular lead-lined spacers when applying this technique to patients.
    Methods
    First, all basic steps to construct the modular spacer are shown. Second, we simulate and evaluate the dose rate reduction for a wide range of spacer configurations.
    Results
    With increasing distance to the source absorbed doses dropped considerably. Significantly more shielding was obtained when lead was added to the spacer and this effect was most pronounced on shorter (i.e. more clinically relevant) distances to the source.
    Conclusions
    The modular spacer represents an important addition to the planning and treatment stages of mobile tongue cancer using HDR-ISBT.

    Additional information

    tables
  • Nakamoto, T., Hatsuta, S., Yagi, S., Verdonschot, R. G., Taguchi, A., & Kakimoto, N. (2020). Computer-aided diagnosis system for osteoporosis based on quantitative evaluation of mandibular lower border porosity using panoramic radiographs. Dentomaxillofacial Radiology, 49(4): 20190481. doi:10.1259/dmfr.20190481.

    Abstract

    Objectives: A new computer-aided screening system for osteoporosis using panoramic radiographs was developed. The conventional system could detect porotic changes within the lower border of the mandible, but its severity could not be evaluated. Our aim was to enable the system to measure severity by implementing a linear bone resorption severity index (BRSI) based on the cortical bone shape.
    Methods: The participants were 68 females (>50 years) who underwent panoramic radiography and lumbar spine bone density measurements. The new system was designed to extract the lower border of the mandible as regions of interest and convert them into morphological skeleton line images. The total perimeter length of the skeleton lines was defined as the BRSI. 40 images were visually evaluated for the presence of cortical bone porosity. The correlation between visual evaluation and the BRSI of the participants, and the optimal threshold value of BRSI for the new system, were investigated through a receiver operator characteristic analysis. The diagnostic performance of the new system was evaluated by comparing the results from the new system and lumbar bone density tests using 28 participants.
    Results: BRSI and lumbar bone density showed a strong negative correlation (p < 0.01). BRSI showed a strong correlation with visual evaluation. The new system showed high diagnostic efficacy with sensitivity of 90.9%, specificity of 64.7%, and accuracy of 75.0%.
    Conclusions: The new screening system is able to quantitatively evaluate mandibular cortical porosity. This allows for preventive screening for osteoporosis, thereby enhancing clinical prospects.
  • Nakayama, M., Kinoshita, S., & Verdonschot, R. G. (2016). The emergence of a phoneme-sized unit in L2 speech production: Evidence from Japanese-English bilinguals. Frontiers in Psychology, 7: 175. doi:10.3389/fpsyg.2016.00175.

    Abstract

    Recent research has revealed that the way phonology is constructed during word production differs across languages. Dutch and English native speakers are suggested to incrementally insert phonemes into a metrical frame, whereas Mandarin Chinese speakers use syllables and Japanese speakers use a unit called the mora (often a CV cluster such as "ka" or "ki"). The present study is concerned with the question of how bilinguals construct phonology in their L2 when the phonological unit size differs from the unit in their L1. Japanese-English bilinguals of varying proficiency read aloud English words preceded by masked primes that overlapped in just the onset (e.g., bark-BENCH) or the onset plus vowel corresponding to the mora-sized unit (e.g., bell-BENCH). Low-proficient Japanese-English bilinguals showed CV priming but did not show onset priming, indicating that they use their L1 phonological unit when reading L2 English words. In contrast, high-proficient Japanese-English bilinguals showed significant onset priming. The size of the onset priming effect was correlated with the length of time spent in English-speaking countries, which suggests that extensive exposure to L2 phonology may play a key role in the emergence of a language-specific phonological unit in L2 word production.
  • Nieuwland, M. S., Arkhipova, Y., & Rodríguez-Gómez, P. (2020). Anticipating words during spoken discourse comprehension: A large-scale, pre-registered replication study using brain potentials. Cortex, 133, 1-36. doi:10.1016/j.cortex.2020.09.007.

    Abstract

    Numerous studies report brain potential evidence for the anticipation of specific words during language comprehension. In the most convincing demonstrations, highly predictable nouns exert an influence on processing even before they appear to a reader or listener, as indicated by the brain's neural response to a prenominal adjective or article when it mismatches the expectations about the upcoming noun. However, recent studies suggest that some well-known demonstrations of prediction may be hard to replicate. This could signal the use of data-contingent analysis, but might also mean that readers and listeners do not always use prediction-relevant information in the way that psycholinguistic theories typically suggest. To shed light on this issue, we performed a close replication of one of the best-cited ERP studies on word anticipation (Van Berkum, Brown, Zwitserlood, Kooijman & Hagoort, 2005; Experiment 1), in which participants listened to Dutch spoken mini-stories. In the original study, the marking of grammatical gender on pre-nominal adjectives (‘groot/grote’) elicited an early positivity when mismatching the gender of an unseen, highly predictable noun, compared to matching gender. The current pre-registered study involved that same manipulation, but used a novel set of materials twice the size of the original set, an increased sample size (N = 187), and Bayesian mixed-effects model analyses that better accounted for known sources of variance than the original. In our study, mismatching gender elicited more negative voltage than matching gender at posterior electrodes. However, this N400-like effect was small in size and lacked support from Bayes Factors. In contrast, we successfully replicated the original's noun effects. While our results yielded some support for prediction, they do not support the Van Berkum et al. effect and highlight the risks associated with commonly employed data-contingent analyses and small sample sizes. Our results also raise the question of whether Dutch listeners reliably or consistently use adjectival inflection information to inform their noun predictions.
  • Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.

    Abstract

    Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning.
  • Nieuwland, M. S. (2016). Quantification, prediction, and the online impact of sentence truth-value: Evidence from event-related potentials. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 316-334. doi:10.1037/xlm0000173.

    Abstract

    Do negative quantifiers like “few” reduce people’s ability to rapidly evaluate incoming language with respect to world knowledge? Previous research has addressed this question by examining whether online measures of quantifier comprehension match the “final” interpretation reflected in verification judgments. However, these studies confounded quantifier valence with its impact on the unfolding expectations for upcoming words, yielding mixed results. In the current event-related potentials study, participants read negative and positive quantifier sentences matched on cloze probability and on truth-value (e.g., “Most/Few gardeners plant their flowers during the spring/winter for best results”). Regardless of whether participants explicitly verified the sentences or not, true-positive quantifier sentences elicited reduced N400s compared with false-positive quantifier sentences, reflecting the facilitated semantic retrieval of words that render a sentence true. No such facilitation was seen in negative quantifier sentences. However, mixed-effects model analyses (with cloze value and truth-value as continuous predictors) revealed that decreasing cloze values were associated with an interaction pattern between truth-value and quantifier, whereas increasing cloze values were associated with more similar truth-value effects regardless of quantifier. Quantifier sentences are thus understood neither always in 2 sequential stages, nor always in a partial-incremental fashion, nor always in a maximally incremental fashion. Instead, and in accordance with prediction-based views of sentence comprehension, quantifier sentence comprehension depends on incorporation of quantifier meaning into an online, knowledge-based prediction for upcoming words. Fully incremental quantifier interpretation occurs when quantifiers are incorporated into sufficiently strong online predictions for upcoming words.
  • Nieuwland, M. S., & Kazanina, N. (2020). The neural basis of linguistic prediction: Introduction to the special issue. Neuropsychologia, 146: 107532. doi:10.1016/j.neuropsychologia.2020.107532.
  • Noble, C., Cameron-Faulkner, T., Jessop, A., Coates, A., Sawyer, H., Taylor-Ims, R., & Rowland, C. F. (2020). The impact of interactive shared book reading on children's language skills: A randomized controlled trial. Journal of Speech, Language, and Hearing Research, 63(6), 1878-1897. doi:10.1044/2020_JSLHR-19-00288.

    Abstract

    Purpose: Research has indicated that interactive shared book reading can support a wide range of early language skills and that children who are read to regularly in the early years learn language faster, enter school with a larger vocabulary, and become more successful readers at school. Despite the large volume of research suggesting interactive shared reading is beneficial for language development, two fundamental issues remain outstanding: whether shared book reading interventions are equally effective (a) for children from all socioeconomic backgrounds and (b) for a range of language skills.
    Method: To address these issues, we conducted a randomized controlled trial to investigate the effects of two 6-week interactive shared reading interventions on a range of language skills in children across the socioeconomic spectrum. One hundred and fifty children aged between 2;6 and 3;0 (years;months) were randomly assigned to one of three conditions: a pause reading, a dialogic reading, or an active shared reading control condition.
    Results: The findings indicated that the interventions were effective at changing caregiver reading behaviors. However, the interventions did not boost children’s language skills over and above the effect of an active reading control condition. There were also no effects of socioeconomic status.
    Conclusion: This randomized controlled trial showed that caregivers from all socioeconomic backgrounds successfully adopted an interactive shared reading style. However, while the interventions were effective at increasing caregivers’ use of interactive shared book reading behaviors, this did not have a significant impact on the children’s language skills. The findings are discussed in terms of practical implications and future research.

    Additional information

    Supplemental Material
  • De Nooijer, J. A., & Willems, R. M. (2016). What can we learn about cognition from studying handedness? Insights from cognitive neuroscience. In F. Loffing, N. Hagemann, B. Strauss, & C. MacMahon (Eds.), Laterality in sports: Theories and applications (pp. 135-153). Amsterdam: Elsevier.

    Abstract

    Can studying left- and right-handers inform us about cognition? In this chapter, we give an overview of research showing that studying left- and right-handers is informative for understanding how the brain is organized (i.e., lateralized), as there appear to be differences between left- and right-handers in this respect; handedness studies can also provide new insights at the behavioral level. According to theories of embodied cognition, our body can influence cognition. Given that left- and right-handers use their bodies differently, this might be reflected in their performance on an array of cognitive tasks. Indeed, handedness can have an influence on, for instance, which side of space we judge as more positive, the way we gesture, how we remember things, and how we learn new words. Laterality research can, therefore, provide valuable information as to how we act and why.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norcliffe, E., & Jaeger, T. F. (2016). Predicting head-marking variability in Yucatec Maya relative clause production. Language and Cognition, 8(2), 167-205. doi:10.1017/langcog.2014.39.

    Abstract

    Recent proposals hold that the cognitive systems underlying language production exhibit computational properties that facilitate communicative efficiency, i.e., an efficient trade-off between production ease and robust information transmission. We contribute to the cross-linguistic evaluation of the communicative efficiency hypothesis by investigating speakers’ preferences in the production of a typologically rare head-marking alternation that occurs in relative clause constructions in Yucatec Maya. In a sentence recall study, we find that speakers of Yucatec Maya prefer to use reduced forms of relative clause verbs when the relative clause is more contextually expected. This result is consistent with communicative efficiency and thus supports its typological generalizability. We compare two types of cue to the presence of a relative clause, pragmatic cues previously investigated in other languages and a highly predictive morphosyntactic cue specific to Yucatec. We find that Yucatec speakers’ preferences for a reduced verb form are primarily conditioned on the more informative cue. This demonstrates the role of both general principles of language production and their language-specific realizations.
  • Norris, D., McQueen, J. M., & Cutler, A. (2016). Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience, 31(1), 4-18. doi:10.1080/23273798.2015.1081703.

    Abstract

    Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar, and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Ohlerth, A.-K., Valentin, A., Vergani, F., Ashkan, K., & Bastiaanse, R. (2020). The verb and noun test for peri-operative testing (VAN-POP): Standardized language tests for navigated transcranial magnetic stimulation and direct electrical stimulation. Acta Neurochirurgica, (2), 397-406. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background

    Protocols for intraoperative language mapping with direct electrical stimulation (DES) often include various language tasks triggering both nouns and verbs in sentences. Such protocols are not readily available for navigated transcranial magnetic stimulation (nTMS), where only single word object naming is generally used. Here, we present the development, norming, and standardization of the verb and noun test for peri-operative testing (VAN-POP) that measures language skills more extensively.
    Methods

    The VAN-POP tests noun and verb retrieval in sentence context. Items are marked and balanced for several linguistic factors known to influence word retrieval. The VAN-POP was administered in English, German, and Dutch under conditions that are used for nTMS and DES paradigms. For each language, 30 speakers were tested.
    Results

    At least 50 items per task per language were named fluently and reached a high naming agreement.
    Conclusion

    The protocol proved to be suitable for pre- and intraoperative language mapping with nTMS and DES.
  • Okbay, A., Beauchamp, J. P., Fontana, M. A., Lee, J. J., Pers, T. H., Rietveld, C. A., Turley, P., Chen, G. B., Emilsson, V., Meddens, S. F. W., Oskarsson, S., Pickrell, J. K., Thom, K., Timshel, P., De Vlaming, R., Abdellaoui, A., Ahluwalia, T. S., Bacelis, J., Baumbach, C., Bjornsdottir, G., Brandsma, J., Pina Concas, M., Derringer, J., Furlotte, N. A., Galesloot, T. E., Girotto, G., Gupta, R., Hall, L. M., Harris, S. E., Hofer, E., Horikoshi, M., Huffman, J. E., Kaasik, K., Kalafati, I. P., Karlsson, R., Kong, A., Lahti, J., Lee, S. J. V. D., DeLeeuw, C., Lind, P. A., Lindgren, K.-O., Liu, T., Mangino, M., Marten, J., Mihailov, E., Miller, M. B., Van der Most, P. J., Oldmeadow, C., Payton, A., Pervjakova, N., Peyrot, W. J., Qian, Y., Raitakari, O., Rueedi, R., Salvi, E., Schmidt, B., Schraut, K. E., Shi, J., Smith, A. V., Poot, R. A., St Pourcain, B., Teumer, A., Thorleifsson, G., Verweij, N., Vuckovic, D., Wellmann, J., Westra, H.-J., Yang, J., Zhao, W., Zhu, Z., Alizadeh, B. Z., Amin, N., Bakshi, A., Baumeister, S. E., Biino, G., Bønnelykke, K., Boyle, P. A., Campbell, H., Cappuccio, F. P., Davies, G., De Neve, J.-E., Deloukas, P., Demuth, I., Ding, J., Eibich, P., Eisele, L., Eklund, N., Evans, D. M., Faul, J. D., Feitosa, M. F., Forstner, A. J., Gandin, I., Gunnarsson, B., Halldórsson, B. V., Harris, T. B., Heath, A. C., Hocking, L. J., Holliday, E. G., Homuth, G., Horan, M. A., Hottenga, J.-J., De Jager, P. L., Joshi, P. K., Jugessur, A., Kaakinen, M. A., Kähönen, M., Kanoni, S., Keltikangas-Järvinen, L., Kiemeney, L. A. L. M., Kolcic, I., Koskinen, S., Kraja, A. T., Kroh, M., Kutalik, Z., Latvala, A., Launer, L. J., Lebreton, M.
P., Levinson, D. F., Lichtenstein, P., Lichtner, P., Liewald, D. C. M., Lifelines Cohort Study, Loukola, A., Madden, P. A., Mägi, R., Mäki-Opas, T., Marioni, R. E., Marques-Vidal, P., Meddens, G. A., McMahon, G., Meisinger, C., Meitinger, T., Milaneschi, Y., Milani, L., Montgomery, G. W., Myhre, R., Nelson, C. P., Nyholt, D. R., Ollier, W. E. R., Palotie, A., Paternoster, L., Pedersen, N. L., Petrovic, K. E., Porteous, D. J., Räikkönen, K., Ring, S. M., Robino, A., Rostapshova, O., Rudan, I., Rustichini, A., Salomaa, V., Sanders, A. R., Sarin, A.-P., Schmidt, H., Scott, R. J., Smith, B. H., Smith, J. A., Staessen, J. A., Steinhagen-Thiessen, E., Strauch, K., Terracciano, A., Tobin, M. D., Ulivi, S., Vaccargiu, S., Quaye, L., Van Rooij, F. J. A., Venturini, C., Vinkhuyzen, A. A. E., Völker, U., Völzke, H., Vonk, J. M., Vozzi, D., Waage, J., Ware, E. B., Willemsen, G., Attia, J. R., Bennett, D. A., Berger, K., Bertram, L., Bisgaard, H., Boomsma, D. I., Borecki, I. B., Bültmann, U., Chabris, C. F., Cucca, F., Cusi, D., Deary, I. J., Dedoussis, G. V., Van Duijn, C. M., Eriksson, J. G., Franke, B., Franke, L., Gasparini, P., Gejman, P. V., Gieger, C., Grabe, H.-J., Gratten, J., Groenen, P. J. F., Gudnason, V., Van der Harst, P., Hayward, C., Hinds, D. A., Hoffmann, W., Hyppönen, E., Iacono, W. G., Jacobsson, B., Järvelin, M.-R., Jöckel, K.-H., Kaprio, J., Kardia, S. L. R., Lehtimäki, T., Lehrer, S. F., Magnusson, P. K. E., Martin, N. G., McGue, M., Metspalu, A., Pendleton, N., Penninx, B. W. J. H., Perola, M., Pirastu, N., Pirastu, M., Polasek, O., Posthuma, D., Power, C., Province, M. A., Samani, N. J., Schlessinger, D., Schmidt, R., Sørensen, T. I. A., Spector, T. D., Stefansson, K., Thorsteinsdottir, U., Thurik, A. R., Timpson, N. J., Tiemeier, H., Tung, J. Y., Uitterlinden, A. G., Vitart, V., Vollenweider, P., Weir, D. R., Wilson, J. F., Wright, A. F., Conley, D. C., Krueger, R. F., Davey Smith, G., Hofman, A., Laibson, D. I., Medland, S. E., Meyer, M.
N., Yang, J., Johannesson, M., Visscher, P. M., Esko, T., Koellinger, P. D., Cesarini, D., & Benjamin, D. J. (2016). Genome-wide association study identifies 74 loci associated with educational attainment. Nature, 533, 539-542. doi:10.1038/nature17671.

    Abstract

    Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.
  • O'Meara, C., & Majid, A. (2016). How changing lifestyles impact Seri smellscapes and smell language. Anthropological Linguistics, 58(2), 107-131. doi:10.1353/anl.2016.0024.

    Abstract

    The sense of smell has widely been viewed as inferior to the other senses. This is reflected in the lack of treatment of olfaction in ethnographies and linguistic descriptions. We present novel data from the olfactory lexicon of Seri, a language isolate of Mexico, which sheds new light onto the possibilities for olfactory terminologies. We also present the Seri smellscape, highlighting the cultural significance of odors in Seri culture which, along with the olfactory language, is now under threat as globalization takes hold and traditional ways of life are transformed.
  • Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1182-1187). Austin, TX: Cognitive Science Society.

    Abstract

    There is a long-standing assumption that gestural forms are geared by a set of modes of representation (acting, representing, drawing, moulding), with each technique expressing speakers’ focus of attention on specific aspects of referents (Müller, 2013). Beyond different taxonomies describing the modes of representation, it remains unclear what factors motivate certain depicting techniques over others. Results from a pantomime generation task show that pantomimes are not entirely idiosyncratic but rather follow generalisable patterns constrained by their semantic category. We show (a) that specific modes of representation are preferred for certain objects (acting for manipulable objects and drawing for non-manipulable objects); and (b) that the use and ordering of deictics and modes of representation operate in tandem to distinguish between semantically related concepts (e.g., “to drink” vs. “mug”). This study provides yet more evidence that our ability to communicate through silent gesture reveals systematic ways to describe events and objects around us.
  • Ortega, G., Ozyurek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403-415. doi:10.1037/xlm0000729.

    Abstract

    When learning a second spoken language, cognates, words overlapping in form and meaning with one’s native language, help break into the language one wishes to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited an enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience.
  • Ortega, G. (2016). Language acquisition and development. In G. Gertz (Ed.), The SAGE Deaf Studies Encyclopedia. Vol. 3 (pp. 547-551). London: SAGE Publications Inc.
  • Ortega, G., & Ozyurek, A. (2020). Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behavior Research Methods, 52, 51-67. doi:10.3758/s13428-019-01204-6.

    Abstract

    An unprecedented number of empirical studies have shown that iconic gestures—those that mimic the sensorimotor attributes of a referent—contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture–meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
  • Ortega, G., & Ozyurek, A. (2020). Types of iconicity and combinatorial strategies distinguish semantic categories in silent gesture. Language and Cognition, 12(1), 84-113. doi:10.1017/langcog.2019.28.

    Abstract

    In this study we explore whether different types of iconic gestures (i.e., acting, drawing, representing) and their combinations are used systematically to distinguish between different semantic categories in production and comprehension. In Study 1, we elicited silent gestures from Mexican and Dutch participants to represent concepts from three semantic categories: actions, manipulable objects, and non-manipulable objects. Both groups favoured the acting strategy to represent actions and manipulable objects, while non-manipulable objects were represented through the drawing strategy. Actions elicited primarily single gestures whereas objects elicited combinations of different types of iconic gestures as well as pointing. In Study 2, a different group of participants were shown gestures from Study 1 and were asked to guess their meaning. Single-gesture depictions for actions were more accurately guessed than for objects. Objects represented through two-gesture combinations (e.g., acting + drawing) were more accurately guessed than objects represented with a single gesture. We suggest iconicity is exploited to make direct links with a referent, but when it lends itself to ambiguity, individuals resort to combinatorial structures to clarify the intended referent. Iconicity and the need to communicate a clear signal shape the structure of silent gestures, and this in turn supports comprehension.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2020). From hands to brains: How does human body talk, think and interact in face-to-face language use? In K. Truong, D. Heylen, & M. Czerwinski (Eds.), ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 1-2). New York, NY, USA: Association for Computing Machinery. doi:10.1145/3382507.3419442.
  • Paplu, S. H., Mishra, C., & Berns, K. (2020). Pseudo-randomization in automating robot behaviour during human-robot interaction. In 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 1-6). Institute of Electrical and Electronics Engineers. doi:10.1109/ICDL-EpiRob48136.2020.9278115.

    Abstract

    Automating robot behavior in specific situations is an active area of research. Several approaches in the robotics literature address automatic robot behavior; however, when it comes to humanoids or human-robot interaction in general, the area has been less explored. In this paper, a pseudo-randomization approach is introduced to automate the gestures and facial expressions of an interactive humanoid robot called ROBIN based on its mental state. A significant number of gestures and facial expressions have been implemented to give the robot more options for performing a relevant action or reaction based on visual stimuli. The robot displays noticeable differences in behaviour for the same stimuli perceived from an interaction partner. This slight autonomous behavioural change clearly shows a notion of automation in behaviour. The results from experimental scenarios and a human-centered evaluation of the system help validate the approach.

  • Pappa, I., St Pourcain, B., Benke, K., Cavadino, A., Hakulinen, C., Nivard, M. G., Nolte, I. M., Tiesler, C. M. T., Bakermans-Kranenburg, M. J., Davies, G. E., Evans, D. M., Geoffroy, M.-C., Grallert, H., Groen-Blokhuis, M. M., Hudziak, J. J., Kemp, J. P., Keltikangas-Järvinen, L., McMahon, G., Mileva-Seitz, V. R., Motazedi, E., Power, C., Raitakari, O. T., Ring, S. M., Rivadeneira, F., Rodriguez, A., Scheet, P. A., Seppälä, I., Snieder, H., Standl, M., Thiering, E., Timpson, N. J., Veenstra, R., Velders, F. P., Whitehouse, A. J. O., Smith, G. D., Heinrich, J., Hypponen, E., Lehtimäki, T., Middeldorp, C. M., Oldehinkel, A. J., Pennell, C. E., Boomsma, D. I., & Tiemeier, H. (2016). A genome-wide approach to children's aggressive behavior: The EAGLE consortium. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 171(5), 562-572. doi:10.1002/ajmg.b.32333.

    Abstract

    Individual differences in aggressive behavior emerge in early childhood and predict persisting behavioral problems and disorders. Studies of antisocial and severe aggression in adulthood indicate substantial underlying biology. However, little attention has been given to genome-wide approaches of aggressive behavior in children. We analyzed data from nine population-based studies and assessed aggressive behavior using well-validated parent-reported questionnaires. This is the largest sample exploring children's aggressive behavior to date (N = 18,988), with measures in two developmental stages (N = 15,668 early childhood and N = 16,311 middle childhood/early adolescence). First, we estimated the additive genetic variance of children's aggressive behavior based on genome-wide SNP information, using genome-wide complex trait analysis (GCTA). Second, genetic associations within each study were assessed using a quasi-Poisson regression approach, capturing the highly right-skewed distribution of aggressive behavior. Third, we performed meta-analyses of genome-wide associations for both the total age-mixed sample and the two developmental stages. Finally, we performed a gene-based test using the summary statistics of the total sample. GCTA quantified variance tagged by common SNPs (10–54%). The meta-analysis of the total sample identified one region in chromosome 2 (2p12) at near genome-wide significance (top SNP rs11126630, P = 5.30 × 10−8). The separate meta-analyses of the two developmental stages revealed suggestive evidence of association at the same locus. The gene-based analysis indicated association of variation within AVPR1A with aggressive behavior. We conclude that common variants at 2p12 show suggestive evidence for association with childhood aggression. Replication of these initial findings is needed, and further studies should clarify its biological meaning.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Peeters, D. (2020). Bilingual switching between languages and listeners: Insights from immersive virtual reality. Cognition, 195: 104107. doi:10.1016/j.cognition.2019.104107.

    Abstract

    Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.

    Additional information

    Supplementary data
  • Peeters, D. (2016). Processing consequences of onomatopoeic iconicity in spoken language comprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1632-1647). Austin, TX: Cognitive Science Society.

    Abstract

    Iconicity is a fundamental feature of human language. However, its processing consequences at the behavioral and neural level in spoken word comprehension are not well understood. The current paper presents the behavioral and electrophysiological outcome of an auditory lexical decision task in which native speakers of Dutch listened to onomatopoeic words and matched control words while their electroencephalogram was recorded. Behaviorally, onomatopoeic words were processed as quickly and accurately as words with an arbitrary mapping between form and meaning. Event-related potentials time-locked to word onset revealed a significant decrease in negative amplitude in the N2 and N400 components and a late positivity for onomatopoeic words in comparison to the control words. These findings advance our understanding of the temporal dynamics of iconic form-meaning mapping in spoken word comprehension and suggest interplay between the neural representations of real-world sounds and spoken words.
  • Peeters, D., & Ozyurek, A. (2016). This and that revisited: A social and multimodal approach to spatial demonstratives. Frontiers in Psychology, 7: 222. doi:10.3389/fpsyg.2016.00222.
  • Persson, J., Szalisznyó, K., Antoni, G., Wall, A., Fällmar, D., Zora, H., & Bodén, R. (2020). Phosphodiesterase 10A levels are related to striatal function in schizophrenia: a combined positron emission tomography and functional magnetic resonance imaging study. European Archives of Psychiatry and Clinical Neuroscience, 270(4), 451-459. doi:10.1007/s00406-019-01021-0.

    Abstract

    Pharmacological inhibition of phosphodiesterase 10A (PDE10A) is being investigated as a treatment option in schizophrenia. PDE10A acts postsynaptically on striatal dopamine signaling by regulating neuronal excitability through its inhibition of cyclic adenosine monophosphate (cAMP), and we recently found it to be reduced in schizophrenia compared to controls. Here, this finding of reduced PDE10A in schizophrenia was followed up in the same sample to investigate the effect of reduced striatal PDE10A on the neural and behavioral function of striatal and downstream basal ganglia regions. A positron emission tomography (PET) scan with the PDE10A ligand [11C]Lu AE92686 was performed, followed by a 6 min resting-state magnetic resonance imaging (MRI) scan in ten patients with schizophrenia. To assess the relationship between striatal function and neurophysiological and behavioral functioning, salience processing was assessed using a mismatch negativity paradigm, an auditory event-related electroencephalographic measure, episodic memory was assessed using the Rey auditory verbal learning test (RAVLT) and executive functioning using trail-making test B. Reduced striatal PDE10A was associated with increased amplitude of low-frequency fluctuations (ALFF) within the putamen and substantia nigra, respectively. Higher ALFF in the substantia nigra, in turn, was associated with lower episodic memory performance. The findings are in line with a role for PDE10A in striatal functioning, and suggest that reduced striatal PDE10A may contribute to cognitive symptoms in schizophrenia.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petras, K., Ten Oever, S., & Jansma, B. M. (2016). The effect of distance on moral engagement: Event related potentials and alpha power are sensitive to perspective in a virtual shooting task. Frontiers in Psychology, 6: 2008. doi:10.3389/fpsyg.2015.02008.

    Abstract

    In a shooting video game we investigated whether increased distance reduces moral conflict. We measured and analyzed the event related potential (ERP), including the N2 component, which has previously been linked to cognitive conflict from competing decision tendencies. In a modified Go/No-go task designed to trigger moral conflict, participants had to shoot suddenly appearing human-like avatars in a virtual reality scene. The scene was seen either from an ego perspective, with targets appearing directly in front of the participant, or from a bird's view, where targets were seen from above and more distant. To control for low-level visual features, we added a visually identical control condition, where the instruction to shoot was replaced by an instruction to detect. ERP waveforms showed differences between the two tasks as early as in the N1 time-range, with higher N1 amplitudes for the close perspective in the shoot task. Additionally, we found that pre-stimulus alpha power was significantly decreased in the ego view, compared to the bird's view, only for the shoot but not for the detect task. In the N2 time window, we observed main amplitude effects for response (No-go > Go) and distance (ego > bird perspective) but no interaction with task type (shoot vs. detect). We argue that the pre-stimulus and N1 effects can be explained by reduced attention and arousal in the distance condition when people are instructed to shoot. These results indicate reduced moral engagement for increased distance. The lack of interaction in the N2 across tasks suggests that at that time point response execution dominates. We discuss potential implications for real-life shooting situations, especially considering recent developments in drone shootings, which by definition involve a distant view.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech, with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Poletiek, F. H., & Olfers, K. J. F. (2016). Authentication by the crowd: How lay students identify the style of a 17th century artist. CODART e-Zine, 8. Retrieved from http://ezine.codart.nl/17/issue/57/artikel/19-21-june-madrid/?id=349#!/page/3.
  • Poletiek, F. H. (1998). De geest van de jury. Psychologie en Maatschappij, 4, 376-378.
  • Poletiek, F. H., Fitz, H., & Bocanegra, B. R. (2016). What baboons can (not) tell us about natural language grammars. Cognition, 151, 108-112. doi:10.1016/j.cognition.2015.04.016.

    Abstract

    Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low-level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference consistent with center-embedded sequences over other types of sequences. We argue that the baboons' response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.'s (2012) experiment shows that the baboons' behavior is driven by low-level mechanisms, it is not clear how the animal behavior reported bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low-level mechanisms, and (2) the baboons' behavior is driven by low-level stimulus-response learning, as Rey et al. propose. But is the second evidence for the first? We discuss in what ways this study can and cannot provide evidential value for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies to understand features of the human linguistic system.
  • Poort, E. D., Warren, J. E., & Rodd, J. M. (2016). Recent experience with cognates and interlingual homographs in one language affects subsequent processing in another language. Bilingualism: Language and Cognition, 19(1), 206-212. doi:10.1017/S1366728915000395.

    Abstract

    This experiment shows that recent experience in one language influences subsequent processing of the same word-forms in a different language. Dutch–English bilinguals read Dutch sentences containing Dutch–English cognates and interlingual homographs, which were presented again 16 minutes later in isolation in an English lexical decision task. Priming produced faster responses for the cognates but slower responses for the interlingual homographs. These results show that language switching can influence bilingual speakers at the level of individual words, and require models of bilingual word recognition (e.g., BIA+) to allow access to word meanings to be modulated by recent experience.
  • Postema, M., Carrion Castillo, A., Fisher, S. E., Vingerhoets, G., & Francks, C. (2020). The genetics of situs inversus without primary ciliary dyskinesia. Scientific Reports, 10: 3677. doi:10.1038/s41598-020-60589-z.

    Abstract

    Situs inversus (SI), a left-right mirror reversal of the visceral organs, can occur with recessive Primary Ciliary Dyskinesia (PCD). However, most people with SI do not have PCD, and the etiology of their condition remains poorly studied. We sequenced the genomes of 15 people with SI, of which six had PCD, as well as 15 controls. Subjects with non-PCD SI in this sample had an elevated rate of left-handedness (five out of nine), which suggested possible developmental mechanisms linking brain and body laterality. The six SI subjects with PCD all had likely recessive mutations in genes already known to cause PCD. Two non-PCD SI cases also had recessive mutations in known PCD genes, suggesting reduced penetrance for PCD in some SI cases. One non-PCD SI case had recessive mutations in PKD1L1, and another in CFAP52 (also known as WDR16). Both of these genes have previously been linked to SI without PCD. However, five of the nine non-PCD SI cases, including three of the left-handers in this dataset, had no obvious monogenic basis for their condition. Environmental influences, or possible random effects in early development, must be considered.

    Additional information

    Supplementary information
  • Poulsen, M.-E. (Ed.). (2020). The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    Published in September 2020 by the Max Planck Institute for Psycholinguistics to commemorate the arrival and the new beginning of the Jerome Bruner Library in Nijmegen.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Reply to Ravignani and Kotz: Physical impulses from upper-limb movements impact the respiratory–vocal system. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23225-23226. doi:10.1073/pnas.2015452117.
  • Pouw, W., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Augmenting instructional animations with a body analogy to help children learn about physical systems. Frontiers in Psychology, 7: 860. doi:10.3389/fpsyg.2016.00860.

    Abstract

    We investigated whether augmenting instructional animations with a body analogy (BA) would improve 10- to 13-year-old children’s learning about class-1 levers. Children with a lower level of general math skill who learned with an instructional animation that provided a BA of the physical system, showed higher accuracy on a lever problem-solving reaction time task than children studying the instructional animation without this BA. Additionally, learning with a BA led to a higher speed–accuracy trade-off during the transfer task for children with a lower math skill, which provided additional evidence that especially this group is likely to be affected by learning with a BA. However, overall accuracy and solving speed on the transfer task was not affected by learning with or without this BA. These results suggest that providing children with a BA during animation study provides a stepping-stone for understanding mechanical principles of a physical system, which may prove useful for instructional designers. Yet, because the BA does not seem effective for all children, nor for all tasks, the degree of effectiveness of body analogies should be studied further. Future research, we conclude, should be more sensitive to the necessary degree of analogous mapping between the body and physical systems, and whether this mapping is effective for reasoning about more complex instantiations of such physical systems.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. Proceedings of the National Academy of Sciences of the United States of America, 117(21), 11364-11367. doi:10.1073/pnas.2004163117.

    Abstract

    We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear but not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
  • Pouw, W., Eielts, C., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Does (non‐)meaningful sensori‐motor engagement promote learning with animated physical systems? Mind, Brain and Education, 10(2), 91-104. doi:10.1111/mbe.12105.

    Abstract

    Previous research indicates that sensori‐motor experience with physical systems can have a positive effect on learning. However, it is not clear whether this effect is caused by mere bodily engagement or the intrinsically meaningful information that such interaction affords in performing the learning task. We investigated (N = 74), through the use of a Wii Balance Board, whether different forms of physical engagement that were either meaningfully, non‐meaningfully, or minimally related to the learning content would be beneficial (or detrimental) to learning about the workings of seesaws from instructional animations. The results were inconclusive, indicating that motoric competency on lever problem solving did not significantly differ between conditions, nor were response speed and transfer performance affected. These findings suggest that adults' implicit and explicit knowledge about physical systems is stable and not easily affected by (contradictory) sensori‐motor experiences. Implications for embodied learning are discussed.
  • Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.

    Abstract

    Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory, and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines.
  • Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.

    Abstract

    Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.

    Additional information

    Link to Preprint on OSF
  • Pouw, W., & Hostetter, A. B. (2016). Gesture as predictive action. Reti, Saperi, Linguaggi: Italian Journal of Cognitive Sciences, 3, 57-80. doi:10.12832/83918.

    Abstract

    Two broad approaches have dominated the literature on the production of speech-accompanying gestures. On the one hand, there are approaches that aim to explain the origin of gestures by specifying the mental processes that give rise to them. On the other, there are approaches that aim to explain the cognitive function that gestures have for the gesturer or the listener. In the present paper we aim to reconcile both approaches in one single perspective that is informed by a recent sea change in cognitive science, namely, Predictive Processing Perspectives (PPP; Clark 2013b; 2015). We start with the idea put forth by the Gesture as Simulated Action (GSA) framework (Hostetter, Alibali 2008). Under this view, the mental processes that give rise to gesture are re-enactments of sensori-motor experiences (i.e., simulated actions). We show that such anticipatory sensori-motor states and the constraints put forth by the GSA framework can be understood as top-down kinesthetic predictions that function in a broader predictive machinery as proposed by PPP. By establishing this alignment, we aim to show how gestures come to fulfill a genuine cognitive function above and beyond the mental processes that give rise to gesture.
  • Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.

    Abstract

    We introduce applications of established methods in time-series and network analysis that we jointly apply here for the kinematic study of gesture ensembles. We define a gesture ensemble as the set of gestures produced during discourse by a single person or a group of persons. Here we are interested in how gestures kinematically relate to one another. We use a bivariate time-series analysis called dynamic time warping to assess how similar each gesture is to other gestures in the ensemble in terms of their velocity profiles (as well as studying multivariate cases with gesture velocity and speech amplitude envelope profiles). By relating each gesture event to all other gesture events produced in the ensemble, we obtain a weighted matrix that essentially represents a network of similarity relationships. We can therefore apply network analysis that can gauge, for example, how diverse or coherent certain gestures are with respect to the gesture ensemble. We believe these analyses promise to be of great value for gesture studies, as we can come to understand how low-level gesture features (kinematics of gesture) relate to the higher-order organizational structures present at the level of discourse.
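    The pipeline this abstract describes — pairwise dynamic time warping over velocity profiles, yielding a weighted matrix that can be analyzed as a network — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code: the function names, toy profiles, and the simple mean-distance "coherence" measure are assumptions for demonstration (the paper works with full motion-tracking data and richer network measures).

    ```python
    def dtw_distance(a, b):
        """Dynamic time warping distance between two 1-D velocity profiles."""
        n, m = len(a), len(b)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

    def distance_matrix(profiles):
        """Pairwise DTW distances over a gesture ensemble; inverting or
        normalizing these yields the weighted similarity network."""
        n = len(profiles)
        return [[dtw_distance(profiles[i], profiles[j]) for j in range(n)]
                for i in range(n)]

    def mean_distance(matrix, i):
        """Mean DTW distance of gesture i to the rest of the ensemble:
        low = kinematically typical, high = idiosyncratic (a crude stand-in
        for the network-level coherence measures the authors discuss)."""
        others = [matrix[i][j] for j in range(len(matrix)) if j != i]
        return sum(others) / len(others)

    # Toy ensemble: two identical velocity profiles and one outlier.
    profiles = [[0, 1, 2, 1, 0], [0, 1, 2, 1, 0], [5, 5, 5, 5, 5]]
    m = distance_matrix(profiles)
    ```

    On this toy ensemble, the two matching profiles have zero DTW distance to each other, and the flat outlier profile shows a higher mean distance to the rest — the kind of contrast the network analysis is meant to surface at scale.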

    Additional information

    Open Data OSF
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.

    Abstract

    The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms. However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the body. In the current preregistered study, movements with high physical impact affected phonation in line with gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
  • Pouw, W., Mavilidi, M.-F., Van Gog, T., & Paas, F. (2016). Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity. Cognitive Processing, 17, 269-277. doi:10.1007/s10339-016-0757-6.

    Abstract

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, less eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.
  • Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.

    Abstract

    There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
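    The kinematic peaks discussed in this abstract (e.g., moments of maximum speed) can in principle be recovered from motion-tracking position samples by finite differencing rather than "by eye". The sketch below illustrates that idea only; the function names and toy trajectory are hypothetical and are not taken from the authors' tutorial materials.

    ```python
    def speeds(positions, dt):
        """Finite-difference speed profile from 2-D position samples
        taken at a fixed sampling interval dt (in seconds)."""
        out = []
        for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
            out.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
        return out

    def peak_speed(positions, dt):
        """Sample index and value of the kinematic peak (maximum speed)."""
        s = speeds(positions, dt)
        i = max(range(len(s)), key=s.__getitem__)
        return i, s[i]

    # Toy gesture: the hand accelerates then decelerates along the x-axis.
    trajectory = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
    idx, v = peak_speed(trajectory, dt=1.0)
    ```

    Real pipelines would smooth the position signal before differencing (raw tracking data is noisy, so the naive difference amplifies jitter), which is one reason dedicated motion-tracking toolchains are compared in the paper.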
