Publications

  • Misersky, J., Majid, A., & Snijders, T. M. (2019). Grammatical gender in German influences how role-nouns are interpreted: Evidence from ERPs. Discourse Processes, 56(8), 643-654. doi:10.1080/0163853X.2018.1541382.

    Abstract

    Grammatically masculine role-nouns (e.g., Studenten-masc. ‘students’) can refer to men and women, but may favor an interpretation where only men are considered the referent. If true, this has implications for a society aiming to achieve equal representation in the workplace since, for example, job adverts use such role descriptions. To investigate the interpretation of role-nouns, the present ERP study assessed grammatical gender processing in German. Twenty participants read sentences where a role-noun (masculine or feminine) introduced a group of people, followed by a congruent (masculine–men, feminine–women) or incongruent (masculine–women, feminine–men) continuation. Both for feminine-men and masculine-women continuations a P600 (500 to 800 ms) was observed; another positivity was already present from 300 to 500 ms for feminine-men continuations, but critically not for masculine-women continuations. The results imply a male-biased rather than gender-neutral interpretation of the masculine—despite widespread usage of the masculine as a gender-neutral form—suggesting masculine forms are inadequate for representing genders equally.
  • Mitterer, H., Kim, S., & Cho, T. (2013). Compensation for complete assimilation in speech perception: The case of Korean labial-to-velar assimilation. Journal of Memory and Language, 69, 59-83. doi:10.1016/j.jml.2013.02.001.

    Abstract

    In connected speech, phonological assimilation to neighboring words can lead to pronunciation variants (e.g., "garden bench" → "gardem bench"). A large body of literature suggests that listeners use the phonetic context to reconstruct the intended word for assimilation types that often lead to incomplete assimilations (e.g., a pronunciation of "garden" that carries cues for both a labial [m] and an alveolar [n]). In the current paper, we show that a similar context effect is observed for an assimilation that is often complete, Korean labial-to-velar place assimilation. In contrast to the context effects for partial assimilations, however, the context effects seem to rely completely on listeners' experience with the assimilation pattern in their native language.
  • Mitterer, H., & Russell, K. (2013). How phonological reductions sometimes help the listener. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 977-984. doi:10.1037/a0029196.

    Abstract

    In speech production, high-frequency words are more likely than low-frequency words to be phonologically reduced. We tested in an eye-tracking experiment whether listeners can make use of this correlation between lexical frequency and phonological realization of words. Participants heard prefixed verbs in which the prefix was either fully produced or reduced. Simultaneously, they saw a high-frequency verb and a low-frequency verb with this prefix-plus 2 distractors-on a computer screen. Participants were more likely to look at the high-frequency verb when they heard a reduced prefix than when they heard a fully produced prefix. Listeners hence exploit the correlation of lexical frequency and phonological reduction and assume that a reduced prefix is more likely to belong to a high-frequency word. This shows that reductions do not necessarily burden the listener but may in fact have a communicative function, in line with functional theories of phonology.
  • Mitterer, H., & Reinisch, E. (2013). No delays in application of perceptual learning in speech recognition: Evidence from eye tracking. Journal of Memory and Language, 69(4), 527-545. doi:10.1016/j.jml.2013.07.002.

    Abstract

    Three eye-tracking experiments tested at what processing stage lexically-guided retuning of a fricative contrast affects perception. One group of participants heard an ambiguous fricative between /s/ and /f/ replace /s/ in s-final words, the other group heard the same ambiguous fricative replacing /f/ in f-final words. In a test phase, both groups of participants heard a range of ambiguous fricatives at the end of Dutch minimal pairs (e.g., roos-roof, ‘rose’-‘robbery’). Participants who heard the ambiguous fricative replacing /f/ during exposure chose at test the f-final words more often than the other participants. During this test-phase, eye-tracking data showed that the effect of exposure exerted itself as soon as it could possibly have occurred, 200 ms after the onset of the fricative. This was at the same time as the onset of the effect of the fricative itself, showing that the perception of the fricative is changed by perceptual learning at an early level. Results converged in a time-window analysis and a Jackknife procedure testing the time at which effects reached a given proportion of their maxima. This indicates that perceptual learning affects early stages of speech processing, and supports the conclusion that perceptual learning is indeed perceptual rather than post-perceptual.

  • Mitterer, H., Scharenborg, O., & McQueen, J. M. (2013). Phonological abstraction without phonemes in speech perception. Cognition, 129, 356-361. doi:10.1016/j.cognition.2013.07.011.

    Abstract

    Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.
  • Mitterer, H., & Müsseler, J. M. (2013). Regional accent variation in the shadowing task: Evidence for a loose perception-action coupling in speech. Attention, Perception & Psychophysics, 75, 557-575. doi:10.3758/s13414-012-0407-8.

    Abstract

    We investigated the relation between action and perception in speech processing, using the shadowing task, in which participants repeat words they hear. In support of a tight perception–action link, previous work has shown that phonetic details in the stimulus influence the shadowing response. On the other hand, latencies do not seem to suffer if stimulus and response differ in their articulatory properties. The present investigation tested how perception influences production when participants are confronted with regional variation. Results showed that participants often imitate a regional variant if it occurs in the stimulus set but tend to stick to their own variant if the stimuli are consistent; participants corrected regional variants only when forced or induced to do so by the experimental instructions. Articulatory stimulus–response differences do not lead to latency costs. These data indicate that speech perception does not necessarily recruit the production system.
  • Moisik, S. R. (2013). Harsh voice quality and its association with blackness in popular American media. Phonetica, 70(3), 193-215. doi:10.1159/000351059.

    Abstract

    Performers use various laryngeal settings to create voices for characters and personas they portray. Although some research demonstrates the sociophonetic associations of laryngeal voice quality, few studies have documented or examined the role of harsh voice quality, particularly with vibration of the epilaryngeal structures (growling). This article qualitatively examines phonetic properties of vocal performances in a corpus of popular American media and evaluates the association of voice qualities in these performances with representations of social identity and stereotype. In several cases, contrasting laryngeal states create sociophonetic contrast, and harsh voice quality is paired with the portrayal of racial stereotypes of black people. These cases indicate exaggerated emotional states and are associated with yelling/shouting modes of expression. Overall, however, the functioning of harsh voice quality as it occurs in the data is broader and may involve aggressive posturing, comedic inversion of aggressiveness, vocal pathology, and vocal homage.
  • Monaghan, P., & Fletcher, M. (2019). Do sound symbolism effects for written words relate to individual phonemes or to phoneme features? Language and Cognition, 11(2), 235-255. doi:10.1017/langcog.2019.20.

    Abstract

    The sound of words has been shown to relate to the meaning that the words denote, an effect that extends beyond morphological properties of the word. Studies of these sound-symbolic relations have described this iconicity in terms of individual phonemes, or alternatively due to acoustic properties (expressed in phonological features) relating to meaning. In this study, we investigated whether individual phonemes or phoneme features best accounted for iconicity effects. We tested 92 participants’ judgements about the appropriateness of 320 nonwords presented in written form, relating to 8 different semantic attributes. For all 8 attributes, individual phonemes fitted participants’ responses better than general phoneme features. These results challenge claims that sound-symbolic effects for visually presented words can access broad, cross-modal associations between sound and meaning; instead, the results indicate the operation of individual phoneme-to-meaning relations. Whether similar effects are found for nonwords presented auditorily remains an open question.
  • Monaghan, P., & Roberts, S. G. (2019). Cognitive influences in language evolution: Psycholinguistic predictors of loan word borrowing. Cognition, 186, 147-158. doi:10.1016/j.cognition.2019.02.007.

    Abstract

    Languages change due to social, cultural, and cognitive influences. In this paper, we provide an assessment of these cognitive influences on diachronic change in the vocabulary. Previously, tests of stability and change of vocabulary items have been conducted on small sets of words where diachronic change is imputed from cladistic studies. Here, we show for a substantially larger set of words that stability and change in terms of documented borrowings of words into English and into Dutch can be predicted by psycholinguistic properties of words that reflect their representational fidelity. We found that grammatical category, word length, age of acquisition, and frequency predict borrowing rates, but frequency has a non-linear relationship: Frequency correlates negatively with probability of borrowing for high-frequency words, but positively for low-frequency words. This borrowing evidence documents recent, observable diachronic change in the vocabulary, enabling us to distinguish between change associated with transmission during language acquisition and change due to innovations by proficient speakers.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. NeuroImage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing however may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Montero-Melis, G., & Jaeger, T. F. (2020). Changing expectations mediate adaptation in L2 production. Bilingualism: Language and Cognition, 23(3), 602-617. doi:10.1017/S1366728919000506.

    Abstract

    Native language (L1) processing draws on implicit expectations. An open question is whether non-native learners of a second language (L2) similarly draw on expectations, and whether these expectations are based on learners’ L1 or L2 knowledge. We approach this question by studying inverse preference effects on lexical encoding. L1 and L2 speakers of Spanish described motion events, while they were primed to express path, manner, or neither. In line with other work, we find that L1 speakers adapted more strongly after primes that are unexpected in their L1. For L2 speakers, adaptation depended on their L2 proficiency: The least proficient speakers exhibited the inverse preference effect on adaptation based on what was unexpected in their L1; but the more proficient speakers were, the more they exhibited inverse preference effects based on what was unexpected in the L2. We discuss implications for L1 transfer and L2 acquisition.
  • Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.

    Abstract

    In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.

  • Morgan, T. J. H., Acerbi, A., & Van Leeuwen, E. J. C. (2019). Copy-the-majority of instances or individuals? Two approaches to the majority and their consequences for conformist decision-making. PLoS One, 14(1): e0210748. doi:10.1371/journal.pone.0210748.

    Abstract

    Cultural evolution is the product of the psychological mechanisms that underlie individual decision making. One commonly studied learning mechanism is a disproportionate preference for majority opinions, known as conformist transmission. While most theoretical and experimental work approaches the majority in terms of the number of individuals that perform a behaviour or hold a belief, some recent experimental studies approach the majority in terms of the number of instances a behaviour is performed. Here, we use a mathematical model to show that disagreement between these two notions of the majority can arise when behavioural variants are performed at different rates, with different salience or in different contexts (variant overrepresentation) and when a subset of the population act as demonstrators to the whole population (model biases). We also show that because conformist transmission changes the distribution of behaviours in a population, how observers approach the majority can cause populations to diverge, and that this can happen even when the two approaches to the majority agree with regards to which behaviour is in the majority. We discuss these results in light of existing findings, ranging from political extremism on Twitter to studies of animal foraging behaviour. We conclude that the factors we considered (variant overrepresentation and model biases) are plausibly widespread. As such, it is important to understand how individuals approach the majority in order to understand the effects of majority influence in cultural evolution.
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). The effect of sociolinguistic factors on variation in the Kata Kolok lexicon. Asia-Pacific Language Variation, 6(1), 53-88. doi:10.1075/aplv.19009.mud.

    Abstract

    Sign languages can be categorized as shared sign languages or deaf community sign languages, depending on the context in which they emerge. It has been suggested that shared sign languages exhibit more variation in the expression of everyday concepts than deaf community sign languages (Meir, Israel, Sandler, Padden, & Aronoff, 2012). For deaf community sign languages, it has been shown that various sociolinguistic factors condition this variation. This study presents one of the first in-depth investigations of how sociolinguistic factors (deaf status, age, clan, gender and having a deaf family member) affect lexical variation in a shared sign language, using a picture description task in Kata Kolok. To study lexical variation in Kata Kolok, two methodologies are devised: the identification of signs by underlying iconic motivation and mapping, and a way to compare individual repertoires of signs by calculating the lexical distances between participants. Alongside presenting novel methodologies to study this type of sign language, we present preliminary evidence of sociolinguistic factors that may influence variation in the Kata Kolok lexicon.
  • Muhinyi, A., Hesketh, A., Stewart, A. J., & Rowland, C. F. (2020). Story choice matters for caregiver extra-textual talk during shared reading with preschoolers. Journal of Child Language, 47(3), 633-654. doi:10.1017/S0305000919000783.

    Abstract

    This study aimed to examine the influence of the complexity of the storybook on caregiver extra-textual talk (i.e., interactions beyond text reading) during shared reading with preschool-age children. Fifty-three mother–child dyads (3;00–4;11) were video-recorded sharing two ostensibly similar picture-books: a simple story (containing no false belief) and a complex story (containing a false belief central to the plot, which provided content that was more challenging for preschoolers to understand). Book-reading interactions were transcribed and coded. Results showed that the complex stories facilitated more extra-textual talk from mothers, and a higher quality of extra-textual talk (as indexed by linguistic richness and level of abstraction). Although the type of story did not affect the number of questions mothers posed, more elaborative follow-ups on children's responses were provided by mothers when sharing complex stories. Complex stories may facilitate more and linguistically richer caregiver extra-textual talk, having implications for preschoolers’ developing language abilities.
  • Mulder, K., Schreuder, R., & Dijkstra, T. (2013). Morphological family size effects in L1 and L2 processing: An electrophysiological study. Language and Cognitive Processes, 27, 1004-1035. doi:10.1080/01690965.2012.733013.

    Abstract

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and Dutch (L1) lexical decision tasks; in two other experiments, L1 and L2 go/no-go lexical decision tasks were performed while Event-Related Potentials (ERPs) were recorded. Two questions were addressed. First, is the ERP signal sensitive to the morphological productivity of words? Second, does nontarget language activation in L2 processing spread beyond the item itself, to the morphological family of the activated nontarget word? The two behavioural experiments both showed a facilitatory effect of Dutch Family Size, indicating that the morphological family in the L1 is activated regardless of language context. In the two ERP experiments, Family Size effects were found to modulate the N400 component. Less negative waveforms were observed for words with a high L1 Family Size compared to words with a low L1 Family Size in the N400 time window, in both the L1 and L2 task. In addition, these Family Size effects persisted in later time windows. The data are discussed in light of the Morphological Family Resonance Model (MFRM) of morphological processing and the BIA+ model.
  • Nakamoto, T., Suei, Y., Konishi, M., Kanda, T., Verdonschot, R. G., & Kakimoto, N. (2019). Abnormal positioning of the common carotid artery clinically diagnosed as a submandibular mass. Oral Radiology, 35(3), 331-334. doi:10.1007/s11282-018-0355-7.

    Abstract

    The common carotid artery (CCA) usually runs along the long axis of the neck, although it is occasionally found in an abnormal position or is displaced. We report a case of an 86-year-old woman in whom the CCA was identified in the submandibular area. The patient visited our clinic and reported soft tissue swelling in the right submandibular area. It resembled a tumor mass or a swollen lymph node. Computed tomography showed that it was the right CCA that had been bent forward and was running along the submandibular subcutaneous area. Ultrasonography verified the diagnosis. No other lesions were found on the diagnostic images. Consequently, the patient was diagnosed as having abnormal CCA positioning. Although this condition generally requires no treatment, it is important to follow up the abnormality with diagnostic imaging because of the risk of cerebrovascular disorders.
  • Nakamoto, T., Hatsuta, S., Yagi, S., Verdonschot, R. G., Taguchi, A., & Kakimoto, N. (2020). Computer-aided diagnosis system for osteoporosis based on quantitative evaluation of mandibular lower border porosity using panoramic radiographs. Dentomaxillofacial Radiology, 49(4): 20190481. doi:10.1259/dmfr.20190481.

    Abstract

    Objectives: A new computer-aided screening system for osteoporosis using panoramic radiographs was developed. The conventional system could detect porotic changes within the lower border of the mandible, but its severity could not be evaluated. Our aim was to enable the system to measure severity by implementing a linear bone resorption severity index (BRSI) based on the cortical bone shape.
    Methods: The participants were 68 females (>50 years) who underwent panoramic radiography and lumbar spine bone density measurements. The new system was designed to extract the lower border of the mandible as regions of interest and convert them into morphological skeleton line images. The total perimeter length of the skeleton lines was defined as the BRSI. Forty images were visually evaluated for the presence of cortical bone porosity. The correlation between the participants' visual evaluations and BRSI, and the optimal BRSI threshold value for the new system, were investigated through receiver operating characteristic analysis. The diagnostic performance of the new system was evaluated by comparing its results with lumbar bone density tests in 28 participants.
    Results: BRSI and lumbar bone density showed a strong negative correlation (p < 0.01). BRSI showed a strong correlation with visual evaluation. The new system showed high diagnostic efficacy with sensitivity of 90.9%, specificity of 64.7%, and accuracy of 75.0%.
    Conclusions: The new screening system is able to quantitatively evaluate mandibular cortical porosity. This allows for preventive screening for osteoporosis thereby enhancing clinical prospects.
  • Nakamoto, T., Taguchi, A., Verdonschot, R. G., & Kakimoto, N. (2019). Improvement of region of interest extraction and scanning method of computer-aided diagnosis system for osteoporosis using panoramic radiographs. Oral Radiology, 35(2), 143-151. doi:10.1007/s11282-018-0330-3.

    Abstract

    Objectives: Patients undergoing osteoporosis treatment benefit greatly from early detection. We previously developed a computer-aided diagnosis (CAD) system to identify osteoporosis using panoramic radiographs. However, the region of interest (ROI) was relatively small, and the method to select suitable ROIs was labor-intensive. This study aimed to expand the ROI and perform semi-automated extraction of ROIs. The diagnostic performance and operating time were also assessed.
    Methods: We used panoramic radiographs and skeletal bone mineral density data of 200 postmenopausal women. Using the reference point that we defined by averaging 100 panoramic images as the lower mandibular border under the mental foramen, a 400x100-pixel ROI was automatically extracted and divided into four 100x100-pixel blocks. Valid blocks were analyzed using program 1, which examined each block separately, and program 2, which divided the blocks into smaller segments and performed scans/analyses across blocks. Diagnostic performance was evaluated using another set of 100 panoramic images.
    Results: Most ROIs (97.0%) were correctly extracted. The operating time decreased to 51.4% for program 1 and to 69.3% for program 2. The sensitivity, specificity, and accuracy for identifying osteoporosis were 84.0%, 68.0%, and 72.0% for program 1 and 92.0%, 62.7%, and 70.0% for program 2, respectively. Compared with the previous conventional system, program 2 recorded a slightly higher sensitivity, although it occasionally also elicited false positives.
    Conclusions: Patients at risk for osteoporosis can be identified more rapidly using this new CAD system, which may contribute to earlier detection and intervention and improved medical care.
  • Nayernia, L., Van den Vijver, R., & Indefrey, P. (2019). The influence of orthography on phonemic knowledge: An experimental investigation on German and Persian. Journal of Psycholinguistic Research, 48(6), 1391-1406. doi:10.1007/s10936-019-09664-9.

    Abstract

    This study investigated whether the phonological representation of a word is modulated by its orthographic representation in case of a mismatch between the two representations. Such a mismatch is found in Persian, where short vowels are represented phonemically but not orthographically. Persian adult literates, Persian adult illiterates, and German adult literates were presented with two auditory tasks, an AX-discrimination task and a reversal task. We assumed that if orthographic representations influence phonological representations, Persian literates should perform worse than Persian illiterates or German literates on items with short vowels in these tasks. The results of the discrimination tasks showed that Persian literates and illiterates as well as German literates were approximately equally competent in discriminating short vowels in Persian words and pseudowords. Persian literates did not discriminate well between German words containing phonemes that differed only in vowel length. German literates performed relatively poorly in discriminating German homographic words that differed only in vowel length. Persian illiterates were unable to perform the reversal task in Persian. The results of the other two participant groups in the reversal task showed the predicted poorer performance of Persian literates on Persian items containing short vowels compared to items containing long vowels only. German literates did not show this effect in German. Our results suggest two distinct effects of orthography on phonemic representations: whereas the lack of orthographic representations seems to affect phonemic awareness, homography seems to affect the discriminability of phonemic representations.
  • Nazzi, T., & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5, 25-47. doi:10.1146/annurev-linguistics-011718-011919.

    Abstract

    All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association.
  • Nettle, D., Cronin, K. A., & Bateson, M. (2013). Responses of chimpanzees to cues of conspecific observation. Animal Behaviour, 86(3), 595-602. doi:10.1016/j.anbehav.2013.06.015.

    Abstract

    Recent evidence has shown that humans are remarkably sensitive to artificial cues of conspecific observation when making decisions with potential social consequences. Whether similar effects are found in other great apes has not yet been investigated. We carried out two experiments in which individual chimpanzees, Pan troglodytes, took items of food from an array in the presence of either an image of a large conspecific face or a scrambled control image. In experiment 1 we compared three versions of the face image varying in size and the amount of the face displayed. In experiment 2 we compared a fourth variant of the image with more prominent coloured eyes displayed closer to the focal chimpanzee. The chimpanzees did not look at the face images significantly more than at the control images in either experiment. Although there were trends for some individuals in each experiment to be slower to take high-value food items in the face conditions, these were not consistent or robust. We suggest that the extreme human sensitivity to cues of potential conspecific observation may not be shared with chimpanzees.
  • Newbury, D. F., Mari, F., Akha, E. S., MacDermot, K. D., Canitano, R., Monaco, A. P., Taylor, J. C., Renieri, A., Fisher, S. E., & Knight, S. J. L. (2013). Dual copy number variants involving 16p11 and 6q22 in a case of childhood apraxia of speech and pervasive developmental disorder. European Journal of Human Genetics, 21, 361-365. doi:10.1038/ejhg.2012.166.

    Abstract

    In this issue, Raca et al. present two cases of childhood apraxia of speech (CAS) arising from microdeletions of chromosome 16p11.2. They propose that comprehensive phenotypic profiling may assist in the delineation and classification of such cases. To complement this study, we would like to report on a third, unrelated, child who presents with CAS and a chromosome 16p11.2 heterozygous deletion. We use genetic data from this child and his family to illustrate how comprehensive genetic profiling may also assist in the characterisation of 16p11.2 microdeletion syndrome.
  • Niermann, H. C. M., Tyborowska, A., Cillessen, A. H. N., Van Donkelaar, M. M. J., Lammertink, F., Gunnar, M. R., Franke, B., Figner, B., & Roelofs, K. (2019). The relation between infant freezing and the development of internalizing symptoms in adolescence: A prospective longitudinal study. Developmental Science, 22(3): e12763. doi:10.1111/desc.12763.

    Abstract

    Given the long-lasting detrimental effects of internalizing symptoms, there is great need for detecting early risk markers. One promising marker is freezing behavior. Whereas initial freezing reactions are essential for coping with threat, prolonged freezing has been associated with internalizing psychopathology. However, it remains unknown whether early life alterations in freezing reactions predict changes in internalizing symptoms during adolescent development. In a longitudinal study (N = 116), we tested prospectively whether observed freezing in infancy predicted the development of internalizing symptoms from childhood through late adolescence (until age 17). Both longer and absent infant freezing behavior during a standard challenge (robot-confrontation task) were associated with internalizing symptoms in adolescence. Specifically, absent infant freezing predicted a relative increase in internalizing symptoms consistently across development from relatively low symptom levels in childhood to relatively high levels in late adolescence. Longer infant freezing also predicted a relative increase in internalizing symptoms, but only up until early adolescence. This latter effect was moderated by peer stress and was followed by a later decrease in internalizing symptoms. The findings suggest that early deviations in defensive freezing responses signal risk for internalizing symptoms and may constitute important markers in future stress vulnerability and resilience studies.
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
  • Nieuwland, M. S., Coopmans, C. W., & Sommers, R. P. (2019). Distinguishing old from new referents during discourse comprehension: Evidence from ERPs and oscillations. Frontiers in Human Neuroscience, 13: 398. doi:10.3389/fnhum.2019.00398.

    Abstract

    In this EEG study, we used pre-registered and exploratory ERP and time-frequency analyses to investigate the resolution of anaphoric and non-anaphoric noun phrases during discourse comprehension. Participants listened to story contexts that described two antecedents, and subsequently read a target sentence with a critical noun phrase that lexically matched one antecedent (‘old’), matched two antecedents (‘ambiguous’), partially matched one antecedent in terms of semantic features (‘partial-match’), or introduced another referent (non-anaphoric, ‘new’). After each target sentence, participants judged whether the noun referred back to an antecedent (i.e., an ‘old/new’ judgment), which was easiest for ambiguous nouns and hardest for partially matching nouns. The noun-elicited N400 ERP component demonstrated initial sensitivity to repetition and semantic overlap, corresponding to repetition and semantic priming effects, respectively. New and partially matching nouns both elicited a subsequent frontal positivity, which suggested that partially matching anaphors may have been processed as new nouns temporarily. ERPs in an even later time window and ERPs time-locked to sentence-final words suggested that new and partially matching nouns had different effects on comprehension, with partially matching nouns incurring additional processing costs up to the end of the sentence. In contrast to the ERP results, the time-frequency results primarily demonstrated sensitivity to noun repetition, and did not differentiate partially matching anaphors from new nouns. In sum, our results show the ERP and time-frequency effects of referent repetition during discourse comprehension, and demonstrate the potentially demanding nature of establishing the anaphoric meaning of a novel noun.
  • Nieuwland, M. S. (2019). Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neuroscience and Biobehavioral Reviews, 96, 367-400. doi:10.1016/j.neubiorev.2018.11.019.

    Abstract

    Current theories of language comprehension posit that readers and listeners routinely try to predict not only the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
  • Nieuwland, M. S., Arkhipova, Y., & Rodríguez-Gómez, P. (2020). Anticipating words during spoken discourse comprehension: A large-scale, pre-registered replication study using brain potentials. Cortex, 133, 1-36. doi:10.1016/j.cortex.2020.09.007.

    Abstract

    Numerous studies report brain potential evidence for the anticipation of specific words during language comprehension. In the most convincing demonstrations, highly predictable nouns exert an influence on processing even before they appear to a reader or listener, as indicated by the brain's neural response to a prenominal adjective or article when it mismatches the expectations about the upcoming noun. However, recent studies suggest that some well-known demonstrations of prediction may be hard to replicate. This could signal the use of data-contingent analysis, but might also mean that readers and listeners do not always use prediction-relevant information in the way that psycholinguistic theories typically suggest. To shed light on this issue, we performed a close replication of one of the best-cited ERP studies on word anticipation (Van Berkum, Brown, Zwitserlood, Kooijman & Hagoort, 2005; Experiment 1), in which participants listened to Dutch spoken mini-stories. In the original study, the marking of grammatical gender on pre-nominal adjectives (‘groot/grote’) elicited an early positivity when mismatching the gender of an unseen, highly predictable noun, compared to matching gender. The current pre-registered study involved that same manipulation, but used a novel set of materials twice the size of the original set, an increased sample size (N = 187), and Bayesian mixed-effects model analyses that better accounted for known sources of variance than the original. In our study, mismatching gender elicited more negative voltage than matching gender at posterior electrodes. However, this N400-like effect was small in size and lacked support from Bayes Factors. In contrast, we successfully replicated the original's noun effects. While our results yielded some support for prediction, they do not support the Van Berkum et al. effect and highlight the risks associated with commonly employed data-contingent analyses and small sample sizes. Our results also raise the question whether Dutch listeners reliably or consistently use adjectival inflection information to inform their noun predictions.
  • Nieuwland, M. S. (2013). “If a lion could speak …”: Online sensitivity to propositional truth-value of unrealistic counterfactual sentences. Journal of Memory and Language, 68(1), 54-67. doi:10.1016/j.jml.2012.08.003.

    Abstract

    People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual combinations and infer their consequences as the sentence unfolds in time (e.g., “If dogs had gills…”). Experiment 1 established that without this premise, described consequences (e.g., “Dobermans would breathe under water …”) elicited larger N400 responses than real-world true sentences. Incorporation of the counterfactual premise in Experiment 2 generated similar N400 effects of propositional truth-value in counterfactual and real-world sentences, suggesting that the counterfactual context eliminated the interpretive problems posed by locally anomalous sentences. This result did not depend on cloze probability of the sentences. In contrast to earlier findings regarding online comprehension of logical operators and counterfactuals, these results show that ongoing processing can be directly impacted by propositional truth-value, even that of unrealistic counterfactuals.
  • Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.

    Abstract

    Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning.
  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2013). Event-related brain potential evidence for animacy processing asymmetries during sentence comprehension. Brain and Language, 126(2), 151-158. doi:10.1016/j.bandl.2013.04.005.

    Abstract

    The animacy distinction is deeply rooted in the language faculty. A key example is differential object marking, the phenomenon where animate sentential objects receive specific marking. We used event-related potentials to examine the neural processing consequences of case-marking violations on animate and inanimate direct objects in Spanish. Inanimate objects with incorrect prepositional case marker ‘a’ (‘al suelo’) elicited a P600 effect compared to unmarked objects, consistent with previous literature. However, animate objects without the required prepositional case marker (‘el obispo’) only elicited an N400 effect compared to marked objects. This novel finding, an exclusive N400 modulation by a straightforward grammatical rule violation, does not follow from extant neurocognitive models of sentence processing, and mirrors unexpected “semantic P600” effects for thematically problematic sentences. These results may reflect animacy asymmetry in competition for argument prominence: following the article, thematic interpretation difficulties are elicited only by unexpectedly animate objects.
  • Nieuwland, M. S., & Kazanina, N. (2020). The neural basis of linguistic prediction: Introduction to the special issue. Neuropsychologia, 146: 107532. doi:10.1016/j.neuropsychologia.2020.107532.
  • Nievergelt, C. M., Maihofer, A. X., Klengel, T., Atkinson, E. G., Chen, C.-Y., Choi, K. W., Coleman, J. R. I., Dalvie, S., Duncan, L. E., Gelernter, J., Levey, D. F., Logue, M. W., Polimanti, R., Provost, A. C., Ratanatharathorn, A., Stein, M. B., Torres, K., Aiello, A. E., Almli, L. M., Amstadter, A. B., Andersen, S. B., Andreassen, O. A., Arbisi, P. A., Ashley-Koch, A. E., Austin, S. B., Avdibegovic, E., Babić, D., Bækvad-Hansen, M., Baker, D. G., Beckham, J. C., Bierut, L. J., Bisson, J. I., Boks, M. P., Bolger, E. A., Børglum, A. D., Bradley, B., Brashear, M., Breen, G., Bryant, R. A., Bustamante, A. C., Bybjerg-Grauholm, J., Calabrese, J. R., Caldas-de-Almeida, J. M., Dale, A. M., Daly, M. J., Daskalakis, N. P., Deckert, J., Delahanty, D. L., Dennis, M. F., Disner, S. G., Domschke, K., Dzubur-Kulenovic, A., Erbes, C. R., Evans, A., Farrer, L. A., Feeny, N. C., Flory, J. D., Forbes, D., Franz, C. E., Galea, S., Garrett, M. E., Gelaye, B., Geuze, E., Gillespie, C., Uka, A. G., Gordon, S. D., Guffanti, G., Hammamieh, R., Harnal, S., Hauser, M. A., Heath, A. C., Hemmings, S. M. J., Hougaard, D. M., Jakovljevic, M., Jett, M., Johnson, E. O., Jones, I., Jovanovic, T., Qin, X.-J., Junglen, A. G., Karstoft, K.-I., Kaufman, M. L., Kessler, R. C., Khan, A., Kimbrel, N. A., King, A. P., Koen, N., Kranzler, H. R., Kremen, W. S., Lawford, B. R., Lebois, L. A. M., Lewis, C. E., Linnstaedt, S. D., Lori, A., Lugonja, B., Luykx, J. J., Lyons, M. J., Maples-Keller, J., Marmar, C., Martin, A. R., Martin, N. G., Maurer, D., Mavissakalian, M. R., McFarlane, A., McGlinchey, R. E., McLaughlin, K. A., McLean, S. A., McLeay, S., Mehta, D., Milberg, W. P., Miller, M. W., Morey, R. A., Morris, C. P., Mors, O., Mortensen, P. B., Neale, B. M., Nelson, E. C., Nordentoft, M., Norman, S. B., O’Donnell, M., Orcutt, H. K., Panizzon, M. S., Peters, E. S., Peterson, A. L., Peverill, M., Pietrzak, R. H., Polusny, M. A., Rice, J. P., Ripke, S., Risbrough, V. B., Roberts, A. L., Rothbaum, A. O., Rothbaum, B. O., Roy-Byrne, P., Ruggiero, K., Rung, A., Rutten, B. P. F., Saccone, N. L., Sanchez, S. E., Schijven, D., Seedat, S., Seligowski, A. V., Seng, J. S., Sheerin, C. M., Silove, D., Smith, A. K., Smoller, J. W., Sponheim, S. R., Stein, D. J., Stevens, J. S., Sumner, J. A., Teicher, M. H., Thompson, W. K., Trapido, E., Uddin, M., Ursano, R. J., van den Heuvel, L. L., Van Hooff, M., Vermetten, E., Vinkers, C. H., Voisey, J., Wang, Y., Wang, Z., Werge, T., Williams, M. A., Williamson, D. E., Winternitz, S., Wolf, C., Wolf, E. J., Wolff, J. D., Yehuda, R., Young, R. M., Young, K. A., Zhao, H., Zoellner, L. A., Liberzon, I., Ressler, K. J., Haas, M., & Koenen, K. C. (2019). International meta-analysis of PTSD genome-wide association studies identifies sex- and ancestry-specific genetic risk loci. Nature Communications, 10(1): 4558. doi:10.1038/s41467-019-12576-w.

    Abstract

    The risk of posttraumatic stress disorder (PTSD) following trauma is heritable, but robust common variants have yet to be identified. In a multi-ethnic cohort including over 30,000 PTSD cases and 170,000 controls we conduct a genome-wide association study of PTSD. We demonstrate SNP-based heritability estimates of 5–20%, varying by sex. Three genome-wide significant loci are identified, 2 in European and 1 in African-ancestry analyses. Analyses stratified by sex implicate 3 additional loci in men. Along with other novel genes and non-coding RNAs, a Parkinson’s disease gene involved in dopamine regulation, PARK2, is associated with PTSD. Finally, we demonstrate that polygenic risk for PTSD is significantly predictive of re-experiencing symptoms in the Million Veteran Program dataset, although specific loci did not replicate. These results demonstrate the role of genetic variation in the biology of risk for PTSD and highlight the necessity of conducting sex-stratified analyses and expanding GWAS beyond European ancestry populations.

  • Noble, C., Cameron-Faulkner, T., Jessop, A., Coates, A., Sawyer, H., Taylor-Ims, R., & Rowland, C. F. (2020). The impact of interactive shared book reading on children's language skills: A randomized controlled trial. Journal of Speech, Language, and Hearing Research, 63(6), 1878-1897. doi:10.1044/2020_JSLHR-19-00288.

    Abstract

    Purpose: Research has indicated that interactive shared book reading can support a wide range of early language skills and that children who are read to regularly in the early years learn language faster, enter school with a larger vocabulary, and become more successful readers at school. Despite the large volume of research suggesting interactive shared reading is beneficial for language development, two fundamental issues remain outstanding: whether shared book reading interventions are equally effective (a) for children from all socioeconomic backgrounds and (b) for a range of language skills.
    Method: To address these issues, we conducted a randomized controlled trial to investigate the effects of two 6-week interactive shared reading interventions on a range of language skills in children across the socioeconomic spectrum. One hundred and fifty children aged between 2;6 and 3;0 (years;months) were randomly assigned to one of three conditions: a pause reading, a dialogic reading, or an active shared reading control condition.
    Results: The findings indicated that the interventions were effective at changing caregiver reading behaviors. However, the interventions did not boost children’s language skills over and above the effect of an active reading control condition. There were also no effects of socioeconomic status.
    Conclusion: This randomized controlled trial showed that caregivers from all socioeconomic backgrounds successfully adopted an interactive shared reading style. However, while the interventions were effective at increasing caregivers’ use of interactive shared book reading behaviors, this did not have a significant impact on the children’s language skills. The findings are discussed in terms of practical implications and future research.

  • Noble, C., Sala, G., Peter, M., Lingwood, J., Rowland, C. F., Gobet, F., & Pine, J. (2019). The impact of shared book reading on children's language skills: A meta-analysis. Educational Research Review, 28: 100290. doi:10.1016/j.edurev.2019.100290.

    Abstract

    Shared book reading is thought to have a positive impact on young children's language development, with shared reading interventions often run in an attempt to boost children's language skills. However, despite the volume of research in this area, a number of issues remain outstanding. The current meta-analysis explored whether shared reading interventions are equally effective (a) across a range of study designs; (b) across a range of different outcome variables; and (c) for children from different SES groups. It also explored the potentially moderating effects of intervention duration, child age, use of dialogic reading techniques, person delivering the intervention and mode of intervention delivery.

    Our results show that, while there is an effect of shared reading on language development, this effect is smaller than reported in previous meta-analyses (g = 0.194, p = .002). They also show that this effect is moderated by the type of control group used and is negligible in studies with active control groups (g = 0.028, p = .703). Finally, they show no significant effects of differences in outcome variable (ps ≥ .286), socio-economic status (p = .658), or any of our other potential moderators (ps ≥ .077), and non-significant effects for studies with follow-ups (g = 0.139, p = .200). On the basis of these results, we make a number of recommendations for researchers and educators about the design and implementation of future shared reading interventions.

  • Nomi, J. S., Frances, C., Nguyen, M. T., Bastidas, S., & Troup, L. J. (2013). Interaction of threat expressions and eye gaze: an event-related potential study. NeuroReport, 24, 813-817. doi:10.1097/WNR.0b013e3283647682.

    Abstract

    The current study examined the interaction of fearful, angry, happy, and neutral expressions with left, straight, and right eye gaze directions. Human participants viewed faces consisting of various expression and eye gaze combinations while event-related potential (ERP) data were collected. The results showed that angry expressions modulated the mean amplitude of the P1, whereas fearful and happy expressions modulated the mean amplitude of the N170. No influence of eye gaze on mean amplitudes for the P1 and N170 emerged. Fearful, angry, and happy expressions began to interact with eye gaze to influence mean amplitudes in the time window of 200–400 ms. The results suggest that early processing of expression influences ERPs independent of eye gaze, whereas expression and gaze interact to influence later ERPs.
  • Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.

    Abstract

    There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
  • Ohlerth, A.-K., Valentin, A., Vergani, F., Ashkan, K., & Bastiaanse, R. (2020). The verb and noun test for peri-operative testing (VAN-POP): Standardized language tests for navigated transcranial magnetic stimulation and direct electrical stimulation. Acta Neurochirurgica, (2), 397-406. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background

    Protocols for intraoperative language mapping with direct electrical stimulation (DES) often include various language tasks triggering both nouns and verbs in sentences. Such protocols are not readily available for navigated transcranial magnetic stimulation (nTMS), where only single word object naming is generally used. Here, we present the development, norming, and standardization of the verb and noun test for peri-operative testing (VAN-POP) that measures language skills more extensively.
    Methods

    The VAN-POP tests noun and verb retrieval in sentence context. Items are marked and balanced for several linguistic factors known to influence word retrieval. The VAN-POP was administered in English, German, and Dutch under conditions that are used for nTMS and DES paradigms. For each language, 30 speakers were tested.
    Results

    At least 50 items per task per language were named fluently and reached a high naming agreement.
    Conclusion

    The protocol proved to be suitable for pre- and intraoperative language mapping with nTMS and DES.
  • O’Meara, C., Kung, S. S., & Majid, A. (2019). The challenge of olfactory ideophones: Reconsidering ineffability from the Totonac-Tepehua perspective. International Journal of American Linguistics, 85(2), 173-212. doi:10.1086/701801.

    Abstract

    Olfactory impressions are said to be ineffable, but little systematic exploration has been done to substantiate this. We explored olfactory language in Huehuetla Tepehua—a Totonac-Tepehua language spoken in Hidalgo, Mexico—which has a large inventory of ideophones, words with sound-symbolic properties used to describe perceptuomotor experiences. A multi-method study found Huehuetla Tepehua has 45 olfactory ideophones, illustrating intriguing sound-symbolic alternation patterns. Elaboration in the olfactory domain is not unique to this language; related Totonac-Tepehua languages also have impressive smell lexicons. Comparison across these languages shows olfactory and gustatory terms overlap in interesting ways, mirroring the physiology of smelling and tasting. However, although cognate taste terms are formally similar, olfactory terms are less so. We suggest the relative instability of smell vocabulary in comparison with those of taste likely results from the more varied olfactory experiences caused by the mutability of smells in different environments.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.

    Abstract

    The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger overlap with gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge to acquire new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of ‘manual cognates’ that help non-signing adults to break into a new language at first exposure.

  • Ortega, G., Ozyurek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403-415. doi:10.1037/xlm0000729.

    Abstract

    When learning a second spoken language, cognates, words overlapping in form and meaning with one’s native language, help break into the language one wishes to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience.
  • Ortega, G., & Ozyurek, A. (2020). Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behavior Research Methods, 52, 51-67. doi:10.3758/s13428-019-01204-6.

    Abstract

    An unprecedented number of empirical studies have shown that iconic gestures—those that mimic the sensorimotor attributes of a referent—contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture–meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
  • Ortega, G., & Ozyurek, A. (2020). Types of iconicity and combinatorial strategies distinguish semantic categories in silent gesture. Language and Cognition, 12(1), 84-113. doi:10.1017/langcog.2019.28.

    Abstract

    In this study we explore whether different types of iconic gestures (i.e., acting, drawing, representing) and their combinations are used systematically to distinguish between different semantic categories in production and comprehension. In Study 1, we elicited silent gestures from Mexican and Dutch participants to represent concepts from three semantic categories: actions, manipulable objects, and non-manipulable objects. Both groups favoured the acting strategy to represent actions and manipulable objects, while non-manipulable objects were represented through the drawing strategy. Actions elicited primarily single gestures whereas objects elicited combinations of different types of iconic gestures as well as pointing. In Study 2, a different group of participants were shown gestures from Study 1 and were asked to guess their meaning. Single-gesture depictions for actions were more accurately guessed than those for objects. Objects represented through two-gesture combinations (e.g., acting + drawing) were more accurately guessed than objects represented with a single gesture. We suggest iconicity is exploited to make direct links with a referent, but when it lends itself to ambiguity, individuals resort to combinatorial structures to clarify the intended referent. Iconicity and the need to communicate a clear signal shape the structure of silent gestures, and this in turn supports comprehension.
  • Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.

    Abstract

    Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect, but, crucially, visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3), however, the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) had only a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.

    Abstract

    Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.
  • Otake, T., & Cutler, A. (2013). Lexical selection in action: Evidence from spontaneous punning. Language and Speech, 56(4), 555-573. doi:10.1177/0023830913478933.

    Abstract

    Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures.
  • Ozturk, O., Shayan, S., Liszkowski, U., & Majid, A. (2013). Language is not necessary for color categories. Developmental Science, 16, 111-115. doi:10.1111/desc.12008.

    Abstract

    The origin of color categories is under debate. Some researchers argue that color categories are linguistically constructed, while others claim they have a pre-linguistic, and possibly even innate, basis. Although there is some evidence that 4–6-month-old infants respond categorically to color, these empirical results have been challenged in recent years. First, it has been claimed that previous demonstrations of color categories in infants may reflect color preferences instead. Second, and more seriously, other labs have reported failing to replicate the basic findings at all. In the current study we used eye-tracking to test 8-month-old infants’ categorical perception of a previously attested color boundary (green–blue) and an additional color boundary (blue–purple). Our results show that infants are faster and more accurate at fixating targets when they come from a different color category than when from the same category (even though the chromatic separation sizes were equated). This is the case for both blue–green and blue–purple. Our findings provide independent evidence for the existence of color categories in pre-linguistic infants, and suggest that categorical perception of color can occur without color language.
  • Peeters, D. (2020). Bilingual switching between languages and listeners: Insights from immersive virtual reality. Cognition, 195: 104107. doi:10.1016/j.cognition.2019.104107.

    Abstract

    Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.

    Additional information

    Supplementary data
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g. speech) in isolation.
  • Perlman, M., & Gibbs, R. W. (2013). Pantomimic gestures reveal the sensorimotor imagery of a human-fostered gorilla. Journal of Mental Imagery, 37(3/4), 73-96.

    Abstract

    This article describes the use of pantomimic gestures by the human-fostered gorilla, Koko, as evidence of her sensorimotor imagery. We present five video-recorded instances of Koko's spontaneously created pantomimes during her interactions with human caregivers. The precise movements and context of each gesture are described in detail to examine how it functions to communicate Koko's requests for various objects and actions to be performed. The analysis assesses the active "iconicity" of each targeted gesture and examines the underlying elements of sensorimotor imagery incorporated by the gesture. We suggest that Koko's pantomimes reflect an imaginative understanding of different actions, objects, and events that is similar in important respects to humans' embodied imagery capabilities.
  • Persson, J., Szalisznyó, K., Antoni, G., Wall, A., Fällmar, D., Zora, H., & Bodén, R. (2020). Phosphodiesterase 10A levels are related to striatal function in schizophrenia: a combined positron emission tomography and functional magnetic resonance imaging study. European Archives of Psychiatry and Clinical Neuroscience, 270(4), 451-459. doi:10.1007/s00406-019-01021-0.

    Abstract

    Pharmacological inhibition of phosphodiesterase 10A (PDE10A) is being investigated as a treatment option in schizophrenia. PDE10A acts postsynaptically on striatal dopamine signaling by regulating neuronal excitability through its inhibition of cyclic adenosine monophosphate (cAMP), and we recently found it to be reduced in schizophrenia compared to controls. Here, this finding of reduced PDE10A in schizophrenia was followed up in the same sample to investigate the effect of reduced striatal PDE10A on the neural and behavioral function of striatal and downstream basal ganglia regions. A positron emission tomography (PET) scan with the PDE10A ligand [11C]Lu AE92686 was performed, followed by a 6 min resting-state magnetic resonance imaging (MRI) scan in ten patients with schizophrenia. To assess the relationship between striatal function and neurophysiological and behavioral functioning, salience processing was assessed using a mismatch negativity paradigm, an auditory event-related electroencephalographic measure, episodic memory was assessed using the Rey auditory verbal learning test (RAVLT) and executive functioning using trail-making test B. Reduced striatal PDE10A was associated with increased amplitude of low-frequency fluctuations (ALFF) within the putamen and substantia nigra, respectively. Higher ALFF in the substantia nigra, in turn, was associated with lower episodic memory performance. The findings are in line with a role for PDE10A in striatal functioning, and suggest that reduced striatal PDE10A may contribute to cognitive symptoms in schizophrenia.
  • Peter, M. S., & Rowland, C. F. (2019). Aligning developmental and processing accounts of implicit and statistical learning. Topics in Cognitive Science, 11, 555-572. doi:10.1111/tops.12396.

    Abstract

    A long‐standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies used by the statistical learning literature nor the artificial grammar learning tasks studies from the implicit learning literature can be used to fully explain how children's syntax becomes adult‐like. In this work, we consider an alternative explanation—that children use error‐based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual‐path model, which incorporates properties from both statistical and implicit learning, and offers an explanation for syntax learning and structural priming using a common error‐based learning mechanism. We then turn our attention to future directions for the field, here suggesting how structural priming might inform the statistical learning and implicit learning literature on the nature of the learning mechanism.
  • Peter, M. S., Durrant, S., Jessop, A., Bidgood, A., Pine, J. M., & Rowland, C. F. (2019). Does speed of processing or vocabulary size predict later language growth in toddlers? Cognitive Psychology, 115: 101238. doi:10.1016/j.cogpsych.2019.101238.

    Abstract

    It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UK-CDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size, though this relationship changed over time and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size, and subsequent vocabulary growth.
  • Petras, K., Ten Oever, S., Jacobs, C., & Goffaux, V. (2019). Coarse-to-fine information integration in human vision. NeuroImage, 186, 103-112. doi:10.1016/j.neuroimage.2018.10.086.

    Abstract

    Coarse-to-fine theories of vision propose that the coarse information carried by the low spatial frequencies (LSF) of visual input guides the integration of finer, high spatial frequency (HSF) detail. Whether and how LSF modulates HSF processing in naturalistic broad-band stimuli is still unclear. Here we used multivariate decoding of EEG signals to separate the respective contribution of LSF and HSF to the neural response evoked by broad-band images. Participants viewed images of human faces, monkey faces and phase-scrambled versions that were either broad-band or filtered to contain LSF or HSF. We trained classifiers on EEG scalp-patterns evoked by filtered scrambled stimuli and evaluated the derived models on broad-band scrambled and intact trials. We found reduced HSF contribution when LSF was informative towards image content, indicating that coarse information does guide the processing of fine detail, in line with coarse-to-fine theories. We discuss the potential cortical mechanisms underlying such coarse-to-fine feedback.

    Additional information

    Supplementary figures
  • Petzell, M., & Hammarström, H. (2013). Grammatical and lexical subclassification of the Morogoro region, Tanzania. Nordic journal of African Studies, 22(3), 129-157.

    Abstract

    This article discusses lexical and grammatical comparison and sub-grouping in a set of closely related Bantu language varieties in the Morogoro region, Tanzania. The Greater Ruvu Bantu language varieties include Kagulu [G12], Zigua [G31], Kwere [G32], Zalamo [G33], Nguu [G34], Luguru [G35], Kami [G36] and Kutu [G37]. The comparison is based on 27 morphophonological and morphosyntactic parameters, supplemented by a lexicon of 500 items. In order to determine the relationships and boundaries between the varieties, grammatical phenomena constitute a valuable complement to counting the number of identical words or cognates. We have used automated cognate judgment methods, as well as manual cognate judgments based on older sources, in order to compare lexical data. Finally, we have included speaker attitudes (i.e. self-assessment of linguistic similarity) in an attempt to map whether the languages that are perceived by speakers as being linguistically similar really are closely related.
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
  • Pijls, F., & Kempen, G. (1986). Een psycholinguïstisch model voor grammatische samentrekking [A psycholinguistic model of grammatical ellipsis]. De Nieuwe Taalgids, 79, 217-234.
  • Plomp, R., & Levelt, W. J. M. (1965). Tonal consonance and critical bandwidth. Journal of the Acoustical Society of America, 38, 548-560. doi:10.1121/1.1909741.

    Abstract

    Firstly, theories are reviewed on the explanation of tonal consonance as the singular nature of tone intervals with frequency ratios corresponding with small integer numbers. An evaluation of these explanations in the light of some experimental studies supports the hypothesis, as promoted by von Helmholtz, that the difference between consonant and dissonant intervals is related to beats of adjacent partials. This relation was studied more fully by experiments in which subjects had to judge simple-tone intervals as a function of test frequency and interval width. The results may be considered as a modification of von Helmholtz's conception and indicate that, as a function of frequency, the transition range between consonant and dissonant intervals is related to critical bandwidth. Simple-tone intervals are evaluated as consonant for frequency differences exceeding this bandwidth, whereas the most dissonant intervals correspond with frequency differences of about a quarter of this bandwidth. On the basis of these results, some properties of consonant intervals consisting of complex tones are explained. To answer the question whether critical bandwidth also plays a rôle in music, the chords of two compositions (parts of a trio sonata of J. S. Bach and of a string quartet of A. Dvorák) were analyzed by computing interval distributions as a function of frequency and number of harmonics taken into account. The results strongly suggest that, indeed, critical bandwidth plays an important rôle in music: for a number of harmonics representative for musical instruments, the "density" of simultaneous partials alters as a function of frequency in the same way as critical bandwidth does.
  • Poort, E. D., & Rodd, J. M. (2019). A database of Dutch–English cognates, interlingual homographs and translation equivalents. Journal of Cognition, 2(1): 15. doi:10.5334/joc.67.

    Abstract

    To investigate the structure of the bilingual mental lexicon, researchers in the field of bilingualism often use words that exist in multiple languages: cognates (which have the same meaning) and interlingual homographs (which have a different meaning). A high proportion of these studies have investigated language processing in Dutch–English bilinguals. Despite the abundance of research using such materials, few studies exist that have validated such materials. We conducted two rating experiments in which Dutch–English bilinguals rated the meaning, spelling and pronunciation similarity of pairs of Dutch and English words. On the basis of these results, we present a new database of Dutch–English identical cognates (e.g. “wolf”–“wolf”; n = 58), non-identical cognates (e.g. “kat”–“cat”; n = 74), interlingual homographs (e.g. “angel”–“angel”; n = 72) and translation equivalents (e.g. “wortel”–“carrot”; n = 78). The database can be accessed at http://osf.io/tcdxb/.

    Additional information

    database
  • Poort, E. D., & Rodd, J. M. (2019). Towards a distributed connectionist account of cognates and interlingual homographs: Evidence from semantic relatedness tasks. PeerJ, 7: e6725. doi:10.7717/peerj.6725.

    Abstract

    Background

    Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult—if not impossible—to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments.
    Methods

    In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes prior to completing the English semantic relatedness task.
    Results

    In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms only for the bilinguals, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had an opposite effect on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task.
    Conclusion

    After comparing our results to studies in both the bilingual and monolingual domain, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case.
  • Postema, M., De Marco, M., Colato, E., & Venneri, A. (2019). A study of within-subject reliability of the brain’s default-mode network. Magnetic Resonance Materials in Physics, Biology and Medicine, 32(3), 391-405. doi:10.1007/s10334-018-00732-0.

    Abstract

    Objective

    Resting-state functional magnetic resonance imaging (fMRI) is a promising technique for the study of Alzheimer’s disease (AD). This study aimed to examine the short-term reliability of the default-mode network (DMN), one of the main haemodynamic patterns of the brain.
    Materials and methods

    Using a 1.5 T Philips Achieva scanner, two consecutive resting-state fMRI runs were acquired on 69 healthy adults, 62 patients with mild cognitive impairment (MCI) due to AD, and 28 patients with AD dementia. The anterior and posterior DMN and, as control, the visual-processing network (VPN) were computed using two different methodologies: connectivity of predetermined seeds (theory-driven) and dual regression (data-driven). Divergence and convergence in network strength and topography were calculated with paired t tests, global correlation coefficients, voxel-based correlation maps, and indices of reliability.
    Results

    No topographical differences were found in any of the networks. High correlations and reliability were found in the posterior DMN of healthy adults and MCI patients. Lower reliability was found in the anterior DMN and in the VPN, and in the posterior DMN of dementia patients.
    Discussion

    Strength and topography of the posterior DMN appear relatively stable and reliable over a short-term period of acquisition but with some degree of variability across clinical samples.
  • Postema, M., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Busatto Filho, G., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Di Martino, A., Dinstein, I., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Feng, X., Fitzgerald, J., Floris, D. L., Freitag, C. M., Gallagher, L., Glahn, D. C., Gori, I., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Kong, X., Lazaro, L., Lerch, J. P., Luna, B., Martinho, M. M., McGrath, J., Medland, S. E., Muratori, F., Murphy, C. M., Murphy, D. G. M., O'Hearn, K., Oranje, B., Parellada, M., Puig, O., Retico, A., Rosa, P., Rubia, K., Shook, D., Taylor, M., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2019). Altered structural brain asymmetry in autism spectrum disorder in a study of 54 datasets. Nature Communications, 10: 4958. doi:10.1038/s41467-019-13005-8.
  • Postema, M., Carrion Castillo, A., Fisher, S. E., Vingerhoets, G., & Francks, C. (2020). The genetics of situs inversus without primary ciliary dyskinesia. Scientific Reports, 10: 3677. doi:10.1038/s41598-020-60589-z.

    Abstract

    Situs inversus (SI), a left-right mirror reversal of the visceral organs, can occur with recessive Primary Ciliary Dyskinesia (PCD). However, most people with SI do not have PCD, and the etiology of their condition remains poorly studied. We sequenced the genomes of 15 people with SI, of which six had PCD, as well as 15 controls. Subjects with non-PCD SI in this sample had an elevated rate of left-handedness (five out of nine), which suggested possible developmental mechanisms linking brain and body laterality. The six SI subjects with PCD all had likely recessive mutations in genes already known to cause PCD. Two non-PCD SI cases also had recessive mutations in known PCD genes, suggesting reduced penetrance for PCD in some SI cases. One non-PCD SI case had recessive mutations in PKD1L1, and another in CFAP52 (also known as WDR16). Both of these genes have previously been linked to SI without PCD. However, five of the nine non-PCD SI cases, including three of the left-handers in this dataset, had no obvious monogenic basis for their condition. Environmental influences, or possible random effects in early development, must be considered.

    Additional information

    Supplementary information
  • St Pourcain, B., Whitehouse, A. J. O., Ang, W. Q., Warrington, N. M., Glessner, J. T., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., Hakonarson, H., Pennell, C. E., & Smith, G. (2013). Common variation contributes to the genetic architecture of social communication traits. Molecular Autism, 4: 34. doi:10.1186/2040-2392-4-34.

    Abstract

    Background: Social communication difficulties represent an autistic trait that is highly heritable and persistent during the course of development. However, little is known about the underlying genetic architecture of this phenotype. Methods: We performed a genome-wide association study on parent-reported social communication problems using items of the Children’s Communication Checklist (age 10 to 11 years) studying single and/or joint marker effects. Analyses were conducted in a large UK population-based birth cohort (Avon Longitudinal Study of Parents and their Children, ALSPAC, N = 5,584) and followed up within a sample of children with comparable measures from Western Australia (RAINE, N = 1,364). Results: Two of our seven independent top signals (P-discovery < 1.0E-05) were replicated (0.009 < P-replication ≤ 0.02) within RAINE and suggested evidence for association at 6p22.1 (rs9257616, meta-P = 2.5E-07) and 14q22.1 (rs2352908, meta-P = 1.1E-06). The signal at 6p22.1 was identified within the olfactory receptor gene cluster within the broader major histocompatibility complex (MHC) region. The strongest candidate locus within this genomic area was TRIM27. This gene encodes a ubiquitin E3 ligase, which is an interaction partner of methyl-CpG-binding domain (MBD) proteins, such as MBD3 and MBD4, and rare protein-coding mutations within MBD3 and MBD4 have been linked to autism. The signal at 14q22.1 was found within a gene-poor region. Single-variant findings were complemented by estimations of the narrow-sense heritability in ALSPAC, suggesting that approximately a fifth of the phenotypic variance in social communication traits is accounted for by joint additive effects of genotyped single nucleotide polymorphisms throughout the genome (h2(SE) = 0.18(0.066), P = 0.0027). Conclusion: Overall, our study provides both joint and single-SNP-based evidence for the contribution of common polymorphisms to variation in social communication phenotypes.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Reply to Ravignani and Kotz: Physical impulses from upper-limb movements impact the respiratory–vocal system. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23225-23226. doi:10.1073/pnas.2015452117.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. Proceedings of the National Academy of Sciences of the United States of America, 117(21), 11364-11367. doi:10.1073/pnas.2004163117.

    Abstract

    We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear but not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
  • Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.

    Abstract

    Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines.
  • Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.

    Abstract

    Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.
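
    One of the acoustic measures in this line of work, the amplitude envelope of the speech signal, can be approximated very simply: rectify the waveform, then smooth it. The sketch below is our own toy version (a moving-average smoother rather than the filtering typically used in such studies) and is not the authors' analysis pipeline.

```python
def amplitude_envelope(signal, win):
    """Crude amplitude envelope of a sampled waveform: full-wave
    rectification followed by a moving average of length win samples."""
    rect = [abs(s) for s in signal]  # full-wave rectification
    half = win // 2
    env = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half): i + half + 1]
        env.append(sum(seg) / len(seg))  # local mean = smoothed envelope
    return env
```

    On a constant-amplitude alternating signal, the envelope stays flat, as expected; on real speech it traces the slow intensity modulation that these studies relate to physical impulses from the limbs.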

    Additional information

    Link to Preprint on OSF
  • Pouw, W., & Dixon, J. A. (2019). Entrainment and modulation of gesture-speech synchrony under delayed auditory feedback. Cognitive Science, 43(3): e12721. doi:10.1111/cogs.12721.

    Abstract

    Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill’s (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill’s original results, we obtain evidence that (a) gesture–speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture–speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are codependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

    Additional information

    https://osf.io/pcde3/
  • Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.

    Abstract

    We introduce applications of established methods in time-series and network analysis that we jointly apply here for the kinematic study of gesture ensembles. We define a gesture ensemble as the set of gestures produced during discourse by a single person or a group of persons. Here we are interested in how gestures kinematically relate to one another. We use a bivariate time-series analysis called dynamic time warping to assess how similar each gesture is to other gestures in the ensemble in terms of their velocity profiles (as well as studying multivariate cases with gesture velocity and speech amplitude envelope profiles). By relating each gesture event to all other gesture events produced in the ensemble, we obtain a weighted matrix that essentially represents a network of similarity relationships. We can therefore apply network analysis that can gauge, for example, how diverse or coherent certain gestures are with respect to the gesture ensemble. We believe these analyses promise to be of great value for gesture studies, as we can come to understand how low-level gesture features (kinematics of gesture) relate to the higher-order organizational structures present at the level of discourse.
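
    The pipeline the abstract describes (pairwise dynamic time warping over velocity profiles, yielding a weighted matrix read as a similarity network) can be illustrated in a few lines. This is a minimal sketch under our own assumptions, not the authors' implementation: absolute difference as the local DTW cost, a 1/(1 + distance) similarity transform, and mean similarity as a stand-in "coherence" measure.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D velocity profiles,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def similarity_matrix(profiles):
    """Relate every gesture to every other gesture in the ensemble:
    the weighted matrix that can be treated as a similarity network."""
    k = len(profiles)
    sim = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i != j:
                sim[i][j] = 1.0 / (1.0 + dtw_distance(profiles[i], profiles[j]))
    return sim

def coherence(sim, i):
    """Mean similarity of gesture i to the rest of the ensemble, a simple
    network-style gauge of how coherent that gesture is."""
    others = [sim[i][j] for j in range(len(sim)) if j != i]
    return sum(others) / len(others)
```

    With three toy velocity profiles, two identical and one divergent, the divergent gesture comes out with the lowest coherence, which is the kind of ensemble-level contrast the paper is after.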

    Additional information

    Open Data OSF
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.

    Abstract

    The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms. However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the body. In the current preregistered study, movements with high physical impact affected phonation in line with gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
  • Pouw, W., Rop, G., De Koning, B., & Paas, F. (2019). The cognitive basis for the split-attention effect. Journal of Experimental Psychology: General, 148(11), 2058-2075. doi:10.1037/xge0000578.

    Abstract

    The split-attention effect entails that learning from spatially separated, but mutually referring information sources (e.g., text and picture), is less effective than learning from the equivalent spatially integrated sources. According to cognitive load theory, impaired learning is caused by the working memory load imposed by the need to distribute attention between the information sources and mentally integrate them. In this study, we directly tested whether the split-attention effect is caused by spatial separation per se. Spatial distance was varied in basic cognitive tasks involving pictures (Experiment 1) and text–picture combinations (Experiment 2; preregistered study), and in more ecologically valid learning materials (Experiment 3). Experiment 1 showed that having to integrate two pictorial stimuli at greater distances diminished performance on a secondary visual working memory task, but did not lead to slower integration. When participants had to integrate a picture and written text in Experiment 2, a greater distance led to slower integration of the stimuli, but not to diminished performance on the secondary task. Experiment 3 showed that presenting spatially separated (compared with integrated) textual and pictorial information yielded fewer integrative eye movements, but this was not further exacerbated when increasing spatial distance even further. This effect on learning processes did not lead to differences in learning outcomes between conditions. In conclusion, we provide evidence that larger distances between spatially separated information sources influence learning processes, but that spatial separation on its own is not likely to be the only, nor a sufficient, condition for impacting learning outcomes.

  • Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.

    Abstract

    There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
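
    As a rough illustration of the objective quantification the tutorial advocates, the sketch below derives a speed profile from motion-tracked 2-D position samples and locates the kinematic peak (maximum speed). The function names and the simple finite-difference scheme are our own assumptions for illustration, not the methods of the paper.

```python
def speed_profile(positions, fs):
    """Frame-to-frame speed (units per second) from 2-D position samples
    (x, y) recorded at fs Hz."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        # Euclidean displacement per frame, scaled to units per second
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * fs)
    return speeds

def peak_speed_time(positions, fs):
    """Time (s) and value of the kinematic peak (maximum speed), the
    landmark that gesture studies align with prosodic markers in speech."""
    speeds = speed_profile(positions, fs)
    i = max(range(len(speeds)), key=speeds.__getitem__)
    return i / fs, speeds[i]
```

    Against such an automatically extracted peak time, one could then measure the asynchrony to, say, the nearest pitch peak in the speech signal, replacing the "by eye" annotation the tutorial argues against.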
  • Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.

    Abstract

    Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
  • Preisig, B., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12(3), 775-777. doi:10.1016/j.brs.2019.01.007.
  • Preisig, B., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. The Journal of the Acoustical Society of America, 145, EL190-EL196. doi:10.1121/1.5092829.

    Abstract

    The present study investigated whether speech-related spectral information benefits from initially predominant right or left hemisphere processing. Normal hearing individuals categorized speech sounds composed of an ambiguous base (perceptually intermediate between /ga/ and /da/), presented to one ear, and a disambiguating low or high F3 chirp presented to the other ear. Shorter response times were found when the chirp was presented to the left ear than to the right ear (inducing initially right-hemisphere chirp processing), but no between-ear differences in strength of overall integration. The results are in line with the assumptions of a right hemispheric dominance for spectral processing.

    Additional information

    Supplementary material
  • Prystauka, Y., & Lewis, A. G. (2019). The power of neural oscillations to inform sentence comprehension: A linguistic perspective. Language and Linguistics Compass, 13(9): e12347. doi:10.1111/lnc3.12347.

    Abstract

    The field of psycholinguistics is currently experiencing an explosion of interest in the analysis of neural oscillations—rhythmic brain activity synchronized at different temporal and spatial levels. Given that language comprehension relies on a myriad of processes, which are carried out in parallel in distributed brain networks, there is hope that this methodology might bring the field closer to understanding some of the more basic (spatially and temporally distributed, yet at the same time often overlapping) neural computations that support language function. In this review, we discuss existing proposals linking oscillatory dynamics in different frequency bands to basic neural computations and review relevant theories suggesting associations between band-specific oscillations and higher-level cognitive processes. More or less consistent patterns of oscillatory activity related to certain types of linguistic processing can already be derived from the evidence that has accumulated over the past few decades. The centerpiece of the current review is a synthesis of such patterns grouped by linguistic phenomenon. We restrict our review to evidence linking measures of oscillatory power to the comprehension of sentences, as well as linguistically (and/or pragmatically) more complex structures. For each grouping, we provide a brief summary and a table of associated oscillatory signatures that a psycholinguist might expect to find when employing a particular linguistic task. Summarizing across different paradigms, we conclude that a handful of basic neural oscillatory mechanisms are likely recruited in different ways and at different times for carrying out a variety of linguistic computations.
  • Quinn, S., & Kidd, E. (2019). Symbolic play promotes non‐verbal communicative exchange in infant–caregiver dyads. British Journal of Developmental Psychology, 37(1), 33-50. doi:10.1111/bjdp.12251.

    Abstract

    Symbolic play has long been considered a fertile context for communicative development (Bruner, 1983, Child's talk: Learning to use language, Oxford University Press, Oxford; Vygotsky, 1962, Thought and language, MIT Press, Cambridge, MA; Vygotsky, 1978, Mind in society: The development of higher psychological processes. Harvard University Press, Cambridge, MA). In the current study, we examined caregiver–infant interaction during symbolic play and compared it to interaction in a comparable but non‐symbolic context (i.e., ‘functional’ play). Fifty‐four (N = 54) caregivers and their 18‐month‐old infants were observed engaging in 20 min of play (symbolic, functional). Play interactions were coded and compared across play conditions for joint attention (JA) and gesture use. Compared with functional play, symbolic play was characterized by greater frequency and duration of JA and greater gesture use, particularly the use of iconic gestures with an object in hand. The results suggest that symbolic play provides a rich context for the exchange and negotiation of meaning, and thus may contribute to the development of important skills underlying communicative development.
  • Radenkovic, S., Bird, M. J., Emmerzaal, T. L., Wong, S. Y., Felgueira, C., Stiers, K. M., Sabbagh, L., Himmelreich, N., Poschet, G., Windmolders, P., Verheijen, J., Witters, P., Altassan, R., Honzik, T., Eminoglu, T. F., James, P. M., Edmondson, A. C., Hertecant, J., Kozicz, T., Thiel, C., Vermeersch, P., Cassiman, D., Beamer, L., Morava, E., & Ghesquiere, B. (2019). The metabolic map into the pathomechanism and treatment of PGM1-CDG. American Journal of Human Genetics, 104(5), 835-846. doi:10.1016/j.ajhg.2019.03.003.

    Abstract

    Phosphoglucomutase 1 (PGM1) encodes the metabolic enzyme that interconverts glucose-6-P and glucose-1-P. Mutations in PGM1 cause impairment in glycogen metabolism and glycosylation, the latter manifesting as a congenital disorder of glycosylation (CDG). This unique metabolic defect leads to abnormal N-glycan synthesis in the endoplasmic reticulum (ER) and the Golgi apparatus (GA). On the basis of the decreased galactosylation in glycan chains, galactose was administered to individuals with PGM1-CDG and was shown to markedly reverse most disease-related laboratory abnormalities. The disease and treatment mechanisms, however, have remained largely elusive. Here, we confirm the clinical benefit of galactose supplementation in PGM1-CDG-affected individuals and obtain significant insights into the functional and biochemical regulation of glycosylation. We report here that, by using tracer-based metabolomics, we found that galactose treatment of PGM1-CDG fibroblasts metabolically re-wires their sugar metabolism, and as such replenishes the depleted levels of galactose-1-P, as well as the levels of UDP-glucose and UDP-galactose, the nucleotide sugars that are required for ER- and GA-linked glycosylation, respectively. To this end, we further show that the galactose in UDP-galactose is incorporated into mature, de novo glycans. Our results also allude to the potential of monosaccharide therapy for several other CDG.
  • Räsänen, O., Seshadri, S., Karadayi, J., Riebling, E., Bunce, J., Cristia, A., Metze, F., Casillas, M., Rosemberg, C., Bergelson, E., & Soderstrom, M. (2019). Automatic word count estimation from daylong child-centered recordings in various language environments using language-independent syllabification of speech. Speech Communication, 113, 63-80. doi:10.1016/j.specom.2019.08.005.

    Abstract

    Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
  • Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2020). Alignment in multimodal interaction: An integrative framework. Cognitive Science, 44(11): e12911. doi:10.1111/cogs.12911.

    Abstract

    When people are engaged in social interaction, they can repeat aspects of each other’s communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well‐defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behavior: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment.
  • Rasenberg, M., Rommers, J., & Van Bergen, G. (2020). Anticipating predictability: An ERP investigation of expectation-managing discourse markers in dialogue comprehension. Language, Cognition and Neuroscience, 35(1), 1-16. doi:10.1080/23273798.2019.1624789.

    Abstract

    In two ERP experiments, we investigated how the Dutch discourse markers eigenlijk “actually”, signalling expectation disconfirmation, and inderdaad “indeed”, signalling expectation confirmation, affect incremental dialogue comprehension. We investigated their effects on the processing of subsequent (un)predictable words, and on the quality of word representations in memory. Participants read dialogues with (un)predictable endings that followed a discourse marker (eigenlijk in Experiment 1, inderdaad in Experiment 2) or a control adverb. We found no strong evidence that discourse markers modulated online predictability effects elicited by subsequently read words. However, words following eigenlijk elicited an enhanced posterior post-N400 positivity compared with words following an adverb regardless of their predictability, potentially reflecting increased processing costs associated with pragmatically driven discourse updating. No effects of inderdaad were found on online processing, but inderdaad seemed to influence memory for (un)predictable dialogue endings. These findings nuance our understanding of how pragmatic markers affect incremental language comprehension.

    Additional information

    plcp_a_1624789_sm6686.docx
  • Ravignani, A., & Kotz, S. (2020). Breathing, voice and synchronized movement. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23223-23224. doi:10.1073/pnas.2011402117.
  • Ravignani, A., Sonnweber, R.-S., Stobbe, N., & Fitch, W. T. (2013). Action at a distance: Dependency sensitivity in a New World primate. Biology Letters, 9(6): 0130852. doi:10.1098/rsbl.2013.0852.

    Abstract

    Sensitivity to dependencies (correspondences between distant items) in sensory stimuli plays a crucial role in human music and language. Here, we show that squirrel monkeys (Saimiri sciureus) can detect abstract, non-adjacent dependencies in auditory stimuli. Monkeys discriminated between tone sequences containing a dependency and those lacking it, and generalized to previously unheard pitch classes and novel dependency distances. This constitutes the first pattern learning study where artificial stimuli were designed with the species' communication system in mind. These results suggest that the ability to recognize dependencies represents a capability that had already evolved in humans’ last common ancestor with squirrel monkeys, and perhaps before.
  • Ravignani, A. (2019). [Review of the book Animal beauty: On the evolution of biological aesthetics by C. Nüsslein-Volhard]. Animal Behaviour, 155, 171-172. doi:10.1016/j.anbehav.2019.07.005.
  • Ravignani, A. (2019). [Review of the book The origins of musicality ed. by H. Honing]. Perception, 48(1), 102-105. doi:10.1177/0301006618817430.
  • Ravignani, A. (2019). Humans and other musical animals [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Current Biology, 29(8), R271-R273. doi:10.1016/j.cub.2019.03.013.
  • Ravignani, A., & de Reus, K. (2019). Modelling animal interactive rhythms in communication. Evolutionary Bioinformatics, 15, 1-14. doi:10.1177/1176934318823558.

    Abstract

    Time is one crucial dimension conveying information in animal communication. Evolution has shaped animals’ nervous systems to produce signals with temporal properties fitting their socio-ecological niches. Many quantitative models of mechanisms underlying rhythmic behaviour exist, spanning insects, crustaceans, birds, amphibians, and mammals. However, these computational and mathematical models are often presented in isolation. Here, we provide an overview of the main mathematical models employed in the study of animal rhythmic communication among conspecifics. After presenting basic definitions and mathematical formalisms, we discuss each individual model. These computational models are then compared using simulated data to uncover similarities and key differences in the underlying mechanisms found across species. Our review of the empirical literature is admittedly limited. We stress the need of using comparative computer simulations – both before and after animal experiments – to better understand animal timing in interaction. We hope this article will serve as a potential first step towards a common computational framework to describe temporal interactions in animals, including humans.

    Additional information

    Supplemental material files
  • Ravignani, A., Verga, L., & Greenfield, M. D. (2019). Interactive rhythms across species: The evolutionary biology of animal chorusing and turn-taking. Annals of the New York Academy of Sciences, 1453(1), 12-21. doi:10.1111/nyas.14230.

    Abstract

    The study of human language is progressively moving toward comparative and interactive frameworks, extending the concept of turn‐taking to animal communication. While such an endeavor will help us understand the interactive origins of language, any theoretical account for cross‐species turn‐taking should consider three key points. First, animal turn‐taking must incorporate biological studies on animal chorusing, namely how different species coordinate their signals over time. Second, while concepts employed in human communication and turn‐taking, such as intentionality, are still debated in animal behavior, lower level mechanisms with clear neurobiological bases can explain much of animal interactive behavior. Third, social behavior, interactivity, and cooperation can be orthogonal, and the alternation of animal signals need not be cooperative. Considering turn‐taking a subset of chorusing in the rhythmic dimension may avoid overinterpretation and enhance the comparability of future empirical work.
  • Ravignani, A. (2019). Everything you always wanted to know about sexual selection in 129 pages [Review of the book Sexual selection: A very short introduction by M. Zuk and L. W. Simmons]. Journal of Mammalogy, 100(6), 2004-2005. doi:10.1093/jmammal/gyz168.
  • Ravignani, A., & Gamba, M. (2019). Evolving musicality [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Trends in Ecology and Evolution, 34(7), 583-584. doi:10.1016/j.tree.2019.04.016.
  • Ravignani, A., Kello, C. T., de Reus, K., Kotz, S. A., Dalla Bella, S., Mendez-Arostegui, M., Rapado-Tamarit, B., Rubio-Garcia, A., & de Boer, B. (2019). Ontogeny of vocal rhythms in harbor seal pups: An exploratory study. Current Zoology, 65(1), 107-120. doi:10.1093/cz/zoy055.

    Abstract

    Puppyhood is a very active social and vocal period in a harbor seal's (Phoca vitulina) life. An important feature of vocalizations is their temporal and rhythmic structure, and understanding vocal timing and rhythms in harbor seals is critical to a cross-species hypothesis in evolutionary neuroscience that links vocal learning, rhythm perception, and synchronization. This study utilized analytical techniques that may best capture rhythmic structure in pup vocalizations with the goal of examining whether (1) harbor seal pups show rhythmic structure in their calls and (2) rhythms evolve over time. Calls of 3 wild-born seal pups were recorded daily over the course of 1-3 weeks; 3 temporal features were analyzed using 3 complementary techniques. We identified temporal and rhythmic structure in pup calls across different time windows. The calls of harbor seal pups exhibit some degree of temporal and rhythmic organization, which evolves over puppyhood and resembles that of other species' interactive communication. We suggest next steps for investigating call structure in harbor seal pups and propose comparative hypotheses to test in other pinniped species.
  • Ravignani, A., Filippi, P., & Fitch, W. T. (2019). Perceptual tuning influences rule generalization: Testing humans with monkey-tailored stimuli. i-Perception, 10(2), 1-5. doi:10.1177/2041669519846135.

    Abstract

    Comparative research investigating how nonhuman animals generalize patterns of auditory stimuli often uses sequences of human speech syllables and reports limited generalization abilities in animals. Here, we reverse this logic, testing humans with stimulus sequences tailored to squirrel monkeys. When test stimuli are familiar (human voices), humans succeed in two types of generalization. However, when the same structural rule is instantiated over unfamiliar but perceivable sounds within squirrel monkeys’ optimal hearing frequency range, human participants master only one type of generalization. These findings have methodological implications for the design of comparative experiments, which should be fair towards all tested species’ proclivities and limitations.

    Additional information

    Supplemental material files
  • Ravignani, A., Olivera, M. V., Gingras, B., Hofer, R., Hernandez, R. C., Sonnweber, R. S., & Fitch, T. W. (2013). Primate drum kit: A system for studying acoustic pattern production by non-human primates using acceleration and strain sensors. Sensors, 13(8), 9790-9820. doi:10.3390/s130809790.

    Abstract

    The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step to enable the testing of a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available, with the use of sensors in non-human animals being almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of data sensed from these movements, (iii) the possibility to alter the acoustic feedback properties of the object using remote control. We present two prototypes we developed for application with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, costs, intended use and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data is sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy. Acceleration data is sent via Bluetooth to a computer running Max/MSP. We successfully pilot tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments.
  • Ravignani, A. (2019). Singing seals imitate human speech. Journal of Experimental Biology, 222: jeb208447. doi:10.1242/jeb.208447.