  • Alferink, I., & Gullberg, M. (2014). French-Dutch bilinguals do not maintain obligatory semantic distinctions: Evidence from placement verbs. Bilingualism: Language and Cognition, 17, 22-37. doi:10.1017/S136672891300028X.

    Abstract

    It is often said that bilinguals are not the sum of two monolinguals but that bilingual systems represent a third pattern. This study explores the exact nature of this pattern. We ask whether there is evidence of a merged system when one language makes an obligatory distinction that the other one does not, namely in the case of placement verbs in French and Dutch, and whether such a merged system is realised as a more general or a more specific system. The results show that in elicited descriptions Belgian French-Dutch bilinguals drop one of the categories in one of the languages, resulting in a more general semantic system in comparison with the non-contact variety. They do not uphold the obligatory distinction in the verb nor elsewhere despite its communicative relevance. This raises important questions regarding how widespread these differences are and what drives these patterns.
  • Bergmann, C., Ten Bosch, L., & Boves, L. (2014). A computational model of the headturn preference procedure: Design, challenges, and insights. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes (pp. 125-136). World Scientific. doi:10.1142/9789814458849_0010.

    Abstract

    The Headturn Preference Procedure (HPP) is a frequently used method (e.g., Jusczyk & Aslin, and subsequent studies) to investigate linguistic abilities in infants. In this paradigm infants are usually first familiarised with words and then tested for a listening preference for passages containing those words in comparison to unrelated passages. Listening preference is defined as the time an infant spends attending to those passages with his or her head turned towards a flashing light and the speech stimuli. The knowledge and abilities inferred from the results of HPP studies have been used to reason about and formally model early linguistic skills and language acquisition. However, the actual cause of infants' behaviour in HPP experiments has been subject to numerous assumptions as there are no means to directly tap into cognitive processes. To make these assumptions explicit, and more crucially, to understand how infants' behaviour emerges if only general learning mechanisms are assumed, we introduce a computational model of the HPP. Simulations with the computational HPP model show that the difference in infant behaviour between familiarised and unfamiliar words in passages can be explained by a general learning mechanism and that many assumptions underlying the HPP are not necessarily warranted. We discuss the implications for conventional interpretations of the outcomes of HPP experiments.
  • Bergmann, C. (2014). Computational models of early language acquisition and the role of different voices. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants’ immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based “looking game” was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devalued their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams’s need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • Buckler, H. (2014). The acquisition of morphophonological alternations across languages. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Cai, D., Fonteijn, H. M., Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Hoogman, M., Arias Vásquez, A., Yang, Y., Buitelaar, J., Fernández, G., Brunner, H. G., Van Bokhoven, H., Franke, B., Hegenscheid, K., Homuth, G., Fisher, S. E., Grabe, H. J., Francks, C., & Hagoort, P. (2014). A genome wide search for quantitative trait loci affecting the cortical surface area and thickness of Heschl's gyrus. Genes, Brain and Behavior, 13, 675-685. doi:10.1111/gbb.12157.

    Abstract

    Heschl's gyrus (HG) is a core region of the auditory cortex whose morphology is highly variable across individuals. This variability has been linked to sound perception ability in both speech and music domains. Previous studies show that variations in morphological features of HG, such as cortical surface area and thickness, are heritable. To identify genetic variants that affect HG morphology, we conducted a genome-wide association scan (GWAS) meta-analysis in 3054 healthy individuals using HG surface area and thickness as quantitative traits. None of the single nucleotide polymorphisms (SNPs) showed association P values that would survive correction for multiple testing over the genome. The most significant association was found between right HG area and SNP rs72932726 close to gene DCBLD2 (3q12.1; P=2.77x10(-7)). This SNP was also associated with other regions involved in speech processing. The SNP rs333332 within gene KALRN (3q21.2; P=2.27x10(-6)) and rs143000161 near gene COBLL1 (2q24.3; P=2.40x10(-6)) were associated with the area and thickness of left HG, respectively. Both genes are involved in the development of the nervous system. The SNP rs7062395 close to the X-linked deafness gene POU3F4 was associated with right HG thickness (Xq21.1; P=2.38x10(-6)). This is the first molecular genetic analysis of variability in HG morphology.
  • Choi, J. (2014). Rediscovering a forgotten language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Deriziotis, P., O'Roak, B. J., Graham, S. A., Estruch, S. B., Dimitropoulou, D., Bernier, R. A., Gerdts, J., Shendure, J., Eichler, E. E., & Fisher, S. E. (2014). De novo TBR1 mutations in sporadic autism disrupt protein functions. Nature Communications, 5: 4954. doi:10.1038/ncomms5954.

    Abstract

    Next-generation sequencing recently revealed that recurrent disruptive mutations in a few genes may account for 1% of sporadic autism cases. Coupling these novel genetic data to empirical assays of protein function can illuminate crucial molecular networks. Here we demonstrate the power of the approach, performing the first functional analyses of TBR1 variants identified in sporadic autism. De novo truncating and missense mutations disrupt multiple aspects of TBR1 function, including subcellular localization, interactions with co-regulators and transcriptional repression. Missense mutations inherited from unaffected parents did not disturb function in our assays. We show that TBR1 homodimerizes, that it interacts with FOXP2, a transcription factor implicated in speech/language disorders, and that this interaction is disrupted by pathogenic mutations affecting either protein. These findings support the hypothesis that de novo mutations in sporadic autism have severe functional consequences. Moreover, they uncover neurogenetic mechanisms that bridge different neurodevelopmental disorders involving language deficits.
  • Deriziotis, P., Graham, S. A., Estruch, S. B., & Fisher, S. E. (2014). Investigating protein-protein interactions in live cells using Bioluminescence Resonance Energy Transfer. Journal of Visualized Experiments, 87: e51438. doi:10.3791/51438.

    Abstract

    Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a ‘donor’ luciferase enzyme to an ‘acceptor’ fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.

  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language, 38, 5-43. doi:10.1075/sl.38.1.01din.

    Abstract

    In conversation, people have to deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Dolscheid, S., Willems, R. M., Hagoort, P., & Casasanto, D. (2014). The relation of space and musical pitch in the brain. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 421-426). Austin, TX: Cognitive Science Society.

    Abstract

    Numerous experiments show that space and musical pitch are closely linked in people's minds. However, the exact nature of space-pitch associations and their neuronal underpinnings are not well understood. In an fMRI experiment we investigated different types of spatial representations that may underlie musical pitch. Participants judged stimuli that varied in spatial height in both the visual and tactile modalities, as well as auditory stimuli that varied in pitch height. In order to distinguish between unimodal and multimodal spatial bases of musical pitch, we examined whether pitch activations were present in modality-specific (visual or tactile) versus multimodal (visual and tactile) regions active during spatial height processing. Judgments of musical pitch were found to activate unimodal visual areas, suggesting that space-pitch associations may involve modality-specific spatial representations, supporting a key assumption of embodied theories of metaphorical mental representation.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2014). Phoneme category retuning in a non-native language. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 553-557).

    Abstract

    Previous studies have demonstrated that native listeners modify their interpretation of a speech sound when a talker produces an ambiguous sound in order to quickly tune into a speaker, but there is hardly any evidence that non-native listeners employ a similar mechanism when encountering ambiguous pronunciations. So far, one study demonstrated this lexically-guided perceptual learning effect for non-natives, using phoneme categories similar in the native language of the listeners and the non-native language of the stimulus materials. The present study investigates whether phoneme category retuning is possible in a non-native language for a contrast, /l/-/r/, which is phonetically differently embedded in the native (Dutch) and non-native (English) languages involved. Listening experiments indeed showed a lexically-guided perceptual learning effect. Assuming that Dutch listeners have different phoneme categories for the native Dutch and non-native English /r/, as marked differences between the languages exist for /r/, these results, for the first time, seem to suggest that listeners are not only able to retune their native phoneme categories but also their non-native phoneme categories to include ambiguous pronunciations.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10^-7 for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 2, 157-158. doi:10.1038/ejhg.2013.153.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10^-8). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Hammond, J. (2014). Switch-reference antecedence and subordination in Whitesands (Oceanic). In R. van Gijn, J. Hammond, D. Matić, S. van Putten, & A. V. Galucio (Eds.), Information structure and reference tracking in complex sentences (pp. 263-290). Amsterdam: Benjamins.

    Abstract

    Whitesands is an Oceanic language of the southern Vanuatu subgroup. Like the related languages of southern Vanuatu, Whitesands has developed a clause-linkage system which monitors referent continuity on new clauses – typically contrasting with the previous clause. In this chapter I address how the construction interacts with topic continuity in discourse. I outline the morphosyntactic form of this anaphoric co-reference device. From a functionalist perspective, I show how the system is used in natural discourse and discuss its restrictions with respect to relative and complement clauses. I conclude with a discussion on its interactions with theoretical notions of information structure – in particular the nature of presupposed versus asserted clauses, information back- and foregrounding and how these affect the use of the switch-reference system.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2351-2356). Austin, TX: Cognitive Science Society.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioural research due to its seemingly limitless possibilities. This new method has not been used frequently in the field of psycholinguistics, however, possibly due to the assumption that human-computer interaction does not accurately reflect human-human interaction. In the current study we compare participants’ language behaviour in a syntactic priming task with human versus avatar partners. Our study shows comparable priming effects between human and avatar partners (Human: 12.3%; Avatar: 12.6% for passive sentences), suggesting that VR is a valid platform for conducting language research and studying dialogue interactions.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Kunert, R., & Scheepers, C. (2014). Speed and accuracy of dyslexic versus typical word recognition: An eye-movement investigation. Frontiers in Psychology, 5: 1129. doi:10.3389/fpsyg.2014.01129.

    Abstract

    Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition.

  • Kupisch, T., Lein, T., Barton, D., Schröder, D. J., Stangen, I., & Stoehr, A. (2014). Acquisition outcomes across domains in adult simultaneous bilinguals with French as weaker and stronger language. Journal of French Language Studies, 24(3), 347-376. doi:10.1017/S0959269513000197.

    Abstract

    This study investigates the adult grammars of French simultaneous bilingual speakers (2L1s) whose other language is German. Apart from providing an example of French as heritage language in Europe, the goals of this paper are (i) to compare the acquisition of French in a minority and majority language context, (ii) to identify the relative vulnerability of individual domains, and (iii) to investigate whether 2L1s are vulnerable to language attrition when moving to their heritage country during adulthood. We include two groups of German-French 2L1s: One group grew up predominantly in France, but moved to Germany during adulthood; the other group grew up predominantly in Germany and stayed there. Performance is compared in different domains, including adjective placement, gender marking, articles, prepositions, foreign accent and voice onset time. Results indicate that differences between the two groups are minimal in morpho-syntax, but more prominent in pronunciation.
  • Lahey, M., & Ernestus, M. (2014). Pronunciation variation in infant-directed speech: Phonetic reduction of two highly frequent words. Language Learning and Development, 10, 308-327. doi:10.1080/15475441.2013.860813.

    Abstract

    In spontaneous conversations between adults, words are often pronounced with fewer segments or syllables than their citation forms. The question arises whether infant-directed speech also contains phonetic reduction. If so, infants would be presented with speech input that enables them to acquire reduced variants from an early age. This study compared speech directed at 11- and 12-month-old infants with adult-directed conversational speech and adult-directed read speech. In an acoustic study, 216 tokens of the Dutch words allemaal and helemaal from speech corpora were analyzed for duration, number of syllables, and vowel quality. In a perception study, adult participants rated these same materials for reduction and provided phonetic transcriptions. The results show that these two words are frequently reduced in infant-directed speech, and that their degree of reduction is comparable with conversational adult-directed speech. These findings suggest that lexical representations for reduced pronunciation variants can be acquired early in linguistic development.

  • Lai, V. T., Garrido Rodriguez, G., & Narasimhan, B. (2014). Thinking-for-speaking in early and late bilinguals. Bilingualism: Language and Cognition, 17, 139-152. doi:10.1017/S1366728913000151.

    Abstract

    When speakers describe motion events using different languages, they subsequently classify those events in language-specific ways (Gennari, Sloman, Malt & Fitch, 2002). Here we ask if bilingual speakers flexibly shift their event classification preferences based on the language in which they verbally encode those events. English–Spanish bilinguals and monolingual controls described motion events in either Spanish or English. Subsequently they judged the similarity of the motion events in a triad task. Bilinguals tested in Spanish and Spanish monolinguals were more likely to make similarity judgments based on the path of motion versus bilinguals tested in English and English monolinguals. The effect is modulated in bilinguals by the age of acquisition of the second language. Late bilinguals based their judgments on path more often when Spanish was used to describe the motion events versus English. Early bilinguals had a path preference independent of the language in use. These findings support “thinking-for-speaking” (Slobin, 1996) in late bilinguals.
  • Lartseva, A., Dijkstra, T., Kan, C. C., & Buitelaar, J. K. (2014). Processing of emotion words by patients with Autism Spectrum Disorders: Evidence from reaction times and EEG. Journal of Autism and Developmental Disorders, 44, 2882-2894. doi:10.1007/s10803-014-2149-z.

    Abstract

    This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words compared to neutral, suggesting intact early processing of emotion in ASD. In the ERPs, the control group showed a typical late positive component (LPC) at 400-600 ms for emotion words compared to neutral, while the ASD group showed no LPC. The between-group difference in LPC amplitude was significant, suggesting that emotion words were processed differently by individuals with ASD, although their behavioral performance was similar to that of typical individuals.
  • Lewis, A., Freeman-Mills, L., de la Calle-Mustienes, E., Giráldez-Pérez, R. M., Davis, H., Jaeger, E., Becker, M., Hubner, N. C., Nguyen, L. N., Zeron-Medina, J., Bond, G., Stunnenberg, H. G., Carvajal, J. J., Gomez-Skarmeta, J. L., Leedham, S., & Tomlinson, I. (2014). A polymorphic enhancer near GREM1 influences bowel cancer risk through differential CDX2 and TCF7L2 binding. Cell Reports, 8(4), 983-990. doi:10.1016/j.celrep.2014.07.020.

    Abstract

    A rare germline duplication upstream of the bone morphogenetic protein antagonist GREM1 causes a Mendelian-dominant predisposition to colorectal cancer (CRC). The underlying disease mechanism is strong, ectopic GREM1 overexpression in the intestinal epithelium. Here, we confirm that a common GREM1 polymorphism, rs16969681, is also associated with CRC susceptibility, conferring ∼20% differential risk in the general population. We hypothesized the underlying cause to be moderate differences in GREM1 expression. We showed that rs16969681 lies in a region of active chromatin with allele- and tissue-specific enhancer activity. The CRC high-risk allele was associated with stronger gene expression, and higher Grem1 mRNA levels increased the intestinal tumor burden in ApcMin mice. The intestine-specific transcription factor CDX2 and Wnt effector TCF7L2 bound near rs16969681, with significantly higher affinity for the risk allele, and CDX2 overexpression in CDX2/GREM1-negative cells caused re-expression of GREM1. rs16969681 influences CRC risk through effects on Wnt-driven GREM1 expression in colorectal tumors.
  • Magi, A., Tattini, L., Palombo, F., Benelli, M., Gialluisi, A., Giusti, B., Abbate, R., Seri, M., Gensini, G. F., Romeo, G., & Pippucci, T. (2014). H3M2: Detection of runs of homozygosity from whole-exome sequencing data. Bioinformatics, 2852-2859. doi:10.1093/bioinformatics/btu401.

    Abstract

    Motivation: Runs of homozygosity (ROH) are sizable chromosomal stretches of homozygous genotypes, ranging in length from tens of kilobases to megabases. ROHs can be relevant for population and medical genetics, playing a role in predisposition to both rare and common disorders. ROHs are commonly detected by single nucleotide polymorphism (SNP) microarrays, but attempts have been made to use whole-exome sequencing (WES) data. Currently available methods developed for the analysis of uniformly spaced SNP-array maps do not fit easily to the analysis of the sparse and non-uniform distribution of the WES target design. Results: To meet the need of an approach specifically tailored to WES data, we developed H3M2, an original algorithm based on a heterogeneous hidden Markov model that incorporates inter-marker distances to detect ROH from WES data. We evaluated the performance of H3M2 to correctly identify ROHs on synthetic chromosomes and examined its accuracy in detecting ROHs of different length (short, medium and long) from real 1000 Genomes Project data. H3M2 turned out to be more accurate than GERMLINE and PLINK, two state-of-the-art algorithms, especially in the detection of short and medium ROHs.
  • Mazuka, R., Hasegawa, M., & Tsuji, S. (2014). Development of non-native vowel discrimination: Improvement without exposure. Developmental Psychobiology, 56(2), 192-209. doi:10.1002/dev.21193.

    Abstract

    The present study tested Japanese 4.5- and 10-month old infants' ability to discriminate three German vowel pairs, none of which are contrastive in Japanese, using a visual habituation–dishabituation paradigm. Japanese adults' discrimination of the same pairs was also tested. The results revealed that Japanese 4.5-month old infants discriminated the German /bu:k/-/by:k/ contrast, but they showed no evidence of discriminating the /bi:k/-/be:k/ or /bu:k/-/bo:k/ contrasts. Japanese 10-month old infants, on the other hand, discriminated the German /bi:k/-/be:k/ contrast, while they showed no evidence of discriminating the /bu:k/-/by:k/ or /bu:k/-/bo:k/ contrasts. Japanese adults, in contrast, were highly accurate in their discrimination of all of the pairs. The results indicate that discrimination of non-native contrasts is not always easy even for young infants, and that their ability to discriminate non-native contrasts can improve with age even when they receive no exposure to a language in which the given contrast is phonemic.
  • Mulder, K., Dijkstra, T., Schreuder, R., & Baayen, R. H. (2014). Effects of primary and secondary morphological family size in monolingual and bilingual word processing. Journal of Memory and Language, 72, 59-84. doi:10.1016/j.jml.2013.12.004.

    Abstract

    This study investigated primary and secondary morphological family size effects in monolingual and bilingual processing, combining experimentation with computational modeling. Family size effects were investigated in an English lexical decision task for Dutch-English bilinguals and English monolinguals using the same materials. To account for the possibility that family size effects may only show up in words that resemble words in the native language of the bilinguals, the materials included, in addition to purely English items, Dutch-English cognates (identical and non-identical in form). As expected, the monolingual data revealed facilitatory effects of English primary family size. Moreover, while the monolingual data did not show a main effect of cognate status, only form-identical cognates revealed an inhibitory effect of English secondary family size. The bilingual data showed stronger facilitation for identical cognates, but as for monolinguals, this effect was attenuated for words with a large secondary family size. In all, the Dutch-English primary and secondary family size effects in bilinguals were strikingly similar to those of monolinguals. Computational simulations suggest that the primary and secondary family size effects can be understood in terms of discriminative learning of the English lexicon.

  • Muysken, P., Hammarström, H., Birchall, J., Danielsen, S., Eriksen, L., Galucio, A. V., Van Gijn, R., Van de Kerke, S., Kolipakam, V., Krasnoukhova, O., Müller, N., & O'Connor, L. (2014). The languages of South America: Deep families, areal relationships, and language contact. In P. Muysken, & L. O'Connor (Eds.), Language contact in South America (pp. 299-323). Cambridge: Cambridge University Press.
  • Neger, T. M., Rietveld, T., & Janse, E. (2014). Relationship between perceptual learning in speech and statistical learning in younger and older adults. Frontiers in Human Neuroscience, 8: 628. doi:10.3389/fnhum.2014.00628.

    Abstract

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
  • O'Connor, L., & Kolipakam, V. (2014). Human migrations, dispersals, and contacts in South America. In L. O'Connor, & P. Muysken (Eds.), The native languages of South America: Origins, development, typology (pp. 29-55). Cambridge: Cambridge University Press.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.

    Abstract

    Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning of sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input.
  • Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.

    Abstract

    We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
  • Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.

    Abstract

    Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while being asleep [1]. This might even be underestimated, because often people do not notice that they are sleep-talking, unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.

    Abstract

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
  • Piai, V. (2014). Choosing our words: Lexical competition and the involvement of attention in spoken word production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Piai, V., Roelofs, A., & Schriefers, H. (2014). Locus of semantic interference in picture naming: Evidence from dual-task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 147-165. doi:10.1037/a0033745.

    Abstract

    Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in perceptual or conceptual processing, prior to lexical response-selection. In these studies, participants manually responded to a tone and named a picture while ignoring a written distractor word. The stimulus onset asynchrony (SOA) between tone and picture–word stimulus was manipulated. Semantic interference in naming latencies was present at long tone pre-exposure SOAs, but reduced or absent at short SOAs. Under the prevailing structural or strategic response-selection bottleneck and central capacity sharing models of dual-task performance, the underadditivity of the effects of SOA and stimulus type suggests that semantic interference emerges before lexical response-selection. However, in more recent studies, additive effects of SOA and stimulus type were obtained. Here, we examined the discrepancy in results between these studies in 6 experiments in which we systematically manipulated various dimensions on which these earlier studies differed, including tasks, materials, stimulus types, and SOAs. In all our experiments, additive effects of SOA and stimulus type on naming latencies were obtained. These results strongly suggest that the semantic interference effect arises after perceptual and conceptual processing, during lexical response-selection or later. We discuss several theoretical alternatives with respect to their potential to account for the discrepancy between the present results and other studies showing underadditivity.
  • Piai, V., Roelofs, A., & Maris, E. (2014). Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. Neuropsychologia, 53, 146-156. doi:10.1016/j.neuropsychologia.2013.11.014.

    Abstract

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production.
  • Pippucci, T., Magi, A., Gialluisi, A., & Romeo, G. (2014). Detection of runs of homozygosity from whole exome sequencing data: State of the art and perspectives for clinical, population and epidemiological studies. Human Heredity, 77, 63-72. doi:10.1159/000362412.

    Abstract

    Runs of homozygosity (ROH) are sizeable stretches of homozygous genotypes at consecutive polymorphic DNA marker positions, traditionally captured by means of genome-wide single nucleotide polymorphism (SNP) genotyping. With the advent of next-generation sequencing (NGS) technologies, a number of methods initially devised for the analysis of SNP array data (those based on sliding-window algorithms such as PLINK or GERMLINE and graphical tools like HomozygosityMapper) or specifically conceived for NGS data have been adopted for the detection of ROH from whole exome sequencing (WES) data. In the latter group, algorithms for both graphical representation (AgileVariantMapper, HomSI) and computational detection (H3M2) of WES-derived ROH have been proposed. Here we examine these different approaches and discuss available strategies to implement ROH detection in WES analysis. Among sliding-window algorithms, PLINK appears to be well-suited for the detection of ROH, especially of the long ones. As a method specifically tailored for WES data, H3M2 outperforms existing algorithms especially on short and medium ROH. We conclude that, notwithstanding the irregular distribution of exons, WES data can be used with some approximation for unbiased genome-wide analysis of ROH features, with promising applications to homozygosity mapping of disease genes, comparative analysis of populations and epidemiological studies based on consanguinity.
  • Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.

    Abstract

    This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
  • Poellmann, K., Mitterer, H., & McQueen, J. M. (2014). Use what you can: Storage, abstraction processes and perceptual adjustments help listeners recognize reduced forms. Frontiers in Psychology, 5: 437. doi:10.3389/fpsyg.2014.00437.

    Abstract

    Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., "minderij" instead of "binderij", 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., "p'raat" instead of "paraat", 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
  • Presciuttini, S., Gialluisi, A., Barbuti, S., Curcio, M., Scatena, F., Carli, G., & Santarcangelo, E. L. (2014). Hypnotizability and Catechol-O-Methyltransferase (COMT) polymorphysms in Italians. Frontiers in Human Neuroscience, 7: 929. doi:10.3389/fnhum.2013.00929.

    Abstract

    Higher brain dopamine content depending on lower activity of Catechol-O-Methyltransferase (COMT) in subjects with high hypnotizability scores (highs) has been considered responsible for their attentional characteristics. However, the results of previous genetic studies on the association between hypnotizability and the COMT single nucleotide polymorphism (SNP) rs4680 (Val158Met) were inconsistent. Here, we used a selective genotyping approach to re-evaluate the association between hypnotizability and COMT in the context of a two-SNP haplotype analysis, considering not only the Val158Met polymorphism, but also the closely located rs4818 SNP. An Italian sample of 53 highs, 49 low hypnotizable subjects (lows), and 57 controls was genotyped for a segment of 805 bp of the COMT gene, including Val158Met and the closely located rs4818 SNP. Our selective genotyping approach had 97.1% power to detect the previously reported strongest association at the significance level of 5%. We found no evidence of association at the SNP, haplotype, and diplotype levels. Thus, our results challenge the dopamine-based theory of hypnosis and indirectly support recent neuropsychological and neurophysiological findings reporting the lack of any association between hypnotizability and focused attention abilities.
  • Reifegerste, J. (2014). Morphological processing in younger and older people: Evidence for flexible dual-route access. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rodenas-Cuadrado, P., Ho, J., & Vernes, S. C. (2014). Shining a light on CNTNAP2: Complex functions to complex disorders. European Journal of Human Genetics, 22(2), 171-178. doi:10.1038/ejhg.2013.100.

    Abstract

    The genetic basis of complex neurological disorders involving language is poorly understood, partly due to the multiple additive genetic risk factors that are thought to be responsible. Furthermore, these conditions are often syndromic in that they have a range of endophenotypes that may be associated with the disorder and that may be present in different combinations in patients. However, the emergence of individual genes implicated across multiple disorders has suggested that they might share similar underlying genetic mechanisms. The CNTNAP2 gene is an excellent example of this, as it has recently been implicated in a broad range of phenotypes including autism spectrum disorder (ASD), schizophrenia, intellectual disability, dyslexia and language impairment. This review considers the evidence implicating CNTNAP2 in these conditions, the genetic risk factors and mutations that have been identified in patient and population studies and how these relate to patient phenotypes. The role of CNTNAP2 is examined in the context of larger neurogenetic networks during development and disorder, given what is known regarding the regulation and function of this gene. Understanding the role of CNTNAP2 in diverse neurological disorders will further our understanding of how combinations of individual genetic risk factors can contribute to complex conditions.
  • Rojas-Berscia, L. M. (2014). Towards an ontological theory of language: Radical minimalism, memetic linguistics and linguistic engineering, prolegomena. Ianua: Revista Philologica Romanica, 14(2), 69-81.

    Abstract

    In contrast to what has happened in other sciences, the question of what constitutes the object of study of linguistics as an autonomous discipline has not yet been resolved. Ranging from external explanations of language as a system (Saussure 1916), to the existence of an innate mental language capacity or UG (Chomsky 1965, 1981, 1995), to the cognitive complexity of the mental language capacity and the acquisition of languages in use (Langacker 1987, 1991, 2008; Croft & Cruse 2004; Evans & Levinson 2009), most, if not all, theoretical approaches have provided explanations that have somehow isolated our discipline from developments in other major sciences, such as physics and evolutionary biology. In the present article I will present some of the basic issues in the current debate in the discipline, in order to identify some problems with modern assumptions about language. Furthermore, a new proposal on how to approach linguistic phenomena will be given, concerning what I call «the main three» basic problems our discipline will have to face. Finally, some preliminary ideas on a new paradigm of Linguistics which tries to answer these three basic problems will be presented, based mainly on the recently developed formal theory called Radical Minimalism (Krivochen 2011a, 2011b) and on what I dub Memetic Linguistics and Linguistic Engineering.
  • Rossi, G. (2014). When do people not use language to make requests? In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 301-332). Amsterdam: John Benjamins.

    Abstract

    In everyday joint activities (e.g. playing cards, preparing potatoes, collecting empty plates), participants often request others to pass, move or otherwise deploy objects. In order to get these objects to or from the requestee, requesters need to manipulate them, for example by holding them out, reaching for them, or placing them somewhere. As they perform these manual actions, requesters may or may not accompany them with language (e.g. Take this potato and cut it or Pass me your plate). This study shows that adding or omitting language in the design of a request is influenced in the first place by a criterion of recognition. When the requested action is projectable from the advancement of an activity, presenting a relevant object to the requestee is enough for them to understand what to do; when, on the other hand, the requested action is occasioned by a contingent development of the activity, requesters use language to specify what the requestee should do. This criterion operates alongside a perceptual criterion, to do with the affordances of the visual and auditory modality. When the requested action is projectable but the requestee is not visually attending to the requester’s manual behaviour, the requester can use just enough language to attract the requestee’s attention and secure immediate recipiency. This study contributes to a line of research concerned with the organisation of verbal and nonverbal resources for requesting. Focussing on situations in which language is not – or only minimally – used, it demonstrates the role played by visible bodily behaviour and by the structure of everyday activities in the formation and understanding of requests.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2014). Age, hearing loss and the perception of affective utterances in conversational speech. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 1929-1933).

    Abstract

    This study investigates whether age and/or hearing loss influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech fragments. Specifically, this study focuses on the relationship between participants' ratings of affective speech and acoustic parameters known to be associated with arousal and valence (mean F0, intensity, and articulation rate). Ten normal-hearing younger and ten older adults with varying hearing loss were tested on two rating tasks. Stimuli consisted of short sentences taken from a corpus of conversational affective speech. In both rating tasks, participants estimated the value of the emotion dimension at hand using a 5-point scale. For arousal, higher intensity was generally associated with higher arousal in both age groups. Compared to younger participants, older participants rated the utterances as less aroused, and showed a smaller effect of intensity on their arousal ratings. For valence, higher mean F0 was associated with more negative ratings in both age groups. Generally, age group differences in rating affective utterances may not relate to age group differences in hearing loss, but rather to other differences between the age groups, as older participants' rating patterns were not associated with their individual hearing loss.
  • Schoot, L., Menenti, L., Hagoort, P., & Segaert, K. (2014). A little more conversation - The influence of communicative context on syntactic priming in brain and behavior. Frontiers in Psychology, 5: 208. doi:10.3389/fpsyg.2014.00208.

    Abstract

    We report on an fMRI syntactic priming experiment in which we measure brain activity for participants who communicate with another participant outside the scanner. We investigated whether syntactic processing during overt language production and comprehension is influenced by having a (shared) goal to communicate. Although theory suggests this is true, the nature of this influence remains unclear. Two hypotheses are tested: i. syntactic priming effects (fMRI and RT) are stronger for participants in the communicative context than for participants doing the same experiment in a non-communicative context, and ii. syntactic priming magnitude (RT) is correlated with the syntactic priming magnitude of the speaker’s communicative partner. Results showed that across conditions, participants were faster to produce sentences with repeated syntax, relative to novel syntax. This behavioral result converged with the fMRI data: we found repetition suppression effects in the left insula extending into left inferior frontal gyrus (BA 47/45), left middle temporal gyrus (BA 21), left inferior parietal cortex (BA 40), left precentral gyrus (BA 6), bilateral precuneus (BA 7), bilateral supplementary motor cortex (BA 32/8) and right insula (BA 47). We did not find support for the first hypothesis: having a communicative intention does not increase the magnitude of syntactic priming effects (either in the brain or in behavior) per se. We did find support for the second hypothesis: if speaker A is strongly/weakly primed by speaker B, then speaker B is primed by speaker A to a similar extent. We conclude that syntactic processing is influenced by being in a communicative context, and that the nature of this influence is bi-directional: speakers are influenced by each other.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data have established that literacy leads to changes in the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. In this study we use a similar model but manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) that can support processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However, our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008, 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al., 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., Toro, R., Jahanshad, N., Schumann, G., Franke, B., Wright, M. J., Martin, N. G., Agartz, I., Alda, M., Alhusaini, S., Almasy, L., Almeida, J., Alpert, K., Andreasen, N. C., Andreassen, O. A., and 269 more (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging and Behavior, 8(2), 153-182. doi:10.1007/s11682-013-9269-5.

    Abstract

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA’s first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
  • Thorgrimsson, G. (2014). Infants' understanding of communication as participants and observers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2014). Infants’ expectations about gestures and actions in third-party interactions. Frontiers in Psychology, 5: 321. doi:10.3389/fpsyg.2014.00321.

    Abstract

    We investigated 14-month-old infants’ expectations toward a third party addressee of communicative gestures and an instrumental action. Infants’ eye movements were tracked as they observed a person (the Gesturer) point, direct a palm-up request gesture, or reach toward an object, and another person (the Addressee) respond by grasping it. Infants’ looking patterns indicate that when the Gesturer pointed or used the palm-up request, infants anticipated that the Addressee would give the object to the Gesturer, suggesting that they ascribed a motive of request to the gestures. In contrast, when the Gesturer reached for the object, and in a control condition where no action took place, the infants did not anticipate the Addressee’s response. The results demonstrate that infants’ recognition of communicative gestures extends to others’ interactions, and that infants can anticipate how third-party addressees will respond to others’ gestures.
  • Tsuji, S., & Cristia, A. (2014). Perceptual attunement in vowels: A meta-analysis. Developmental Psychobiology, 56(2), 179-191. doi:10.1002/dev.21179.

    Abstract

    Although the majority of evidence on perceptual narrowing in speech sounds is based on consonants, most models of infant speech perception generalize these findings to vowels, assuming that vowel perception improves for vowel sounds that are present in the infant's native language within the first year of life, and deteriorates for non-native vowel sounds over the same period of time. The present meta-analysis contributes to assessing to what extent these descriptions are accurate in the first comprehensive quantitative meta-analysis of perceptual narrowing in infant vowel discrimination, including results from behavioral, electrophysiological, and neuroimaging methods applied to infants 0–14 months of age. An analysis of effect sizes for native and non-native vowel discrimination over the first year of life revealed that they changed with age in opposite directions, being significant by about 6 months of age.
  • Tsuji, S., Nishikawa, K., & Mazuka, R. (2014). Segmental distributions and consonant-vowel association patterns in Japanese infant- and adult-directed speech. Journal of Child Language, 41, 1276-1304. doi:10.1017/S0305000913000469.

    Abstract

    Japanese infant-directed speech (IDS) and adult-directed speech (ADS) were compared on their segmental distributions and consonant-vowel association patterns. Consistent with findings in other languages, a higher ratio of segments that are generally produced early was found in IDS compared to ADS: more labial consonants and low-central vowels, but fewer fricatives. Consonant-vowel associations also favored the early-produced labial-central, coronal-front, coronal-central, and dorsal-back patterns. On the other hand, clear language-specific patterns included a higher frequency of dorsals, affricates, geminates and moraic nasals in IDS. These segments are frequent in adult Japanese, but not in the early productions or the IDS of other studied languages. In combination with previous results, the current study suggests that both fine-tuning (an increased use of early-produced segments) and highlighting (an increased use of language-specifically relevant segments) might modify IDS at the segmental level.
  • Tsuji, S. (2014). The road to native listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2014). Cross-speaker generalisation in two phoneme-level perceptual adaptation processes. Journal of Phonetics, 43, 38-46. doi:10.1016/j.wocn.2014.01.003.

    Abstract

    Speech perception is shaped by listeners' prior experience with speakers. Listeners retune their phonetic category boundaries after encountering ambiguous sounds in order to deal with variations between speakers. Repeated exposure to an unambiguous sound, on the other hand, leads to a decrease in sensitivity to the features of that particular sound. This study investigated whether these changes in the listeners' perceptual systems can generalise to the perception of speech from a novel speaker. Specifically, the experiments looked at whether visual information about the identity of the speaker could prevent generalisation from occurring. In Experiment 1, listeners retuned auditory category boundaries using audiovisual speech input. This shift in the category boundaries affected perception of speech from both the exposure speaker and a novel speaker. In Experiment 2, listeners were repeatedly exposed to unambiguous speech either auditorily or audiovisually, leading to a decrease in sensitivity to the features of the exposure sound. Here, too, the changes affected the perception of both the exposure speaker and the novel speaker. Together, these results indicate that changes in the perceptual system can affect the perception of speech from a novel speaker and that visual speaker identity information did not prevent this generalisation.
  • Van Gijn, R., Hammond, J., Matić, D., Van Putten, S., & Galucio, A. V. (Eds.). (2014). Information structure and reference tracking in complex sentences. Amsterdam: Benjamins.

    Abstract

    This volume is dedicated to exploring the crossroads where complex sentences and information management – more specifically information structure and reference tracking – come together. Complex sentences are a highly relevant but understudied domain for studying notions of IS and RT. On the one hand, a complex sentence can be studied as a mini-unit of discourse consisting of two or more elements describing events, situations, or processes, with its own internal information-structural and referential organization. On the other hand, complex sentences can be studied as parts of larger discourse structures, such as narratives or conversations, in terms of how their information-structural characteristics relate to this wider context. The book offers new perspectives for the study of the interaction between complex sentences and information management, and moreover adds typological breadth by focusing on lesser studied languages from several parts of the world.
  • Van Putten, S. (2014). Information structure in Avatime. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van de Velde, M., Meyer, A. S., & Konopka, A. E. (2014). Message formulation and structural assembly: Describing "easy" and "hard" events with preferred and dispreferred syntactic structures. Journal of Memory and Language, 71(1), 124-144. doi:10.1016/j.jml.2013.11.001.

    Abstract

    When formulating simple sentences to describe pictured events, speakers look at the referents they are describing in the order of mention. Accounts of incrementality in sentence production rely heavily on analyses of this gaze-speech link. To identify systematic sources of variability in message and sentence formulation, two experiments evaluated differences in formulation for sentences describing “easy” and “hard” events (more codable and less codable events) with preferred and dispreferred structures (actives and passives). Experiment 1 employed a subliminal cuing manipulation and a cumulative priming manipulation to increase production of passive sentences. Experiment 2 examined the influence of event codability on formulation without a cuing manipulation. In both experiments, speakers showed an early preference for looking at the agent of the event when constructing active sentences. This preference was attenuated by event codability, suggesting that speakers were less likely to prioritize encoding of a single character at the outset of formulation in “easy” events than in “harder” events. Accessibility of the agent influenced formulation primarily when an event was “harder” to describe. Formulation of passive sentences in Experiment 1 also began with early fixations to the agent but changed with exposure to passive syntax: speakers were more likely to consider the patient as a suitable sentential starting point after cumulative priming. The results show that the message-to-language mapping in production can vary with the ease of encoding an event structure and of generating a suitable linguistic structure.
  • Van Putten, S. (2014). Left-dislocation and subordination in Avatime (Kwa). In R. Van Gijn, J. Hammond, D. Matic, S. van Putten, & A.-V. Galucio (Eds.), Information Structure and Reference Tracking in Complex Sentences (pp. 71-98). Amsterdam: John Benjamins.

    Abstract

    Left dislocation is characterized by a sentence-initial element which is cross-referenced in the remainder of the sentence, and often set off by an intonation break. Because of these properties, left dislocation has been analyzed as an extraclausal phenomenon. Whether or not left dislocation can occur within subordinate clauses has been a matter of debate in the literature, but has never been checked against corpus data. This paper presents data from Avatime, a Kwa (Niger-Congo) language spoken in Ghana, showing that left dislocation occurs within subordinate clauses in spontaneous discourse. This poses a problem for the extraclausal analysis of left dislocation. I show that this problem can best be solved by assuming that Avatime allows the embedding of units larger than a clause.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2014). Hearing words helps seeing words: A cross-modal word repetition effect. Speech Communication, 59, 31-43. doi:10.1016/j.specom.2014.01.001.

    Abstract

    Watching a speaker say words benefits subsequent auditory recognition of the same words. In this study, we tested whether hearing words also facilitates subsequent phonological processing from visual speech, and if so, whether speaker repetition influences the magnitude of this word repetition priming. We used long-term cross-modal repetition priming as a means to investigate the underlying lexical representations involved in listening to and seeing speech. In Experiment 1, listeners identified auditory-only words during exposure and visual-only words at test. Words at test were repeated or new and produced by the exposure speaker or a novel speaker. Results showed a significant effect of cross-modal word repetition priming but this was unaffected by speaker changes. Experiment 2 added an explicit recognition task at test. Listeners’ lipreading performance was again improved by prior exposure to auditory words. Explicit recognition memory was poor, and neither word repetition nor speaker repetition improved it. This suggests that cross-modal repetition priming is neither mediated by explicit memory nor improved by speaker information. Our results suggest that phonological representations in the lexicon are shared across auditory and visual processing, and that speaker information is not transferred across modalities at the lexical level.
  • Van de Velde, M., & Meyer, A. S. (2014). Syntactic flexibility and planning scope: The effect of verb bias on advance planning during sentence recall. Frontiers in Psychology, 5: 1174. doi:10.3389/fpsyg.2014.01174.

    Abstract

    In sentence production, grammatical advance planning scope depends on contextual factors (e.g., time pressure), linguistic factors (e.g., ease of structural processing), and cognitive factors (e.g., production speed). The present study tests the influence of the availability of multiple syntactic alternatives (i.e., syntactic flexibility) on the scope of advance planning during the recall of Dutch dative phrases. We manipulated syntactic flexibility by using verbs with a strong bias or a weak bias toward one structural alternative in sentence frames accepting both verbs (e.g., strong/weak bias: De ober schotelt/serveert de klant de maaltijd [voor] “The waiter dishes out/serves the customer the meal”). To assess lexical planning scope, we varied the frequency of the first post-verbal noun (N1, Experiment 1) or the second post-verbal noun (N2, Experiment 2). In each experiment, 36 speakers produced the verb phrases in a rapid serial visual presentation (RSVP) paradigm. On each trial, they read a sentence presented one word at a time, performed a short distractor task, and then saw a sentence preamble (e.g., De ober…) which they had to complete to form the presented sentence. Onset latencies were compared using linear mixed effects models. N1 frequency did not produce any effects. N2 frequency only affected sentence onsets in the weak verb bias condition and especially in slow speakers. These findings highlight the dependency of planning scope during sentence recall on the grammatical properties of the verb and the frequency of post-verbal nouns. Implications for utterance planning in everyday speech are discussed.
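    As an aside, the linear mixed-effects comparison of onset latencies mentioned above can be sketched in Python with statsmodels. The column names, simulated data and single random-effects grouping below are illustrative assumptions only; the published analyses used the authors' actual trial-level data and, typically, crossed random effects for subjects and items.

```python
# Hedged sketch of a mixed-effects analysis of sentence-onset latencies.
# All data here are simulated stand-ins; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 36, 40
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "n2_frequency": rng.choice(["high", "low"], n_subj * n_trials),
    "verb_bias": rng.choice(["strong", "weak"], n_subj * n_trials),
})
# Build latencies with small additive effects plus noise, purely for illustration.
data["onset_latency"] = (900
                         + 40 * (data["n2_frequency"] == "low")
                         + 30 * (data["verb_bias"] == "weak")
                         + rng.normal(0, 80, len(data)))

# Random intercepts by subject; fully crossed random effects for subjects and items,
# as is standard in psycholinguistics, would require a different tool (e.g. lme4 in R).
model = smf.mixedlm("onset_latency ~ n2_frequency * verb_bias", data,
                    groups=data["subject"])
print(model.fit().summary())
```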
  • Van Rijswijk, R., & Muntendam, A. (2014). The prosody of focus in the Spanish of Quechua-Spanish bilinguals: A case study on noun phrases. International Journal of Bilingualism, 18(6), 614-632. doi:10.1177/1367006912456103.

    Abstract

    This study examines the prosody of focus in the Spanish of 16 Quechua-Spanish bilinguals near Cusco, Peru. Data come from a dialogue game that involved noun phrases consisting of a noun and an adjective. The questions in the game elicited broad focus, contrastive focus on the noun (non-final position) and contrastive focus on the adjective (final position). The phonetic analysis in Praat included peak alignment, peak height, local range and duration of the stressed syllable and word. The study revealed that Cusco Spanish differs from other Spanish varieties. In other Spanish varieties, contrastive focus is marked by early peak alignment, whereas broad focus involves a late peak on the non-final word. Furthermore, in other Spanish varieties contrastive focus is indicated by a higher F0 maximum, a wider local range, post-focal pitch reduction and a longer duration of the stressed syllable/word. For Cusco Spanish no phonological contrast between early and late peak alignment was found. However, peak alignment on the adjective in contrastive focus was significantly earlier than in the two other contexts. For women, similar results were found for the noun in contrastive focus. An additional prominence-lending feature marking contrastive focus concerned duration of the final word. Furthermore, the results revealed a higher F0 maximum for broad focus than for contrastive focus. The findings suggest a prosodic change, which is possibly due to contact with Quechua. The study contributes to research on information structure, prosody and contact-induced language change.
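    For readers unfamiliar with the prosodic measures named above, the sketch below shows one common way to operationalise peak height, peak alignment and local F0 range from a pitch track. The F0 values and syllable boundaries are made-up numbers, not data from the study; the study's own measurements were made in Praat.

```python
import numpy as np

# Made-up F0 track (Hz), sampled every 10 ms over a word, plus hypothetical
# boundaries of the stressed syllable (in seconds).
times = np.arange(0.0, 0.60, 0.01)
f0 = 180 + 40 * np.exp(-((times - 0.22) ** 2) / 0.005)    # a single pitch peak near 220 ms
syll_start, syll_end = 0.15, 0.30

peak_idx = int(np.argmax(f0))
peak_height_hz = float(f0[peak_idx])                       # F0 maximum in the word
peak_alignment_s = float(times[peak_idx]) - syll_start     # peak timing re. stressed-syllable onset
local_range_hz = float(f0.max() - f0.min())                # local pitch excursion
stressed_syllable_duration_s = syll_end - syll_start

print(peak_height_hz, peak_alignment_s, local_range_hz, stressed_syllable_duration_s)
```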
  • Veenstra, A., Acheson, D. J., Bock, K., & Meyer, A. S. (2014). Effects of semantic integration on subject–verb agreement: Evidence from Dutch. Language, Cognition and Neuroscience, 29(3), 355-380. doi:10.1080/01690965.2013.862284.

    Abstract

    The generation of subject–verb agreement is a central component of grammatical encoding. It is sensitive to conceptual and grammatical influences, but the interplay between these factors is still not fully understood. We investigate how semantic integration of the subject noun phrase (‘the secretary of/with the governor’) and the Local Noun Number (‘the secretary with the governor/governors’) affect the ease of selecting the verb form. Two hypotheses are assessed: according to the notional hypothesis, integration encourages the assignment of the singular notional number to the noun phrase and facilitates the choice of the singular verb form. According to the lexical interference hypothesis, integration strengthens the competition between nouns within the subject phrase, making it harder to select the verb form when the nouns mismatch in number. In two experiments, adult speakers of Dutch completed spoken preambles (Experiment 1) or selected appropriate verb forms (Experiment 2). Results showed facilitatory effects of semantic integration (fewer errors and faster responses with increasing integration). These effects did not interact with the effects of the Local Noun Number (slower response times and higher error rates for mismatching than for matching noun numbers). The findings thus support the notional hypothesis and a model of agreement where conceptual and lexical factors independently contribute to the determination of the number of the subject noun phrase and, ultimately, the verb.
  • Veenstra, A. (2014). Semantic and syntactic constraints on the production of subject-verb agreement. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Verkerk, A. (2014). Diachronic change in Indo-European motion event encoding. Journal of Historical Linguistics, 4, 40-83. doi:10.1075/jhl.4.1.02ver.

    Abstract

    There are many different syntactic constructions that languages can use to encode motion events. In recent decades, great advances have been made in the description and study of these syntactic constructions from languages spoken around the world (Talmy 1985, 1991, Slobin 1996, 2004). However, relatively little attention has been paid to historical change in these systems (exceptions are Vincent 1999, Dufresne, Dupuis & Tremblay 2003, Kopecka 2006 and Peyraube 2006). In this article, diachronic change of motion event encoding systems in Indo-European is investigated using the available historical–comparative data and phylogenetic comparative methods adopted from evolutionary biology. It is argued that Proto-Indo-European was not satellite-framed, as suggested by Talmy (2007) and Acedo Matellán and Mateu (2008), but had a mixed motion event encoding system, as is suggested by the available historical–comparative data.
  • Verkerk, A. (2014). The correlation between motion event encoding and path verb lexicon size in the Indo-European language family. Folia Linguistica Historica, 35, 307-358. doi:10.1515/flih.2014.009.

    Abstract

    There have been opposing views on the possibility of a relationship between motion event encoding and the size of the path verb lexicon. Özçalışkan (2004) has proposed that verb-framed and satellite-framed languages should have approximately the same number of path verbs, whereas a review of some of the literature suggests that verb-framed languages typically have a bigger path verb lexicon than satellite-framed languages. In this article I demonstrate that evidence for this correlation can be found through phylogenetic comparative analysis of parallel corpus data from twenty Indo-European languages.
  • Verkerk, A. (2014). The evolutionary dynamics of motion event encoding. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Verkerk, A. (2014). Where Alice fell into: Motion events from a parallel corpus. In B. Szmrecsanyi, & B. Wälchli (Eds.), Aggregating dialectology, typology, and register analysis: Linguistic variation in text and speech (pp. 324-354). Berlin: De Gruyter.
  • Wnuk, E., & Burenhult, N. (2014). Contact and isolation in hunter-gatherer language dynamics: Evidence from Maniq phonology (Aslian, Malay Peninsula). Studies in Language, 38(4), 956-981. doi:10.1075/sl.38.4.06wnu.
  • Wnuk, E., & Majid, A. (2014). Revisiting the limits of language: The odor lexicon of Maniq. Cognition, 131, 125-138. doi:10.1016/j.cognition.2013.12.008.

    Abstract

    It is widely believed that human languages cannot encode odors. While this is true for English and other related languages, data from some non-Western languages challenge this view. Maniq, a language spoken by a small population of nomadic hunter–gatherers in southern Thailand, is such a language. It has a lexicon of over a dozen terms dedicated to smell. We examined the semantics of these smell terms in 3 experiments (exemplar listing, similarity judgment and off-line rating). The exemplar listing task confirmed that Maniq smell terms have complex meanings encoding smell qualities. Analyses of the similarity data revealed that the odor lexicon is coherently structured by two dimensions. The underlying dimensions are pleasantness and dangerousness, as verified by the off-line rating study. Ethnographic data illustrate that smell terms have detailed semantics tapping into broader cultural constructs. Contrary to the widespread view that languages cannot encode odors, the Maniq data show odor can be a coherent semantic domain, thus shedding new light on the limits of language.
  • Zhou, W., & Broersma, M. (2014). Perception of birth language tone contrasts by adopted Chinese children. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 63-66).

    Abstract

    The present study investigates how soon after adoption adoptees forget the phonology of their birth language. Chinese children who were adopted by Dutch families were tested on the perception of birth language tone contrasts before, during, and after perceptual training. Experiment 1 investigated Cantonese tone 2 (High-Rising) and tone 5 (Low-Rising), and Experiment 2 investigated Mandarin tone 2 (High-Rising) and tone 3 (Low-Dipping). In both experiments, participants were adoptees and non-adopted Dutch controls. Results of both experiments show that the tone contrasts were very difficult to perceive for the adoptees, and that adoptees were not better at perceiving the tone contrasts than their non-adopted Dutch peers, before or after training. This demonstrates that forgetting took place relatively soon after adoption, and that the re-exposure that the adoptees were presented with did not lead to an improvement greater than that of the Dutch control participants. Thus, the findings confirm what has been anecdotally reported by adoptees and their parents, but what had not been empirically tested before, namely that birth language forgetting occurs very soon after adoption.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on music-on-speech effects, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
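    The decomposition idea mentioned in this abstract, matching a complex episode against stored constituents as a positive weighted sum, can be illustrated with non-negative least squares. The sketch below is only a schematic analogue with random stand-in vectors, not the model's actual representations or matching procedure.

```python
import numpy as np
from scipy.optimize import nnls

# Each column of `constituents` is a stored simple episode (e.g. a familiarised
# word pattern); `episode` is an incoming complex episode (e.g. a test passage).
# All vectors are random stand-ins.
rng = np.random.default_rng(1)
constituents = rng.random((200, 5))                  # 200-dim patterns, 5 stored episodes
true_weights = np.array([0.0, 1.5, 0.0, 0.7, 0.0])   # the "passage" contains episodes 1 and 3
episode = constituents @ true_weights + rng.normal(0, 0.01, 200)

# Best approximation of the episode as a positive weighted sum of constituents.
weights, residual_norm = nnls(constituents, episode)
print(np.round(weights, 2), round(residual_norm, 3))

# A large weight on a familiarised constituent can be read as a graded
# "recognition" signal, which a separate module could map onto headturn behaviour.
```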
  • Carrion Castillo, A., Franke, B., & Fisher, S. E. (2013). Molecular genetics of dyslexia: An overview. Dyslexia, 19(4), 214-240. doi:10.1002/dys.1464.

    Abstract

    Dyslexia is a highly heritable learning disorder with a complex underlying genetic architecture. Over the past decade, researchers have pinpointed a number of candidate genes that may contribute to dyslexia susceptibility. Here, we provide an overview of the state of the art, describing how studies have moved from mapping potential risk loci, through identification of associated gene variants, to characterization of gene function in cellular and animal model systems. Work thus far has highlighted some intriguing mechanistic pathways, such as neuronal migration, axon guidance, and ciliary biology, but it is clear that we still have much to learn about the molecular networks that are involved. We end the review by highlighting the past, present, and future contributions of the Dutch Dyslexia Programme to studies of genetic factors. In particular, we emphasize the importance of relating genetic information to intermediate neurobiological measures, as well as the value of incorporating longitudinal and developmental data into molecular designs.
  • Dolscheid, S. (2013). High pitches and thick voices: The role of language in space-pitch associations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Dolscheid, S., Graver, C., & Casasanto, D. (2013). Spatial congruity effects reveal metaphors, not markedness. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2213-2218). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0405/index.html.

    Abstract

    Spatial congruity effects have often been interpreted as evidence for metaphorical thinking, but an alternative markedness-based account challenges this view. In two experiments, we directly compared metaphor and markedness explanations for spatial congruity effects, using musical pitch as a testbed. English speakers who talk about pitch in terms of spatial height were tested in speeded space-pitch compatibility tasks. To determine whether space-pitch congruency effects could be elicited by any marked spatial continuum, participants were asked to classify high- and low-frequency pitches as 'high' and 'low' or as 'front' and 'back' (both pairs of terms constitute cases of marked continuums). We found congruency effects in high/low conditions but not in front/back conditions, indicating that markedness is not sufficient to account for congruity effects (Experiment 1). A second experiment showed that congruency effects were specific to spatial words that cued a vertical schema (tall/short), and that congruity effects were not an artifact of polysemy (e.g., 'high' referring both to space and pitch). Together, these results suggest that congruency effects reveal metaphorical uses of spatial schemas, not markedness effects.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of musical pitch: Psychophysical evidence for linguistic relativity. Psychological Science, 24, 613-621. doi:10.1177/0956797612457374.

    Abstract

    Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (na-zok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers’ performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.

  • Enfield, N. J., Dingemanse, M., Baranova, J., Blythe, J., Brown, P., Dirksmeyer, T., Drew, P., Floyd, S., Gipper, S., Gisladottir, R. S., Hoymann, G., Kendrick, K. H., Levinson, S. C., Magyari, L., Manrique, E., Rossi, G., San Roque, L., & Torreira, F. (2013). Huh? What? – A first survey in 21 languages. In M. Hayashi, G. Raymond, & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 343-380). New York: Cambridge University Press.

    Abstract

    Introduction

    A comparison of conversation in twenty-one languages from around the world reveals commonalities and differences in the way that people do open-class other-initiation of repair (Schegloff, Jefferson, and Sacks, 1977; Drew, 1997). We find that speakers of all of the spoken languages in the sample make use of a primary interjection strategy (in English it is Huh?), where the phonetic form of the interjection is strikingly similar across the languages: a monosyllable featuring an open non-back vowel [a, æ, ə, ʌ], often nasalized, usually with rising intonation and sometimes an [h-] onset. We also find that most of the languages have another strategy for open-class other-initiation of repair, namely the use of a question word (usually “what”). Here we find significantly more variation across the languages. The phonetic form of the question word involved is completely different from language to language: e.g., English [wɑt] versus Cha'palaa [ti] versus Duna [aki]. Furthermore, the grammatical structure in which the repair-initiating question word can or must be expressed varies within and across languages. In this chapter we present data on these two strategies – primary interjections like Huh? and question words like What? – with discussion of possible reasons for the similarities and differences across the languages. We explore some implications for the notion of repair as a system, in the context of research on the typology of language use.

    The general outline of this chapter is as follows. We first discuss repair as a system across languages and then introduce the focus of the chapter: open-class other-initiation of repair. A discussion of the main findings follows, where we identify two alternative strategies in the data: an interjection strategy (Huh?) and a question word strategy (What?). Formal features and possible motivations are discussed for the interjection strategy and the question word strategy in order. A final section discusses bodily behavior including posture, eyebrow movements and eye gaze, both in spoken languages and in a sign language.
  • Gialluisi, A., Incollu, S., Pippucci, T., Lepori, M. B., Zappu, A., Loudianos, G., & Romeo, G. (2013). The homozygosity index (HI) approach reveals high allele frequency for Wilson disease in the Sardinian population. European Journal of Human Genetics, 21, 1308-1311. doi:10.1038/ejhg.2013.43.

    Abstract

    Wilson disease (WD) is an autosomal recessive disorder resulting in pathological progressive copper accumulation in liver and other tissues. The worldwide prevalence (P) is about 30/million, while in Sardinia it is in the order of 1/10 000. However, all of these estimates are likely to suffer from an underdiagnosis bias. Indeed, a recent molecular neonatal screening in Sardinia reported a WD prevalence of 1:2707. In this study, we used a new approach that makes it possible to estimate the allelic frequency (q) of an autosomal recessive disorder if one knows the proportion between homozygous and compound heterozygous patients (the homozygosity index, or HI) and the inbreeding coefficient (F) in a sample of affected individuals. We applied the method to a set of 178 Sardinian individuals (3 of whom were born to consanguineous parents), each with a clinical and molecular diagnosis of WD. Taking into account the geographical provenance of the parents of every patient within Sardinia (to make the computation of F more precise), we obtained q=0.0191 (F=7.8 × 10⁻⁴, HI=0.476) and a corresponding prevalence P=1:2732. This result confirms that the prevalence of WD is largely underestimated in Sardinia. On the other hand, the general reliability and applicability of the HI approach to other autosomal recessive disorders is confirmed, especially if one is interested in the genetic epidemiology of populations with a high frequency of consanguineous marriages.
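    As a rough illustration of how the reported figures hang together (a minimal sketch under the standard Hardy-Weinberg assumption for a recessive disorder, not the authors' exact HI computation):

        # Illustrative check only: assumes prevalence ~ q^2 for a recessive disorder
        # under Hardy-Weinberg, with an optional inbreeding correction F*q*(1-q).
        # q and F are the values reported in the abstract above.
        q = 0.0191        # estimated allele frequency (HI approach)
        F = 7.8e-4        # inbreeding coefficient of the Sardinian sample
        p_random = q ** 2                      # ~3.65e-4, i.e. about 1 in 2740
        p_inbred = q ** 2 + F * q * (1 - q)    # ~3.79e-4, i.e. about 1 in 2640
        print(f"~1 in {1 / p_random:.0f} (random mating), ~1 in {1 / p_inbred:.0f} (with F)")

    Both figures are of the same order as the prevalence of roughly 1:2732 reported above; the small differences come from rounding of q and from details of the authors' method.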
  • Gussenhoven, C., & Zhou, W. (2013). Revisiting pitch slope and height effects on perceived duration. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 1365-1369).

    Abstract

    The shape of pitch contours has been shown to have an effect on the perceived duration of vowels. For instance, vowels with high level pitch and vowels with falling contours sound longer than vowels with low level pitch. Depending on whether the comparison is between level pitches or between level and dynamic contours, these findings have been interpreted in two ways. For inter-level comparisons, where the duration results are the reverse of production results, a hypercorrection strategy in production has been proposed [1]. By contrast, for comparisons between level pitches and dynamic contours, the longer durations of dynamic contours in production have been held responsible. We report an experiment with Dutch and Chinese listeners which aimed to show that production data and perception data are each other’s opposites for high, low, falling and rising contours. We explain the results, which are consistent with earlier findings, in terms of the compensatory listening strategy of [2], arguing that the perception effects are due to perceptual compensation for articulatory strategies and constraints, rather than to differences in production compensating for psycho-acoustic perception effects.
  • Hanique, I., Aalders, E., & Ernestus, M. (2013). How robust are exemplar effects in word comprehension? The Mental Lexicon, 8, 269-294. doi:10.1075/ml.8.3.01han.

    Abstract

    This paper studies the robustness of exemplar effects in word comprehension by means of four long-term priming experiments with lexical decision tasks in Dutch. A prime and target represented the same word type and were presented with the same or different degree of reduction. In Experiment 1, participants heard only a small number of trials, a large proportion of repeated words, and stimuli produced by only one speaker. They recognized targets more quickly if these represented the same degree of reduction as their primes, which forms additional evidence for the exemplar effects reported in the literature. Similar effects were found for two speakers who differ in their pronunciations. In Experiment 2, with a smaller proportion of repeated words and more trials between prime and target, participants recognized targets preceded by primes with the same or a different degree of reduction equally quickly. Also, in Experiments 3 and 4, in which listeners were not exposed to one but two types of pronunciation variation (reduction degree and speaker voice), no exemplar effects arose. We conclude that the role of exemplars in speech comprehension during natural conversations, which typically involve several speakers and few repeated content words, may be smaller than previously assumed.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.
  • Johnson, E. K., Lahey, M., Ernestus, M., & Cutler, A. (2013). A multimodal corpus of speech to infant and adult listeners. Journal of the Acoustical Society of America, 134, EL534-EL540. doi:10.1121/1.4828977.

    Abstract

    An audio and video corpus of speech addressed to 28 11-month-olds is described. The corpus allows comparisons between adult speech directed towards infants, familiar adults and unfamiliar adult addressees, as well as of caregivers’ word teaching strategies across word classes. Summary data show that infant-directed speech differed more from speech to unfamiliar than familiar adults; that word teaching strategies for nominals versus verbs and adjectives differed; that mothers mostly addressed infants with multi-word utterances; and that infants’ vocabulary size was unrelated to speech rate, but correlated positively with predominance of continuous caregiver speech (not of isolated words) in the input.
  • Kupisch, T., Akpinar, D., & Stoehr, A. (2013). Gender assignment and gender agreement in adult bilinguals and second language learners of French. Linguistic Approaches to Bilingualism, 3, 150-179. doi:10.1075/lab.3.2.02kup.
  • Mulder, K. (2013). Family and neighbourhood relations in the mental lexicon: A cross-language perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Every day we read and hear thousands of words, seemingly without any effort. Yet a complex mental process unfolds in our brain in the meantime, during which many words other than the word actually presented also become active. This happens especially when those other words resemble the presented word in spelling, pronunciation or meaning. This similarity-driven activation even extends to other languages: similar words become active there as well. Where are the limits of this activation process? When processing the English word 'steam', do you also activate the Dutch word 'stram' (a so-called 'neighbour')? And does 'clock' activate both 'clockwork' and the Dutch 'klokhuis' (two morphological family members from different languages)? Kimberley Mulder investigated how such relations influence the reading process of Dutch-English bilinguals. In several experimental studies, she found that bilinguals activate morphological family members and orthographic neighbours not only from the language they are currently reading in, but also from the other language they know. Reading a word is thus by no means limited to what you actually see: it activates an entire network of words in your brain.

    Additional information

    full text via Radboud Repository
  • Mulder, K., Schreuder, R., & Dijkstra, T. (2013). Morphological family size effects in L1 and L2 processing: An electrophysiological study. Language and Cognitive Processes, 27, 1004-1035. doi:10.1080/01690965.2012.733013.

    Abstract

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and Dutch (L1) lexical decision tasks; in two other experiments, L1 and L2 go/no-go lexical decision tasks were performed while Event-Related Potentials (ERPs) were recorded. Two questions were addressed. First, is the ERP signal sensitive to the morphological productivity of words? Second, does nontarget language activation in L2 processing spread beyond the item itself, to the morphological family of the activated nontarget word? The two behavioural experiments both showed a facilitatory effect of Dutch Family Size, indicating that the morphological family in the L1 is activated regardless of language context. In the two ERP experiments, Family Size effects were found to modulate the N400 component. Less negative waveforms were observed for words with a high L1 Family Size compared to words with a low L1 Family Size in the N400 time window, in both the L1 and the L2 task. In addition, these Family Size effects persisted in later time windows. The data are discussed in light of the Morphological Family Resonance Model (MFRM) of morphological processing and the BIA+ model.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than on semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than on unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
