Publications

  • Mazuka, R., Hasegawa, M., & Tsuji, S. (2014). Development of non-native vowel discrimination: Improvement without exposure. Developmental Psychobiology, 56(2), 192-209. doi:10.1002/dev.21193.

    Abstract

    The present study tested Japanese 4.5- and 10-month-old infants' ability to discriminate three German vowel pairs, none of which are contrastive in Japanese, using a visual habituation–dishabituation paradigm. Japanese adults' discrimination of the same pairs was also tested. The results revealed that Japanese 4.5-month-old infants discriminated the German /bu:k/-/by:k/ contrast, but they showed no evidence of discriminating the /bi:k/-/be:k/ or /bu:k/-/bo:k/ contrasts. Japanese 10-month-old infants, on the other hand, discriminated the German /bi:k/-/be:k/ contrast, while they showed no evidence of discriminating the /bu:k/-/by:k/ or /bu:k/-/bo:k/ contrasts. Japanese adults, in contrast, were highly accurate in their discrimination of all of the pairs. The results indicate that discrimination of non-native contrasts is not always easy even for young infants, and that their ability to discriminate non-native contrasts can improve with age even when they receive no exposure to a language in which the given contrast is phonemic.
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., & Sereno, J. (2005). Cleaving automatic processes from strategic biases in phonological priming. Memory & Cognition, 33(7), 1185-1209.

    Abstract

    In a phonological priming experiment using spoken Dutch words, Dutch listeners were taught varying expectancies and relatedness relations about the phonological form of target words, given particular primes. They learned to expect that, after a particular prime, if the target was a word, it would be from a specific phonological category. The expectancy either involved phonological overlap (e.g., honk-vonk, “base-spark”; expected related) or did not (e.g., nest-galm, “nest-boom”; expected unrelated, where the learned expectation after hearing nest was a word rhyming in -alm). Targets were occasionally inconsistent with expectations. In these inconsistent expectancy trials, targets were either unrelated (e.g., honk-mest, “base-manure”; unexpected unrelated), where the listener was expecting a related target, or related (e.g., nest-pest, “nest-plague”; unexpected related), where the listener was expecting an unrelated target. Participant expectations and phonological relatedness were thus manipulated factorially for three types of phonological overlap (rhyme, one onset phoneme, and three onset phonemes) at three interstimulus intervals (ISIs; 50, 500, and 2,000 msec). Lexical decisions to targets revealed evidence of expectancy-based strategies for all three types of overlap (e.g., faster responses to expected than to unexpected targets, irrespective of phonological relatedness) and evidence of automatic phonological processes, but only for the rhyme and three-phoneme onset overlap conditions and, most strongly, at the shortest ISI (e.g., faster responses to related than to unrelated targets, irrespective of expectations). Although phonological priming thus has both automatic and strategic components, it is possible to cleave them apart.
  • McQueen, J. M., & Huettig, F. (2014). Interference of spoken word recognition through phonological priming from visual objects and printed words. Attention, Perception & Psychophysics, 76, 190-200. doi:10.3758/s13414-013-0560-8.

    Abstract

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words, and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would interfere with lexical decision-making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize related spoken words.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support the claim from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Meira, S., & Terrill, A. (2005). Contrasting contrastive demonstratives in Tiriyó and Lavukaleve. Linguistics, 43(6), 1131-1152. doi:10.1515/ling.2005.43.6.1131.

    Abstract

    This article explores the contrastive function of demonstratives in two languages, Tiriyó (Cariban, northern Brazil) and Lavukaleve (Papuan isolate, Solomon Islands). The contrastive function has to a large extent been neglected in the theoretical literature on demonstrative functions, although preliminary investigations suggest that there are significant differences in demonstrative use in contrastive versus noncontrastive contexts. Tiriyó and Lavukaleve have what seem at first glance to be rather similar three-term demonstrative systems for exophoric deixis, with a proximal term, a distal term, and a middle term. However, under contrastive usage, significant differences between the two systems become apparent. In presenting an analysis of the contrastive use of demonstratives in these two languages, this article aims to show that the contrastive function is an important parameter of variation in demonstrative systems.
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Meyer, A. S., Levelt, W. J. M., & Wissink, M. T. (1996). Een modulair model van zinsproductie. Logopedie, 9(2), 21-31.

    Abstract

    This contribution discusses a modular model of sentence production. The planning processes that precede the production of a sentence can be divided into two main components: conceptualization (devising the content of the utterance) and formulation (determining the linguistic form). The formulation process in turn consists of two components, namely grammatical and phonological encoding. Each of these components again comprises a number of subcomponents. This article describes the specific task of each component, how it is carried out, and how the components work together. Some important methods of language-production research are also discussed.
  • Meyer, A. S. (1996). Lexical access in phrase and sentence production: Results from picture-word interference experiments. Journal of Memory and Language, 35, 477-496. doi:10.1006/jmla.1996.0026.

    Abstract

    Four experiments investigated the span of advance planning for phrases and short sentences. Dutch subjects were presented with pairs of objects, which they named using noun-phrase conjunctions (e.g., the translation equivalent of "the arrow and the bag") or sentences ("the arrow is next to the bag"). Each display was accompanied by an auditory distracter, which was related in form or meaning to the first or second noun of the utterance or unrelated to both. For sentences and phrases, the mean speech onset time was longer when the distracter was semantically related to the first or second noun and shorter when it was phonologically related to the first noun than when it was unrelated. No phonological facilitation was found for the second noun. This suggests that before utterance onset both target lemmas and the first target form were selected.
  • Misersky, J., Gygax, P. M., Canal, P., Gabriel, U., Garnham, A., Braun, F., Chiarini, T., Englund, K., Hanulíková, A., Öttl, A., Valdrova, J., von Stockhausen, L., & Sczesny, S. (2014). Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak. Behavior Research Methods, 46(3), 841-871. doi:10.3758/s13428-013-0409-z.

    Abstract

    We collected norms on the gender stereotypicality of an extensive list of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak, to be used as a basis for the selection of stimulus materials in future studies. We present a Web-based tool (available at https://www.unifr.ch/lcg/) that we developed to collect these norms and that we expect to be useful for other researchers, as well. In essence, we provide (a) gender stereotypicality norms across a number of languages and (b) a tool to facilitate cross-language as well as cross-cultural comparisons when researchers are interested in the investigation of the impact of stereotypicality on the processing of role nouns.
  • Moisik, S. R., Lin, H., & Esling, J. H. (2014). A study of laryngeal gestures in Mandarin citation tones using simultaneous laryngoscopy and laryngeal ultrasound (SLLUS). Journal of the International Phonetic Association, 44, 21-58. doi:10.1017/S0025100313000327.

    Abstract

    In this work, Mandarin tone production is examined using simultaneous laryngoscopy and laryngeal ultrasound (SLLUS). Laryngoscopy is used to obtain information about laryngeal state, and laryngeal ultrasound is used to quantify changes in larynx height. With this methodology, several observations are made concerning the production of Mandarin tone in citation form. Two production strategies are attested for low tone production: (i) larynx lowering and (ii) larynx raising with laryngeal constriction. Another finding is that the larynx rises continually during level tone production, which is interpreted as a means to compensate for declining subglottal pressure. In general, we argue that larynx height plays a supportive role in facilitating f0 change under circumstances where intrinsic mechanisms for f0 control are insufficient to reach tonal targets due to vocal fold inertia. Activation of the laryngeal constrictor can be used to achieve low tone targets through mechanical adjustment to vocal fold dynamics. We conclude that extra-glottal laryngeal mechanisms play important roles in facilitating the production of tone targets and should be integrated into the contemporary articulatory model of tone production.
  • Moisik, S. R., & Esling, J. H. (2014). Modeling biomechanical influence of epilaryngeal stricture on the vocal folds: A low-dimensional model of vocal-ventricular coupling. Journal of Speech, Language, and Hearing Research, 57, S687-S704. doi:10.1044/2014_JSLHR-S-12-0279.

    Abstract

    Purpose: Physiological and phonetic studies suggest that, at moderate levels of epilaryngeal stricture, the ventricular folds impinge upon the vocal folds and influence their dynamical behavior, which is thought to be responsible for constricted laryngeal sounds. In this work, the authors examine this hypothesis through biomechanical modeling. Method: The dynamical response of a low-dimensional, lumped-element model of the vocal folds under the influence of vocal-ventricular fold coupling was evaluated. The model was assessed for F0 and cover-mass phase difference. Case studies of simulations of different constricted phonation types and of glottal stop illustrate various additional aspects of model performance. Results: Simulated vocal-ventricular fold coupling lowers F0 and perturbs the mucosal wave. It also appears to reinforce irregular patterns of oscillation, and it can enhance laryngeal closure in glottal stop production. Conclusion: The effects of simulated vocal-ventricular fold coupling are consistent with sounds, such as creaky voice, harsh voice, and glottal stop, that have been observed to involve epilaryngeal stricture and apparent contact between the vocal folds and ventricular folds. This supports the view that vocal-ventricular fold coupling is important in the vibratory dynamics of such sounds and, furthermore, suggests that these sounds may intrinsically require epilaryngeal stricture.
  • Morgan, J., & Meyer, A. S. (2005). Processing of extrafoveal objects during multiple-object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 428-442. doi:10.1037/0278-7393.31.3.428.

    Abstract

    In 3 experiments, the authors investigated the extent to which objects that are about to be named are processed prior to fixation. Participants named pairs or triplets of objects. One of the objects, initially seen extrafoveally (the interloper), was replaced by a different object (the target) during the saccade toward it. The interloper-target pairs were identical or unrelated objects or visually and conceptually unrelated objects with homophonous names (e.g., animal-baseball bat). The mean latencies and gaze durations for the targets were shorter in the identity and homophone conditions than in the unrelated condition. This was true when participants viewed a fixation mark until the interloper appeared and when they fixated on another object and prepared to name it while viewing the interloper. These results imply that objects that are about to be named may undergo far-reaching processing, including access to their names, prior to fixation.
  • Moscoso del Prado Martín, F., Deutsch, A., Frost, R., Schreuder, R., De Jong, N. H., & Baayen, R. H. (2005). Changing places: A cross-language perspective on frequency and family size in Dutch and Hebrew. Journal of Memory and Language, 53(4), 496-512. doi:10.1016/j.jml.2005.07.003.

    Abstract

    This study uses the morphological family size effect as a tool for exploring the degree of isomorphism in the networks of morphologically related words in the Hebrew and Dutch mental lexicon. Hebrew and Dutch are genetically unrelated, and they structure their morphologically complex words in very different ways. Two visual lexical decision experiments document substantial cross-language predictivity for the family size measure after partialing out the effect of word frequency and word length. Our data show that the morphological family size effect is not restricted to Indo-European languages but extends to languages with non-concatenative morphology. In Hebrew, a new inhibitory component of the family size effect emerged that arises when a Hebrew root participates in different semantic fields.
  • Mulder, K., Dijkstra, T., Schreuder, R., & Baayen, R. H. (2014). Effects of primary and secondary morphological family size in monolingual and bilingual word processing. Journal of Memory and Language, 72, 59-84. doi:10.1016/j.jml.2013.12.004.

    Abstract

    This study investigated primary and secondary morphological family size effects in monolingual and bilingual processing, combining experimentation with computational modeling. Family size effects were investigated in an English lexical decision task for Dutch-English bilinguals and English monolinguals using the same materials. To account for the possibility that family size effects may only show up in words that resemble words in the native language of the bilinguals, the materials included, in addition to purely English items, Dutch-English cognates (identical and non-identical in form). As expected, the monolingual data revealed facilitatory effects of English primary family size. Moreover, while the monolingual data did not show a main effect of cognate status, only form-identical cognates revealed an inhibitory effect of English secondary family size. The bilingual data showed stronger facilitation for identical cognates, but as for monolinguals, this effect was attenuated for words with a large secondary family size. In all, the Dutch-English primary and secondary family size effects in bilinguals were strikingly similar to those of monolinguals. Computational simulations suggest that the primary and secondary family size effects can be understood in terms of discriminative learning of the English lexicon.

  • Nakayama, M., Verdonschot, R. G., Sears, C. R., & Lupker, S. J. (2014). The masked cognate translation priming effect for different-script bilinguals is modulated by the phonological similarity of cognate words: Further support for the phonological account. Journal of Cognitive Psychology, 26(7), 714-724. doi:10.1080/20445911.2014.953167.

    Abstract

    The effect of phonological similarity on L1-L2 cognate translation priming was examined with Japanese-English bilinguals. According to the phonological account, the cognate priming effect for different-script bilinguals consists of additive effects of phonological and conceptual facilitation. If true, then the size of the cognate priming effect would be directly influenced by the phonological similarity of cognate translation equivalents. The present experiment tested and confirmed this prediction: the cognate priming effect was significantly larger for cognate prime-target pairs with high phonological similarity than for pairs with low phonological similarity. Implications for the nature of lexical processing in same- versus different-script bilinguals are discussed.
  • Narasimhan, B. (2005). Splitting the notion of 'agent': Case-marking in early child Hindi. Journal of Child Language, 32(4), 787-803. doi:10.1017/S0305000905007117.

    Abstract

    Two construals of agency are evaluated as possible innate biases guiding case-marking in children. A BROAD construal treats agentive arguments of multi-participant and single-participant events as being similar. A NARROWER construal is restricted to agents of multi-participant events. In Hindi, ergative case-marking is associated with agentive participants of multi-participant, perfective actions. Children relying on a broad or narrow construal of agent are predicted to overextend ergative case-marking to agentive participants of transitive imperfective actions and/or intransitive actions. Longitudinal data from three children acquiring Hindi (1;7 to 3;9) reveal no overextension errors, suggesting early sensitivity to distributional patterns in the input.
  • Narasimhan, B., Budwig, N., & Murty, L. (2005). Argument realization in Hindi caregiver-child discourse. Journal of Pragmatics, 37(4), 461-495. doi:10.1016/j.pragma.2004.01.005.

    Abstract

    An influential claim in the child language literature posits that children use structural cues in the input language to acquire verb meaning (Gleitman, 1990). One such cue is the number of arguments co-occurring with the verb, which provides an indication as to the event type associated with the verb (Fisher, 1995). In some languages however (e.g. Hindi), verb arguments are ellipted relatively freely, subject to certain discourse-pragmatic constraints. In this paper, we address three questions: Is the pervasive argument ellipsis characteristic of adult Hindi also found in Hindi-speaking caregivers’ input? If so, do children consequently make errors in verb transitivity? How early do children learning a split-ergative language, such as Hindi, exhibit sensitivity to discourse-pragmatic influences on argument realization? We show that there is massive argument ellipsis in caregivers’ input to 3–4 year-olds. However, children acquiring Hindi do not make transitivity errors in their own speech. Nor do they elide arguments randomly. Rather, even at this early age, children appear to be sensitive to discourse-pragmatics in their own spontaneous speech production. These findings in a split-ergative language parallel patterns of argument realization found in children acquiring both nominative-accusative languages (e.g. Korean) and ergative-absolutive languages (e.g. Tzeltal, Inuktitut).
  • Narasimhan, B. (2003). Motion events and the lexicon: The case of Hindi. Lingua, 113(2), 123-160. doi:10.1016/S0024-3841(02)00068-2.

    Abstract

    English, and a variety of Germanic languages, allow constructions such as the bottle floated into the cave , whereas languages such as Spanish, French, and Hindi are highly restricted in allowing manner of motion verbs to occur with path phrases. This typological observation has been accounted for in terms of the conflation of complex meaning in basic or derived verbs [Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Levin, B., Rappaport-Hovav, M., 1995. Unaccusativity: At the Syntax–Lexical Semantics Interface. MIT Press, Cambridge, MA], or the presence of path “satellites” with special grammatical properties in the lexicon of languages such as English, which allow such phrasal combinations [cf. Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Talmy, L., 1991. Path to realisation: via aspect and result. In: Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 480–520]. I use data from Hindi to show that there is little empirical support for the claim that the constraint on the phrasal combination is correlated with differences in verb meaning or the presence of satellites in the lexicon of a language. However, proposals which eschew lexicalization accounts for more general aspectual constraints on the manner verb + path phrase combination in Spanish-type languages (Aske, J., 1989. Path Predicates in English and Spanish: A Closer look. In: Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 1–14) cannot account for the full range of data in Hindi either. On the basis of these facts, I argue that an empirically adequate account can be formulated in terms of a general mapping constraint, formulated in terms of whether the lexical requirements of the verb strictly or weakly constrain its syntactic privileges of occurrence. In Hindi, path phrases can combine with manner of motion verbs only to the degree that they are compatible with the semantic profile of the verb. Path phrases in English, on the other hand, can extend the verb's “semantic profile” subject to certain constraints. I suggest that path phrases are licensed in English by the semantic requirements of the “construction” in which they appear rather than by the selectional requirements of the verb (Fillmore, C., Kay, P., O'Connor, M.C., 1988, Regularity and idiomaticity in grammatical constructions. Language 64, 501–538; Jackendoff, 1990, Semantic Structures. MIT Press, Cambridge, MA; Goldberg, 1995, Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London).
  • Neger, T. M., Rietveld, T., & Janse, E. (2014). Relationship between perceptual learning in speech and statistical learning in younger and older adults. Frontiers in Human Neuroscience, 8: 628. doi:10.3389/fnhum.2014.00628.

    Abstract

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
  • Nieuwland, M. S. (2014). “Who’s he?” Event-related brain potentials and unbound pronouns. Journal of Memory and Language, 76, 1-28. doi:10.1016/j.jml.2014.06.002.

    Abstract

    Three experiments used event-related potentials to examine the processing consequences of gender-mismatching pronouns (e.g., “The aunt found out that he had won the lottery”), which have been shown to elicit P600 effects when judged as syntactically anomalous (Osterhout & Mobley, 1995). In each experiment, mismatching pronouns elicited a sustained, frontal negative shift (Nref) compared to matching pronouns: when participants were instructed to posit a new referent for mismatching pronouns (Experiment 1), and without this instruction (Experiments 2 and 3). In Experiments 1 and 2, the observed Nref was robust only in individuals with higher reading span scores. In Experiment 1, participants with lower reading span showed P600 effects instead, consistent with an attempt at coreferential interpretation despite gender mismatch. The results from the experiments combined suggest that, in absence of an acceptability judgment task, people are more likely to interpret mismatching pronouns as referring to an unknown, unheralded antecedent than as a grammatically anomalous anaphor for a given antecedent.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2005). Testing the limits of the semantic illusion phenomenon: ERPs reveal temporary semantic change deafness in discourse comprehension. Cognitive Brain Research, 24(3), 691-701. doi:10.1016/j.cogbrainres.2005.04.003.

    Abstract

    In general, language comprehension is surprisingly reliable. Listeners very rapidly extract meaning from the unfolding speech signal, on a word-by-word basis, and usually successfully. Research on ‘semantic illusions’ however suggests that under certain conditions, people fail to notice that the linguistic input simply doesn't make sense. In the current event-related brain potentials (ERP) study, we examined whether listeners would, under such conditions, spontaneously detect an anomaly in which a human character central to the story at hand (e.g., “a tourist”) was suddenly replaced by an inanimate object (e.g., “a suitcase”). Because this replacement introduced a very powerful coherence break, we expected listeners to immediately notice the anomaly and generate the standard ERP effect associated with incoherent language, the N400 effect. However, instead of the standard N400 effect, anomalous words elicited a positive ERP effect from about 500–600 ms onwards. The absence of an N400 effect suggests that subjects did not immediately notice the anomaly, and that for a few hundred milliseconds the comprehension system has converged on an apparently coherent but factually incorrect interpretation. The presence of the later ERP effect indicates that subjects were processing for comprehension and did ultimately detect the anomaly. Therefore, we take the absence of a regular N400 effect as the online manifestation of a temporary semantic illusion. Our results also show that even attentive listeners sometimes fail to notice a radical change in the nature of a story character, and therefore suggest a case of short-lived ‘semantic change deafness’ in language comprehension.
  • Nitschke, S., Serratrice, L., & Kidd, E. (2014). The effect of linguistic nativeness on structural priming in comprehension. Language, Cognition and Neuroscience, 29(5), 525-542. doi:10.1080/01690965.2013.766355.

    Abstract

    The role of linguistic experience in structural priming is unclear. Although it is explicitly predicted that experience contributes to priming effects on several theoretical accounts, to date the empirical data have been mixed. To investigate this issue, we conducted four sentence-picture-matching experiments that primed for the comprehension of object relative clauses in L1 and proficient L2 speakers of German. It was predicted that an effect of experience would only be observed in instances where priming effects are likely to be weak in experienced L1 speakers. In such circumstances, priming should be stronger in L2 speakers because of their comparative lack of experience using and processing the L2 test structures. The experiments systematically manipulated the primes to decrease lexical and conceptual overlap between primes and targets. The results supported the hypothesis: in two of the four studies, the L2 group showed larger priming effects in comparison to the L1 group. This effect only occurred when animacy differences were introduced between the prime and target. The results suggest that linguistic experience as operationalised by nativeness affects the strength of priming, specifically in cases where there is a lack of lexical and conceptual overlap between prime and target.
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Nudel, R., Simpson, N. H., Baird, G., O’Hare, A., Conti-Ramsden, G., Bolton, P. F., Hennessy, E. R., SLI Consortium, Monaco, A. P., Fairfax, B. P., Knight, J. C., Winney, B., Fisher, S. E., & Newbury, D. F. (2014). Associations of HLA alleles with specific language impairment. Journal of Neurodevelopmental Disorders, 6: 1. doi:10.1186/1866-1955-6-1.

    Abstract

    Background: Human leukocyte antigen (HLA) loci have been implicated in several neurodevelopmental disorders in which language is affected. However, to date, no studies have investigated the possible involvement of HLA loci in specific language impairment (SLI), a disorder that is defined primarily by unexpected language impairment. We report association analyses of single-nucleotide polymorphisms (SNPs) and HLA types in a cohort of individuals affected by language impairment. Methods: We perform quantitative association analyses of three linguistic measures and case-control association analyses using both SNP data and imputed HLA types. Results: Quantitative association analyses of imputed HLA types suggested a role for the HLA-A locus in susceptibility to SLI. HLA-A A1 was associated with a measure of short-term memory (P = 0.004) and A3 with expressive language ability (P = 0.006). Parent-of-origin effects were found between HLA-B B8 and HLA-DQA1*0501 and receptive language. These alleles have a negative correlation with receptive language ability when inherited from the mother (P = 0.021 and P = 0.034, respectively) but are positively correlated with the same trait when paternally inherited (P = 0.013 and P = 0.029, respectively). Finally, case-control analyses using imputed HLA types indicated that the DR10 allele of HLA-DRB1 was more frequent in individuals with SLI than in population controls (P = 0.004, relative risk = 2.575), as has been reported for individuals with attention deficit hyperactivity disorder (ADHD). Conclusion: These preliminary data provide an intriguing link to those described by previous studies of other neurodevelopmental disorders and suggest a possible role for HLA loci in language disorders.
  • Nudel, R., Simpson, N. H., Baird, G., O’Hare, A., Conti-Ramsden, G., Bolton, P. F., Hennessy, E. R., The SLI Consortium, Ring, S. M., Smith, G. D., Francks, C., Paracchini, S., Monaco, A. P., Fisher, S. E., & Newbury, D. F. (2014). Genome-wide association analyses of child genotype effects and parent-of-origin effects in specific language impairment. Genes, Brain and Behavior, 13, 418-429. doi:10.1111/gbb.12127.

    Abstract

    Specific language impairment (SLI) is a neurodevelopmental disorder that affects linguistic abilities when development is otherwise normal. We report the results of a genome-wide association study of SLI which included parent-of-origin effects and child genotype effects and used 278 families of language-impaired children. The child genotype effects analysis did not identify significant associations. We found genome-wide significant paternal parent-of-origin effects on chromosome 14q12 (P = 3.74 × 10⁻⁸) and suggestive maternal parent-of-origin effects on chromosome 5p13 (P = 1.16 × 10⁻⁷). A subsequent targeted association of six single-nucleotide polymorphisms (SNPs) on chromosome 5 in 313 language-impaired individuals from the ALSPAC cohort replicated the maternal effects, albeit in the opposite direction (P = 0.001); as fathers’ genotypes were not available in the ALSPAC study, the replication analysis did not include paternal parent-of-origin effects. The paternally-associated SNP on chromosome 14 yields a non-synonymous coding change within the NOP9 gene. This gene encodes an RNA-binding protein that has been reported to be significantly dysregulated in individuals with schizophrenia. The region of maternal association on chromosome 5 falls between the PTGER4 and DAB2 genes, in a region previously implicated in autism and ADHD. The top SNP in this association locus is a potential expression QTL of ARHGEF19 (also called WGEF) on chromosome 1. Members of this protein family have been implicated in intellectual disability. In sum, this study implicates parent-of-origin effects in language impairment, and adds an interesting new dimension to the emerging picture of shared genetic etiology across various neurodevelopmental disorders.
  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipitoparietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • Olivers, C. N. L., Huettig, F., Singh, J. P., & Mishra, R. K. (2014). The influence of literacy on visual search. Visual Cognition, 21, 74-101. doi:10.1080/13506285.2013.875498.

    Abstract

    Currently one in five adults is still unable to read despite a rapidly developing world. Here we show that (il)literacy has important consequences for the cognitive ability of selecting relevant information from a visual display of non-linguistic material. In two experiments we compared low to high literacy observers on both an easy and a more difficult visual search task involving different types of chicken. Low literates were consistently slower (as indicated by overall RTs) in both experiments. More detailed analyses, including eye movement measures, suggest that the slowing is partly due to display wide (i.e. parallel) sensory processing but mainly due to post-selection processes, as low literates needed more time between fixating the target and generating a manual response. Furthermore, high and low literacy groups differed in the way search performance was distributed across the visual field. High literates performed relatively better when the target was presented in central regions, especially on the right. At the same time, high literacy was also associated with a more general bias towards the top and the left, especially in the more difficult search. We conclude that learning to read results in an extension of the functional visual field from the fovea to parafoveal areas, combined with some asymmetry in scan pattern influenced by the reading direction, both of which also influence other (e.g. non-linguistic) tasks such as visual search.
  • Onnink, A. M. H., Zwiers, M. P., Hoogman, M., Mostert, J. C., Kan, C. C., Buitelaar, J., & Franke, B. (2014). Brain alterations in adult ADHD: Effects of gender, treatment and comorbid depression. European Neuropsychopharmacology, 24(3), 397-409. doi:10.1016/j.euroneuro.2013.11.011.

    Abstract

    Children with attention-deficit/hyperactivity disorder (ADHD) have smaller volumes of total brain matter and subcortical regions, but it is unclear whether these represent delayed maturation or persist into adulthood. We performed a structural MRI study in 119 adult ADHD patients and 107 controls and investigated total gray and white matter and volumes of accumbens, caudate, globus pallidus, putamen, thalamus, amygdala and hippocampus. Additionally, we investigated effects of gender, stimulant treatment and history of major depression (MDD). There was no main effect of ADHD on the volumetric measures, nor was any effect observed in a secondary voxel-based morphometry (VBM) analysis of the entire brain. However, in the volumetric analysis a significant gender by diagnosis interaction was found for caudate volume. Male patients showed reduced right caudate volume compared to male controls, and caudate volume correlated with hyperactive/impulsive symptoms. Furthermore, patients using stimulant treatment had a smaller right hippocampus volume compared to medication-naïve patients and controls. ADHD patients with previous MDD showed smaller hippocampus volume compared to ADHD patients with no MDD. While these data were obtained in a cross-sectional sample and need to be replicated in a longitudinal study, the findings suggest that developmental brain differences in ADHD largely normalize in adulthood. Reduced caudate volume in male patients may point to distinct neurobiological deficits underlying ADHD in the two genders. Smaller hippocampus volume in ADHD patients with previous MDD is consistent with neurobiological alterations observed in MDD.
  • Ortega, G. (2014). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. Sign Language and Linguistics, 17, 267-275. doi:10.1075/sll.17.2.09ort.
  • O'Shannessy, C. (2005). Light Warlpiri: A new language. Australian Journal of Linguistics, 25(1), 31-57. doi:10.1080/07268600500110472.
  • Otake, T., Yoneyama, K., Cutler, A., & van der Lugt, A. (1996). The representation of Japanese moraic nasals. Journal of the Acoustical Society of America, 100, 3831-3842. doi:10.1121/1.417239.

    Abstract

    Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5(1/2), 219-240.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech–gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.

    Abstract

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language.
  • Ozyurek, A. (1996). How children talk about a conversation. Journal of Child Language, 23(3), 693-714. doi:10.1017/S0305000900009004.

    Abstract

    This study investigates how children of different ages talk about a conversation that they have witnessed. 48 Turkish children, five, nine and thirteen years in age, saw a televised dialogue between two Sesame Street characters (Bert and Ernie). Afterward, they narrated what they had seen and heard. Their reports were analysed for the development of linguistic devices used to orient their listeners to the relevant properties of a conversational exchange. Each utterance in the child's narrative was analysed as to its conversational role: (1) whether the child used direct or indirect quotation frames; (2) whether the child marked the boundaries of conversational turns using speakers' names and (3) whether the child used a marker for pairing of utterances made by different speakers (agreement-disagreement, request-refusal, questioning-answering). Within pairings, children's use of (a) the temporal and evaluative connectivity markers and (b) the kind of verb of saying were identified. The data indicate that there is a developmental change in children's ability to use appropriate linguistic means to orient their listeners to the different properties of a conversation. The development and use of these linguistic means enable the child to establish different social roles in a narrative interaction. The findings are interpreted in terms of the child's social-communicative development from being a 'character' to becoming a 'narrator' and 'author' of the reported conversation in the narrative situation.
  • Pacheco, A., Araújo, S., Faísca, L., de Castro, S. L., Petersson, K. M., & Reis, A. (2014). Dyslexia's heterogeneity: Cognitive profiling of Portuguese children with dyslexia. Reading and Writing, 27(9), 1529-1545. doi:10.1007/s11145-014-9504-5.

    Abstract

    Recent studies have emphasized that developmental dyslexia is a multiple-deficit disorder, in contrast to the traditional single-deficit view. In this context, cognitive profiling of children with dyslexia may be a relevant contribution to this unresolved discussion. The aim of this study was to profile 36 Portuguese children with dyslexia from the 2nd to 5th grade. Hierarchical cluster analysis was used to group participants according to their phonological awareness, rapid automatized naming, verbal short-term memory, vocabulary, and nonverbal intelligence abilities. The results suggested a two-cluster solution: a group with poorer performance on phoneme deletion and rapid automatized naming compared with the remaining variables (Cluster 1) and a group characterized by underperforming on the variables most related to phonological processing (phoneme deletion and digit span), but not on rapid automatized naming (Cluster 2). Overall, the results seem more consistent with a hybrid perspective, such as that proposed by Pennington and colleagues (2012), for understanding the heterogeneity of dyslexia. The importance of characterizing the profiles of individuals with dyslexia becomes clear within the context of constructing remediation programs that are specifically targeted and are more effective in terms of intervention outcome.
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Payne, B. R., Grison, S., Gao, X., Christianson, K., Morrow, D. G., & Stine-Morrow, E. A. L. (2014). Aging and individual differences in binding during sentence understanding: Evidence from temporary and global syntactic attachment ambiguities. Cognition, 130(2), 157-173. doi:10.1016/j.cognition.2013.10.005.

    Abstract

    We report an investigation of aging and individual differences in binding information during sentence understanding. An age-continuous sample of adults (N=91), ranging from 18 to 81 years of age, read sentences in which a relative clause could be attached high to a head noun NP1, attached low to its modifying prepositional phrase NP2 (e.g., The son of the princess who scratched himself/herself in public was humiliated), or in which the attachment site of the relative clause was ultimately indeterminate (e.g., The maid of the princess who scratched herself in public was humiliated). Word-by-word reading times and comprehension (e.g., who scratched?) were measured. A series of mixed-effects models were fit to the data, revealing: (1) that, on average, NP1-attached sentences were harder to process and comprehend than NP2-attached sentences; (2) that these average effects were independently moderated by verbal working memory capacity and reading experience, with effects that were most pronounced in the oldest participants and; (3) that readers on average did not allocate extra time to resolve global ambiguities, though older adults with higher working memory span did. Findings are discussed in relation to current models of lifespan cognitive development, working memory, language experience, and the role of prosodic segmentation strategies in reading. Collectively, these data suggest that aging brings differences in sentence understanding, and these differences may depend on independent influences of verbal working memory capacity and reading experience.
  • Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.

    Abstract

    We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
  • Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.

    Abstract

    Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while being asleep [1]. This might even be underestimated, because often people do not notice that they are sleep-talking, unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech.
  • Penke, M., Janssen, U., Indefrey, P., & Seitz, R. (2005). No evidence for a rule/procedural deficit in German patients with Parkinson's disease. Brain and Language, 95(1), 139-140. doi:10.1016/j.bandl.2005.07.078.
  • Perlman, M., & Cain, A. A. (2014). Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language. Gesture, 14(3), 320-350. doi:10.1075/gest.14.3.03per.

    Abstract

    Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore, the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left lateralized frontoparietal and anterior temporal activations while surface-based perceptually oriented processing (shallow instruction) yielded right lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65(66), 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.

    Abstract

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
  • Piai, V., Roelofs, A., & Schriefers, H. (2014). Locus of semantic interference in picture naming: Evidence from dual-task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 147-165. doi:10.1037/a0033745.

    Abstract

    Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in perceptual or conceptual processing, prior to lexical response-selection. In these studies, participants manually responded to a tone and named a picture while ignoring a written distractor word. The stimulus onset asynchrony (SOA) between tone and picture–word stimulus was manipulated. Semantic interference in naming latencies was present at long tone pre-exposure SOAs, but reduced or absent at short SOAs. Under the prevailing structural or strategic response-selection bottleneck and central capacity sharing models of dual-task performance, the underadditivity of the effects of SOA and stimulus type suggests that semantic interference emerges before lexical response-selection. However, in more recent studies, additive effects of SOA and stimulus type were obtained. Here, we examined the discrepancy in results between these studies in 6 experiments in which we systematically manipulated various dimensions on which these earlier studies differed, including tasks, materials, stimulus types, and SOAs. In all our experiments, additive effects of SOA and stimulus type on naming latencies were obtained. These results strongly suggest that the semantic interference effect arises after perceptual and conceptual processing, during lexical response-selection or later. We discuss several theoretical alternatives with respect to their potential to account for the discrepancy between the present results and other studies showing underadditivity.
  • Piai, V., Roelofs, A., & Maris, E. (2014). Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. Neuropsychologia, 53, 146-156. doi:10.1016/j.neuropsychologia.2013.11.014.

    Abstract

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1996). Observational and checklist measures of vocabulary composition: What do they mean? Journal of Child Language, 23(3), 573-590. doi:10.1017/S0305000900008953.

    Abstract

    Observational and checklist measures of vocabulary composition have both recently been used to look at the absolute proportion of nouns in children's early vocabularies. However, they have tended to generate rather different results. The present study is an attempt to investigate the relationship between such measures in a sample of 26 children between 1;1 and 2;1 at approximately 50 and 100 words. The results show that although observational and checklist measures are significantly correlated, there are also systematic quantitative differences between them which seem to reflect a combination of checklist, maternal-report and observational sampling biases. This suggests that, although both kinds of measure may represent good indices of differences in vocabulary size and composition across children and hence be useful as dependent variables in correlational research, neither may be ideal for estimating the absolute proportion of nouns in children's vocabularies. The implication is that questions which rely on information about the absolute proportion of particular kinds of words in children's vocabularies can only be properly addressed by detailed longitudinal studies in which an attempt is made to collect more comprehensive vocabulary records for individual children.
  • Pine, J. M., Rowland, C. F., Lieven, E. V., & Theakston, A. L. (2005). Testing the Agreement/Tense Omission Model: Why the data on children's use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language, 32(2), 269-289. doi:10.1017/S0305000905006860.

    Abstract

    One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schütze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present study tests this prediction on data from 12 children between the ages of 1;8.22 and 3;0.10. This is done, first, by identifying children who produced a reasonably large number of non-nominative 3psg subjects; second, by estimating the expected rate of agreeing verbs with masculine and feminine non-nominative subjects in these children's speech; and, third, by examining the actual rate at which agreeing verb forms occurred with non-nominative subjects in those areas of the data in which the expected error rate was significantly greater than 10%. The results show, first, that only three of the children produced enough non-nominative subjects to allow a reasonable test of the ATOM to be made; second, that for all three of these children, the only area of the data in which the expected frequency of agreeing verbs with non-nominative subjects was significantly greater than 10% was their use of feminine case-marked subjects; and third, that for all three of these children, the rate of agreeing verbs with non-nominative feminine subjects was over 30%. These results raise serious doubts about the claim that children's use of non-nominative subjects can be explained in terms of AGR optionality, and suggest the need for a model of pronoun case-marking error that can explain why some children produce agreeing verb forms with non-nominative subjects as often as they do.
  • Pinget, A.-F., Bosker, H. R., Quené, H., & de Jong, N. H. (2014). Native speakers' perceptions of fluency and accent in L2 speech. Language Testing, 31, 349-365. doi:10.1177/0265532214526177.

    Abstract

    Oral fluency and foreign accent distinguish L2 from L1 speech production. In language testing practices, both fluency and accent are usually assessed by raters. This study investigates what exactly native raters of fluency and accent take into account when judging L2. Our aim is to explore the relationship between objectively measured temporal, segmental and suprasegmental properties of speech on the one hand, and fluency and accent as rated by native raters on the other hand. For 90 speech fragments from Turkish and English L2 learners of Dutch, several acoustic measures of fluency and accent were calculated. In Experiment 1, 20 native speakers of Dutch rated the L2 Dutch samples on fluency. In Experiment 2, 20 different untrained native speakers of Dutch judged the L2 Dutch samples on accentedness. Regression analyses revealed, first, that acoustic measures of fluency were good predictors of fluency ratings; second, that segmental and suprasegmental measures of accent could predict some variance of accent ratings; and third, that perceived fluency and perceived accent were only weakly related. In conclusion, this study shows that fluency and perceived foreign accent can be judged as separate constructs.
  • Pippucci, T., Magi, A., Gialluisi, A., & Romeo, G. (2014). Detection of runs of homozygosity from whole exome sequencing data: State of the art and perspectives for clinical, population and epidemiological studies. Human Heredity, 77, 63-72. doi:10.1159/000362412.

    Abstract

    Runs of homozygosity (ROH) are sizeable stretches of homozygous genotypes at consecutive polymorphic DNA marker positions, traditionally captured by means of genome-wide single nucleotide polymorphism (SNP) genotyping. With the advent of next-generation sequencing (NGS) technologies, a number of methods initially devised for the analysis of SNP array data (those based on sliding-window algorithms such as PLINK or GERMLINE and graphical tools like HomozygosityMapper) or specifically conceived for NGS data have been adopted for the detection of ROH from whole exome sequencing (WES) data. In the latter group, algorithms for both graphical representation (AgileVariantMapper, HomSI) and computational detection (H3M2) of WES-derived ROH have been proposed. Here we examine these different approaches and discuss available strategies to implement ROH detection in WES analysis. Among sliding-window algorithms, PLINK appears to be well-suited for the detection of ROH, especially of the long ones. As a method specifically tailored for WES data, H3M2 outperforms existing algorithms especially on short and medium ROH. We conclude that, notwithstanding the irregular distribution of exons, WES data can be used with some approximation for unbiased genome-wide analysis of ROH features, with promising applications to homozygosity mapping of disease genes, comparative analysis of populations and epidemiological studies based on consanguinity.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62(2-4), 146-159. doi:10.1159/000090095.

    Abstract

    This study investigates the relationship between word repetition, predictability from neighbouring words, and articulatory reduction in Dutch. For the seven most frequent words ending in the adjectival suffix -lijk, 40 occurrences were randomly selected from a large database of face-to-face conversations. Analysis of the selected tokens showed that the degree of articulatory reduction (as measured by duration and number of realized segments) was affected by repetition, predictability from the previous word and predictability from the following word. Interestingly, not all of these effects were significant across morphemes and target words. Repetition effects were limited to suffixes, while effects of predictability from the previous word were restricted to the stems of two of the seven target words. Predictability from the following word affected the stems of all target words equally, but not all suffixes. The implications of these findings for models of speech production are discussed.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118(4), 2561-2569. doi:10.1121/1.2011150.

    Abstract

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate for these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
  • Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.

    Abstract

    This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
  • Poellmann, K., Mitterer, H., & McQueen, J. M. (2014). Use what you can: Storage, abstraction processes and perceptual adjustments help listeners recognize reduced forms. Frontiers in Psychology, 5: 437. doi:10.3389/fpsyg.2014.00437.

    Abstract

    Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., "minderij" instead of "binderij", 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., "p'raat" instead of "paraat", 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
  • Poletiek, F. H., & Rassin, E. (Eds.). (2005). Het (on)bewuste [The (un)conscious] [Special Issue]. De Psycholoog.
  • Poletiek, F. H., & Van den Bos, E. J. (2005). Het onbewuste is een dader met een motief [The unconscious is a culprit with a motive]. De Psycholoog, 40(1), 11-17.
  • Poletiek, F. H. (1996). Paradoxes of falsification. Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 49(2), 447-462. doi:10.1080/713755628.
  • St Pourcain, B., Cents, R. A., Whitehouse, A. J., Haworth, C. M., Davis, O. S., O’Reilly, P. F., Roulstone, S., Wren, Y., Ang, Q. W., Velders, F. P., Evans, D. M., Kemp, J. P., Warrington, N. M., Miller, L., Timpson, N. J., Ring, S. M., Verhulst, F. C., Hofman, A., Rivadeneira, F., Meaburn, E. L., Price, T. S., Dale, P. S., Pillas, D., Yliherva, A., Rodriguez, A., Golding, J., Jaddoe, V. W., Jarvelin, M.-R., Plomin, R., Pennell, C. E., Tiemeier, H., & Davey Smith, G. (2014). Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 5: 4831. doi:10.1038/ncomms5831.
  • St Pourcain, B., Skuse, D. H., Mandy, W. P., Wang, K., Hakonarson, H., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., & Smith, G. D. (2014). Variability in the common genetic architecture of social-communication spectrum phenotypes during childhood and adolescence. Molecular Autism, 5: 18. doi:10.1186/2040-2392-5-18.

    Abstract

    Background: Social-communication abilities are heritable traits, and their impairments overlap with the autism continuum. To characterise the genetic architecture of social-communication difficulties developmentally and identify genetic links with the autistic dimension, we conducted a genome-wide screen of social-communication problems at multiple time-points during childhood and adolescence. Methods: Social-communication difficulties were ascertained at ages 8, 11, 14 and 17 years in a UK population-based birth cohort (Avon Longitudinal Study of Parents and Children; N ≤ 5,628) using mother-reported Social Communication Disorder Checklist scores. Genome-wide Complex Trait Analysis (GCTA) was conducted for all phenotypes. The time-points with the highest GCTA heritability were subsequently analysed for single SNP association genome-wide. Type I error in the presence of measurement relatedness and the likelihood of observing SNP signals near known autism susceptibility loci (co-location) were assessed via large-scale, genome-wide permutations. Association signals (P ≤ 10−5) were also followed up in Autism Genetic Resource Exchange pedigrees (N = 793) and the Autism Case Control cohort (Ncases/Ncontrols = 1,204/6,491). Results: GCTA heritability was strongest in childhood (h2(8 years) = 0.24) and especially in later adolescence (h2(17 years) = 0.45), with a marked drop during early to middle adolescence (h2(11 years) = 0.16 and h2(14 years) = 0.08). Genome-wide screens at ages 8 and 17 years identified for the latter time-point evidence for association at 3p22.2 near SCN11A (rs4453791, P = 9.3 × 10−9; genome-wide empirical P = 0.011) and suggestive evidence at 20p12.3 at PLCB1 (rs3761168, P = 7.9 × 10−8; genome-wide empirical P = 0.085). None of these signals contributed to risk for autism. However, the co-location of population-based signals and autism susceptibility loci harbouring rare mutations, such as PLCB1, is unlikely to be due to chance (genome-wide empirical Pco-location = 0.007). Conclusions: Our findings suggest that measurable common genetic effects for social-communication difficulties vary developmentally and that these changes may affect detectable overlaps with the autism spectrum.

    Additional information

    13229_2013_113_MOESM1_ESM.docx
  • Pouw, W., Van Gog, T., & Paas, F. (2014). An embedded and embodied cognition review of instructional manipulatives. Educational Psychology Review, 26, 51-72. doi:10.1007/s10648-014-9255-5.

    Abstract

    Recent literature on learning with instructional manipulatives seems to call for a moderate view on the effects of perceptual and interactive richness of instructional manipulatives on learning. This “moderate view” holds that manipulatives’ perceptual and interactive richness may compromise learning in two ways: (1) by imposing a very high cognitive load on the learner, and (2) by hindering drawing of symbolic inferences that are supposed to play a key role in transfer (i.e., application of knowledge to new situations in the absence of instructional manipulatives). This paper presents a contrasting view. Drawing on recent insights from Embedded Embodied perspectives on cognition, it is argued that (1) perceptual and interactive richness may provide opportunities for alleviating cognitive load (Embedded Cognition), and (2) transfer of learning is not reliant on decontextualized knowledge but may draw on previous sensorimotor experiences of the kind afforded by perceptual and interactive richness of manipulatives (Embodied Cognition). By negotiating the Embedded Embodied Cognition view with the moderate view, implications for research are derived.
  • Pouw, W., De Nooijer, J. A., Van Gog, T., Zwaan, R. A., & Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Frontiers in Psychology, 5: 359. doi:10.3389/fpsyg.2014.00359.

    Abstract

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask the question: how can gestures support the internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures are lacking in explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is a considerable amount of overlap between the way the human cognitive system has been found to use its environment and how gestures are used during cognitive processes. Lastly, we provide several suggestions of how to investigate the embedded/extended perspective of the cognitive function of gestures.
  • Praamstra, P., Meyer, A. S., Cools, A. R., Horstink, M. W. I. M., & Stegeman, D. F. (1996). Movement preparation in Parkinson's disease: Time course and distribution of movement-related potentials in a movement precueing task. Brain, 119, 1689-1704. doi:10.1093/brain/119.5.1689.

    Abstract

    Investigations of the effects of advance information on movement preparation in Parkinson's disease using reaction time (RT) measures have yielded contradictory results. In order to obtain direct information regarding the time course of movement preparation, we combined RT measurements in a movement precueing task with multi-channel recordings of movement-related potentials in the present study. Movements of the index and middle fingers of the left and right hand were either precued or not by advance information regarding the side (left or right hand) of the required response. Reaction times were slower for patients than for control subjects. Both groups benefited equally from informative precues, indicating that patients utilized the advance information as effectively as control subjects. Lateralization of the movement-preceding cerebral activity [i.e. the lateralized readiness potential (LRP)] confirmed that patients used the available partial information to prepare their responses and started this process no later than controls. In conjunction with EMG onset times, the LRP onset measures allowed for a fractionation of the RTs, which provided clues to the stages where the slowness of Parkinson's disease patients might arise. No definite abnormalities of temporal parameters were found, but differences in the distribution of the lateralized movement-preceding activity between patients and controls suggested differences in the cortical organization of movement preparation. Differences in amplitude of the contingent negative variation (CNV) and differences in the way in which the CNV was modulated by the information given by the precue pointed in the same direction. A difference in amplitude of the P300 between patients and controls suggested that preprogramming a response required more effort from patients than from control subjects.
  • Presciuttini, S., Gialluisi, A., Barbuti, S., Curcio, M., Scatena, F., Carli, G., & Santarcangelo, E. L. (2014). Hypnotizability and Catechol-O-Methyltransferase (COMT) polymorphysms in Italians. Frontiers in Human Neuroscience, 7: 929. doi:10.3389/fnhum.2013.00929.

    Abstract

    Higher brain dopamine content depending on lower activity of Catechol-O-Methyltransferase (COMT) in subjects with high hypnotizability scores (highs) has been considered responsible for their attentional characteristics. However, the results of the previous genetic studies on association between hypnotizability and the COMT single nucleotide polymorphism (SNP) rs4680 (Val158Met) were inconsistent. Here, we used a selective genotyping approach to re-evaluate the association between hypnotizability and COMT in the context of a two-SNP haplotype analysis, considering not only the Val158Met polymorphism, but also the closely located rs4818 SNP. An Italian sample of 53 highs, 49 low hypnotizable subjects (lows), and 57 controls, were genotyped for a segment of 805 bp of the COMT gene, including Val158Met and the closely located rs4818 SNP. Our selective genotyping approach had 97.1% power to detect the previously reported strongest association at the significance level of 5%. We found no evidence of association at the SNP, haplotype, and diplotype levels. Thus, our results challenge the dopamine-based theory of hypnosis and indirectly support recent neuropsychological and neurophysiological findings reporting the lack of any association between hypnotizability and focused attention abilities.
  • Radeau, M., & Van Berkum, J. J. A. (1996). Gender decision. Language and Cognitive Processes, 11(6), 605-610. doi:10.1080/016909696387006.

    Abstract

    In languages in which nouns have a grammatical gender, word recognition can be estimated by gender decision response times. Although gender decision has yet to be used extensively, it has proved sensitive to several factors that have been shown to affect lexical access. The task is not restricted to spoken language but can be used with linguistic information from other sensory modalities.
  • Rahmany, R., Marefat, H., & Kidd, E. (2014). Resumptive elements aid comprehension of object relative clauses: Evidence from Persian. Journal of Child Language, 41(4), 937-948. doi:10.1017/S0305000913000147.
  • Ravignani, A., Bowling, D. L., & Fitch, W. T. (2014). Chorusing, synchrony, and the evolutionary functions of rhythm. Frontiers in Psychology, 5: 1118. doi:10.3389/fpsyg.2014.01118.

    Abstract

    A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language.
  • Ravignani, A. (2014). Chronometry for the chorusing herd: Hamilton's legacy on context-dependent acoustic signalling—a comment on Herbers (2013). Biology Letters, 10(1): 20131018. doi:10.1098/rsbl.2013.1018.
  • Ravignani, A., Martins, M., & Fitch, W. T. (2014). Vocal learning, prosody, and basal ganglia: Don't underestimate their complexity. Behavioral and Brain Sciences, 37(6), 570-571. doi:10.1017/S0140525X13004184.

    Abstract

    In response to: Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective

    Ackermann et al.'s arguments in the target article need sharpening and rethinking at both mechanistic and evolutionary levels. First, the authors' evolutionary arguments are inconsistent with recent evidence concerning nonhuman animal rhythmic abilities. Second, prosodic intonation conveys much more complex linguistic information than mere emotional expression. Finally, human adults' basal ganglia have a considerably wider role in speech modulation than Ackermann et al. surmise.
  • Redmann, A., FitzPatrick, I., Hellwig, F. M., & Indefrey, P. (2014). The use of conceptual components in language production: an ERP study. Frontiers in Psychology, 5: 363. doi:10.3389/fpsyg.2014.00363.

    Abstract

    According to frame-theory, concepts can be represented as structured frames that contain conceptual attributes (e.g., "color") and their values (e.g., "red"). A particular color value can be seen as a core conceptual component for (high color-diagnostic; HCD) objects (e.g., bananas) which are strongly associated with a typical color, but less so for (low color-diagnostic; LCD) objects (e.g., bicycles) that exist in many different colors. To investigate whether the availability of a core conceptual component (color) affects lexical access in language production, we conducted two experiments on the naming of visually presented HCD and LCD objects. Experiment 1 showed that, when naming latencies were matched for colored HCD and LCD objects, achromatic HCD objects were named more slowly than achromatic LCD objects. In Experiment 2 we recorded ERPs while participants performed a picture-naming task, in which achromatic target pictures were either preceded by an appropriately colored box (primed condition) or a black and white checkerboard (unprimed condition). We focused on the P2 component, which has been shown to reflect difficulty of lexical access in language production. Results showed that HCD resulted in slower object-naming and a more pronounced P2. Priming also yielded a more positive P2 but did not result in an RT difference. ERP waveforms on the P1, P2 and N300 components showed a priming by color-diagnosticity interaction, the effect of color priming being stronger for HCD objects than for LCD objects. The effect of color-diagnosticity on the P2 component suggests that the slower naming of achromatic HCD objects is (at least in part) due to more difficult lexical retrieval. Hence, the color attribute seems to affect lexical retrieval in HCD words. The interaction between priming and color-diagnosticity indicates that priming with a feature hinders lexical access, especially if the feature is a core feature of the target object.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations of the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we comment on their findings and conclusions, beginning with a brief review of literacy and aphasia research and the complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which focus on methodological issues and the importance of taking normative values into consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed slower than are words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that graphemic complexity and multiple print-to-sound associations effects are independent and should be accounted for in different ways by models of written word processing.
  • Roberts, S. G., Dediu, D., & Moisik, S. R. (2014). How to speak Neanderthal. New Scientist, 222(2969), 40-41. doi:10.1016/S0262-4079(14)60970-2.
  • Rodenas-Cuadrado, P., Ho, J., & Vernes, S. C. (2014). Shining a light on CNTNAP2: Complex functions to complex disorders. European Journal of Human Genetics, 22(2), 171-178. doi:10.1038/ejhg.2013.100.

    Abstract

    The genetic basis of complex neurological disorders involving language is poorly understood, partly due to the multiple additive genetic risk factors that are thought to be responsible. Furthermore, these conditions are often syndromic in that they have a range of endophenotypes that may be associated with the disorder and that may be present in different combinations in patients. However, the emergence of individual genes implicated across multiple disorders has suggested that they might share similar underlying genetic mechanisms. The CNTNAP2 gene is an excellent example of this, as it has recently been implicated in a broad range of phenotypes including autism spectrum disorder (ASD), schizophrenia, intellectual disability, dyslexia and language impairment. This review considers the evidence implicating CNTNAP2 in these conditions, the genetic risk factors and mutations that have been identified in patient and population studies and how these relate to patient phenotypes. The role of CNTNAP2 is examined in the context of larger neurogenetic networks during development and disorder, given what is known regarding the regulation and function of this gene. Understanding the role of CNTNAP2 in diverse neurological disorders will further our understanding of how combinations of individual genetic risk factors can contribute to complex conditions.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch–English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A. (2005). The visual-auditory color-word Stroop asymmetry and its time course. Memory & Cognition, 33(8), 1325-1336.

    Abstract

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1996). Interaction between semantic and orthographic factors in conceptually driven naming: Comment on Starreveld and La Heij (1995). Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 246-251.

    Abstract

    P. A. Starreveld and W. La Heij (1995) tested the seriality view of lexical access in speech production, according to which lexical selection and the encoding of a word's form proceed in serial order without feedback. In 2 experiments, they looked at the combined effect of semantic and orthographic relatedness of written distracter words in tasks that required conceptually driven naming. They found an interaction between semantic relatedness and orthographic relatedness and argued that the observed interaction refutes the seriality view of lexical access. In this comment, the authors argue that Starreveld and La Heij's rejection of serial access was based on an oversimplified conception of the seriality view and that interaction, rather than additivity, is predicted by existing conceptions of serial access.
  • Rojas-Berscia, L. M. (2014). Towards an ontological theory of language: Radical minimalism, memetic linguistics and linguistic engineering, prolegomena. Ianua: Revista Philologica Romanica, 14(2), 69-81.

    Abstract

    In contrast to what has happened in other sciences, the question of what the object of study of linguistics as an autonomous discipline is has not yet been resolved. Ranging from external explanations of language as a system (Saussure 1916), to the existence of an innate mental language capacity or UG (Chomsky 1965, 1981, 1995), to the cognitive complexity of the mental language capacity and the acquisition of languages in use (Langacker 1987, 1991, 2008; Croft & Cruse 2004; Evans & Levinson 2009), most, if not all, theoretical approaches have provided explanations that have somehow isolated our discipline from developments in other major sciences, such as physics and evolutionary biology. In the present article I present some of the basic issues in the current debate in the discipline, in order to identify some problems with modern assumptions about language. Furthermore, a new proposal on how to approach linguistic phenomena is given, concerning what I call «the main three» basic problems our discipline has to face. Finally, some preliminary ideas on a new paradigm of linguistics that tries to answer these three basic problems are presented, based mainly on the recently-born formal theory called Radical Minimalism (Krivochen 2011a, 2011b) and on what I dub Memetic Linguistics and Linguistic Engineering.
  • Roorda, D., Kalkman, G., Naaijer, M., & Van Cranenburgh, A. (2014). LAF-Fabric: A data analysis tool for linguistic annotation framework with an application to the Hebrew Bible. Computational linguistics in the Netherlands, 4, 105-120.

    Abstract

    The Linguistic Annotation Framework (LAF) provides a general, extensible stand-off markup system for corpora. This paper discusses LAF-Fabric, a new tool to analyse LAF resources in general with an extension to process the Hebrew Bible in particular. We first walk through the history of the Hebrew Bible as text database in decennium-wide steps. Then we describe how LAF-Fabric may serve as an analysis tool for this corpus. Finally, we describe three analytic projects/workflows that benefit from the new LAF representation: 1) the study of linguistic variation: extract co-occurrence data of common nouns between the books of the Bible (Martijn Naaijer); 2) the study of the grammar of Hebrew poetry in the Psalms: extract clause typology (Gino Kalkman); 3) construction of a parser of classical Hebrew by Data Oriented Parsing: generate tree structures from the database (Andreas van Cranenburgh).
  • Rösler, D., & Skiba, R. (1988). Möglichkeiten für den Einsatz einer Lehrmaterial-Datenbank in der Lehrerfortbildung [Possibilities for using a teaching-materials database in in-service teacher training]. Deutsch lernen, 14(1), 24-31.
  • Roswandowitz, C., Mathias, S. R., Hintz, F., Kreitewolf, J., Schelinski, S., & von Kriegstein, K. (2014). Two cases of selective developmental voice-recognition impairments. Current Biology, 24(19), 2348-2353. doi:10.1016/j.cub.2014.08.048.

    Abstract

    Recognizing other individuals is an essential skill in humans and in other species [1, 2 and 3]. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces [4]. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices [5], is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.
  • Rowbotham, S., Wardy, A. J., Lloyd, D. M., Wearden, A., & Holler, J. (2014). Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures. PLoS One, 9(10): e110779. doi:10.1371/journal.pone.0110779.

    Abstract

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
  • Rowbotham, S., Holler, J., Lloyd, D., & Wearden, A. (2014). Handling pain: The semantic interplay of speech and co-speech hand gestures in the description of pain sensations. Speech Communication, 57, 244-256. doi:10.1016/j.specom.2013.04.002.

    Abstract

    Pain is a private and subjective experience about which effective communication is vital, particularly in medical settings. Speakers often represent information about pain sensation in both speech and co-speech hand gestures simultaneously, but it is not known whether gestures merely replicate spoken information or complement it in some way. We examined the representational contribution of gestures in a range of consecutive analyses. Firstly, we found that 78% of speech units containing pain sensation were accompanied by gestures, with 53% of these gestures representing pain sensation. Secondly, in 43% of these instances, gestures represented pain sensation information that was not contained in speech, contributing additional, complementary information to the pain sensation message. Finally, when applying a specificity analysis, we found that in contrast with research in different domains of talk, gestures did not make the pain sensation information in speech more specific. Rather, they complemented the verbal pain message by representing different aspects of pain sensation, contributing to a fuller representation of pain sensation than speech alone. These findings highlight the importance of gestures in communicating about pain sensation and suggest that this modality provides additional information to supplement and clarify the often ambiguous verbal pain message.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English speaking children acquire wh-questions is determined by two interlocking linguistic factors; the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language, 29, 2002, 161–175) presents a critique of Rowland & Pine (Journal of Child Language, 27, 2000, 157–181) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2–3-year-old English speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Sadakata, M., & McQueen, J. M. (2014). Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training. Frontiers in Psychology, 5: 1318. doi:10.3389/fpsyg.2014.01318.

    Abstract

    Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals’ aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a five-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by 3 speakers) and high (similar to medium but with 5 speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals’ perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals’ aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely-matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals’ aptitude in speech perception but also on the nature of the categories being acquired.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • Sanchis-Trilles, G., Alabau, V., Buck, C., Carl, M., Casacuberta, F., García Martínez, M., Germann, U., González Rubio, J., Hill, R. L., Koehn, P., Leiva, L. A., Mesa-Lao, B., Ortiz Martínez, D., Saint-Amand, H., Tsoukala, C., & Vidal, E. (2014). Interactive translation prediction versus conventional post-editing in practice: a study with the CasMaCat workbench. Machine Translation, 28(3-4), 217-235. doi:10.1007/s10590-014-9157-9.

    Abstract

    We conducted a field trial in computer-assisted professional translation to compare interactive translation prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translation hypothesis that is then edited by a professional (hence “post-editing”), ITP constantly updates the translation hypothesis in real time in response to user edits. Our study involved nine professional translators and four reviewers working with the web-based CasMaCat workbench. Various new interactive features aiming to assist the post-editor/translator were also tested in this trial. Our results show that even with little training, ITP can be as productive as conventional PE in terms of the total time required to produce the final translation. Moreover, translation editors working with ITP require fewer key strokes to arrive at the final version of their translation.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.