Publications

  • Acheson, D. J., Postle, B. R., & MacDonald, M. C. (2010). The interaction of concreteness and phonological similarity in verbal working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 17-36. doi:10.1037/a0017679.

    Abstract

    Although phonological representations have been a primary focus of verbal working memory research, lexical-semantic manipulations also influence performance. In the present study, the authors investigated whether a classic phenomenon in verbal working memory, the phonological similarity effect (PSE), is modulated by a lexical-semantic variable, word concreteness. Phonological overlap and concreteness were factorially manipulated in each of four experiments across which presentation modality (Experiments 1 and 2: visual presentation; Experiments 3 and 4: auditory presentation) and concurrent articulation (present in Experiments 2 and 4) were manipulated. In addition to main effects of each variable, results show a Phonological Overlap x Concreteness interaction whereby the magnitude of the PSE is greater for concrete word lists relative to abstract word lists. This effect is driven by superior item memory for nonoverlapping, concrete lists and is robust to the modality of presentation and concurrent articulation. These results demonstrate that in verbal working memory tasks, there are multiple routes to the phonological form of a word and that maintenance and retrieval occur over more than just a phonological level.
  • Acheson, D. J., & Hagoort, P. (2014). Twisting tongues to test for conflict monitoring in speech production. Frontiers in Human Neuroscience, 8: 206. doi:10.3389/fnhum.2014.00206.

    Abstract

    A number of recent studies have hypothesized that monitoring in speech production may occur via domain-general mechanisms responsible for the detection of response conflict. Outside of language, two ERP components have consistently been elicited in conflict-inducing tasks (e.g., the flanker task): the stimulus-locked N2 on correct trials, and the response-locked error-related negativity (ERN). The present investigation used these electrophysiological markers to test whether a common response conflict monitor is responsible for monitoring in speech and non-speech tasks. Electroencephalography (EEG) was recorded while participants performed a tongue twister (TT) task and a manual version of the flanker task. In the TT task, people rapidly read sequences of four nonwords arranged in TT and non-TT patterns three times. In the flanker task, people responded with a left/right button press to a center-facing arrow, and conflict was manipulated by the congruency of the flanking arrows. Behavioral results showed typical effects of both tasks, with increased error rates and slower speech onset times for TT relative to non-TT trials and for incongruent relative to congruent flanker trials. In the flanker task, stimulus-locked EEG analyses replicated previous results, with a larger N2 for incongruent relative to congruent trials, and a response-locked ERN. In the TT task, stimulus-locked analyses revealed broad, frontally-distributed differences beginning around 50 ms and lasting until just before speech initiation, with TT trials more negative than non-TT trials; response-locked analyses revealed an ERN. Correlation across these measures showed some correlations within a task, but little evidence of systematic cross-task correlation. Although the present results do not speak against conflict signals from the production system serving as cues to self-monitoring, they are not consistent with signatures of response conflict being mediated by a single, domain-general conflict monitor.
  • Adank, P., & Janse, E. (2010). Comprehension of a novel accent by young and older listeners. Psychology and Aging, 25(3), 736-740. doi:10.1037/a0020054.

    Abstract

    The authors investigated perceptual learning of a novel accent in young and older listeners by measuring speech reception thresholds (SRTs) using speech materials spoken in a novel (unfamiliar) accent. Younger and older listeners adapted to this accent, but older listeners showed poorer comprehension of the accent. Furthermore, perceptual learning differed across groups: The older listeners stopped learning after the first block, whereas younger listeners showed further improvement with longer exposure. Among the older participants, hearing acuity predicted the SRT as well as the effect of the novel accent on SRT. Finally, a measure of executive function predicted the impact of accent on SRT.
  • Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language comprehension. Psychological Science, 21, 1903-1909. doi:10.1177/0956797610389192.

    Abstract

    Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker’s accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.
  • Agus, T., Carrion Castillo, A., Pressnitzer, D., & Ramus, F. (2014). Perceptual learning of acoustic noise by individuals with dyslexia. Journal of Speech, Language, and Hearing Research, 57, 1069-1077. doi:10.1044/1092-4388(2013/13-0020).

    Abstract

    Purpose: A phonological deficit is thought to affect most individuals with developmental dyslexia. The present study addresses whether the phonological deficit is caused by difficulties with perceptual learning of fine acoustic details. Method: A demanding test of nonverbal auditory memory, “noise learning,” was administered to both adults with dyslexia and control adult participants. On each trial, listeners had to decide whether a stimulus was a 1-s noise token or 2 abutting presentations of the same 0.5-s noise token (repeated noise). Without the listener’s knowledge, the exact same noise tokens were presented over many trials. An improved ability to perform the task for such “reference” noises reflects learning of their acoustic details. Results: Listeners with dyslexia did not differ from controls in any aspect of the task, qualitatively or quantitatively. They required the same amount of training to achieve discrimination of repeated from nonrepeated noises, and they learned the reference noises as often and as rapidly as the control group. However, they did show all the hallmarks of dyslexia, including a well-characterized phonological deficit. Conclusion: The data did not support the hypothesis that deficits in basic auditory processing or nonverbal learning and memory are the cause of the phonological deficit in dyslexia.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2014). Towards a computational model of actor-based language comprehension. Neuroinformatics, 12(1), 143-179. doi:10.1007/s12021-013-9198-x.

    Abstract

    Neurophysiological data from a range of typologically diverse languages provide evidence for a cross-linguistically valid, actor-based strategy of understanding sentence-level meaning. This strategy seeks to identify the participant primarily responsible for the state of affairs (the actor) as quickly and unambiguously as possible, thus resulting in competition for the actor role when there are multiple candidates. Due to its applicability across languages with vastly different characteristics, we have proposed that the actor strategy may derive from more basic cognitive or neurobiological organizational principles, though it is also shaped by distributional properties of the linguistic input (e.g. the morphosyntactic coding strategies for actors in a given language). Here, we describe an initial computational model of the actor strategy and how it interacts with language-specific properties. Specifically, we contrast two distance metrics derived from the output of the computational model (one weighted and one unweighted) as potential measures of the degree of competition for actorhood by testing how well they predict modulations of electrophysiological activity engendered by language processing. To this end, we present an EEG study on word order processing in German and use linear mixed-effects models to assess the effect of the various distance metrics. Our results show that a weighted metric, which takes into account the weighting of an actor-identifying feature in the language under consideration, outperforms an unweighted distance measure. We conclude that actor competition effects cannot be reduced to feature overlap between multiple sentence participants and thereby to the notion of similarity-based interference, which is prominent in current memory-based models of language processing. Finally, we argue that, in addition to illuminating the underlying neurocognitive mechanisms of actor competition, the present model can form the basis for a more comprehensive, neurobiologically plausible computational model of constructing sentence-level meaning.
  • Alferink, I., & Gullberg, M. (2014). French-Dutch bilinguals do not maintain obligatory semantic distinctions: Evidence from placement verbs. Bilingualism: Language and Cognition, 17, 22-37. doi:10.1017/S136672891300028X.

    Abstract

    It is often said that bilinguals are not the sum of two monolinguals but that bilingual systems represent a third pattern. This study explores the exact nature of this pattern. We ask whether there is evidence of a merged system when one language makes an obligatory distinction that the other one does not, namely in the case of placement verbs in French and Dutch, and whether such a merged system is realised as a more general or a more specific system. The results show that in elicited descriptions Belgian French-Dutch bilinguals drop one of the categories in one of the languages, resulting in a more general semantic system in comparison with the non-contact variety. They do not uphold the obligatory distinction in the verb nor elsewhere despite its communicative relevance. This raises important questions regarding how widespread these differences are and what drives these patterns.
  • Alhama, R. G., Scha, R., & Zuidema, W. (2014). Rule learning in humans and animals. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 371-372). Singapore: World Scientific.
  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to ''package'' spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Altvater-Mackensen, N. (2010). Do manners matter? Asymmetries in the acquisition of manner of articulation features. PhD Thesis, Radboud University of Nijmegen, Nijmegen.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Freudenthal, D., & Chang, F. (2014). Avoiding dative overgeneralisation errors: semantics, statistics or both? Language, Cognition and Neuroscience, 29(2), 218-243. doi:10.1080/01690965.2012.738300.

    Abstract

    How do children eventually come to avoid the production of overgeneralisation errors, in particular, those involving the dative (e.g., *I said her “no”)? The present study addressed this question by obtaining from adults and children (5–6, 9–10 years) judgements of well-formed and over-general datives with 301 different verbs (44 for children). A significant effect of pre-emption—whereby the use of a verb in the prepositional-object (PO)-dative construction constitutes evidence that double-object (DO)-dative uses are not permitted—was observed for every age group. A significant effect of entrenchment—whereby the use of a verb in any construction constitutes evidence that unattested dative uses are not permitted—was also observed for every age group, with both predictors also accounting for developmental change between ages 5–6 and 9–10 years. Adults demonstrated knowledge of a morphophonological constraint that prohibits Latinate verbs from appearing in the DO-dative construction (e.g., *I suggested her the trip). Verbs’ semantic properties (supplied by independent adult raters) explained additional variance for all groups and developmentally, with the relative influence of narrow- vs broad-range semantic properties increasing with age. We conclude by outlining an account of the formation and restriction of argument-structure generalisations designed to accommodate these findings.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Tomasello, M. (2006). Comparing different accounts of inversion errors in children's non-subject wh-questions: ‘What experimental data can tell us?’. Journal of Child Language, 33(3), 519-557. doi:10.1017/S0305000906007513.

    Abstract

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words (what, who, how and why), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6–4;6. Rates of non-inversion error (Who she is hitting?) were found not to differ by wh-word, auxiliary or number alone, but by lexical auxiliary subtype and by wh-word+lexical auxiliary combination. This finding counts against simple rule-based accounts of question acquisition that include no role for the lexical subtype of the auxiliary, and suggests that children may initially acquire wh-word+lexical auxiliary combinations from the input. For DO questions, auxiliary-doubling errors (What does she does like?) were also observed, although previous research has found that such errors are virtually non-existent for positive questions. Possible reasons for this discrepancy are discussed.
  • Ameka, F. K., Dench, A., & Evans, N. (Eds.). (2006). Catching language: The standing challenge of grammar writing. Berlin: Mouton de Gruyter.

    Abstract

    Descriptive grammars are our main vehicle for documenting and analysing the linguistic structure of the world's 6,000 languages. They bring together, in one place, a coherent treatment of how the whole language works, and therefore form the primary source of information on a given language, consulted by a wide range of users: areal specialists, typologists, theoreticians of any part of language (syntax, morphology, phonology, historical linguistics etc.), and members of the speech communities concerned. The writing of a descriptive grammar is a major intellectual challenge, that calls on the grammarian to balance a respect for the language's distinctive genius with an awareness of how other languages work, to combine rigour with readability, to depict structural regularities while respecting a corpus of real material, and to represent something of the native speaker's competence while recognising the variation inherent in any speech community. Despite a recent surge of awareness of the need to document little-known languages, there is no book that focusses on the manifold issues that face the author of a descriptive grammar. This volume brings together contributors who approach the problem from a range of angles. Most have written descriptive grammars themselves, but others represent different types of reader. Among the topics they address are: overall issues of grammar design, the complementary roles of outsider and native speaker grammarians, the balance between grammar and lexicon, cross-linguistic comparability, the role of explanation in grammatical description, the interplay of theory and a range of fieldwork methods in language description, the challenges of describing languages in their cultural and historical context, and the tensions between linguistic particularity, established practice of particular schools of linguistic description and the need for a universally commensurable analytic framework. This book will renew the field of grammaticography, addressing a multiple readership of descriptive linguists, typologists, and formal linguists, by bringing together a range of distinguished practitioners from around the world to address these questions.
  • Ameka, F. K. (1999). [Review of M. E. Kropp Dakubu: Korle meets the sea: a sociolinguistic history of Accra]. Bulletin of the School of Oriental and African Studies, 62, 198-199. doi:10.1017/S0041977X0001836X.
  • Ameka, F. K. (2006). Ewe serial verb constructions in their grammatical context. In A. Y. Aikhenvald, & R. M. W. Dixon (Eds.), Serial verb constructions: A cross-linguistic typology (pp. 124-143). Oxford: Oxford University Press.
  • Ameka, F. K. (2006). Elements of the grammar of space in Ewe. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 359-399). Cambridge: Cambridge University Press.
  • Ameka, F. K., & Wilkins, D. P. (2006). Interjections. In J.-O. Ostman, & J. Verschueren (Eds.), Handbook of pragmatics (pp. 1-22). Amsterdam: Benjamins.
  • Ameka, F. K. (2010). Information packaging constructions in Kwa: Micro-variation and typology. In E. O. Aboh, & J. Essegbey (Eds.), Topics in Kwa syntax (pp. 141-176). Dordrecht: Springer.

    Abstract

    Kwa languages such as Akye, Akan, Ewe, Ga, Likpe, Yoruba etc. are not prototypically “topic-prominent” like Chinese nor “focus-prominent” like Somali, yet they have dedicated structural positions in the clause, as well as morphological markers for signalling the information status of the component parts of information units. They could thus be seen as “discourse configurational languages” (Kiss 1995). In this chapter, I first argue for distinct positions in the left periphery of the clause in these languages for scene-setting topics, contrastive topics and focus. I then describe the morpho-syntactic properties of various information packaging constructions and the variations that we find across the languages in this domain.
  • Ameka, F. K. (2006). Grammars in contact in the Volta Basin (West Africa): On contact induced grammatical change in Likpe. In A. Y. Aikhenvald, & R. M. W. Dixon (Eds.), Grammars in contact: A crosslinguistic typology (pp. 114-142). Oxford: Oxford University Press.
  • Ameka, F. K. (2006). Interjections. In K. Brown (Ed.), Encyclopedia of language & linguistics (2nd ed., pp. 743-746). Oxford: Elsevier.
  • Ameka, F. K. (1999). Interjections. In K. Brown, & J. Miller (Eds.), Concise encyclopedia of grammatical categories (pp. 213-216). Oxford: Elsevier.
  • Ameka, F. K. (1999). Partir c'est mourir un peu: Universal and culture specific features of leave taking. RASK International Journal of Language and Communication, 9/10, 257-283.
  • Ameka, F. K. (2006). Real descriptions: Reflections on native speaker and non-native speaker descriptions of a language. In F. K. Ameka, A. Dench, & N. Evans (Eds.), Catching language: The standing challenge of grammar writing (pp. 69-112). Berlin: Mouton de Gruyter.
  • Ameka, F. K. (1999). Spatial information packaging in Ewe and Likpe: A comparative perspective. Frankfurter Afrikanistische Blätter, 11, 7-34.
  • Ameka, F. K., De Witte, C., & Wilkins, D. (1999). Picture series for positional verbs: Eliciting the verbal component in locative descriptions. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 48-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573831.

    Abstract

    How do different languages encode location and position meanings? In conjunction with the BowPed picture series and Caused Positions task, this elicitation tool is designed to help researchers (i) identify a language’s resources for encoding topological relations; (ii) delimit the pragmatics of use of such resources; and (iii) determine the semantics of select spatial terms. The task focuses on the exploration of the predicative component of topological expressions (e.g., ‘the cassavas are lying in the basket’), especially the contrastive elicitation of positional verbs. The materials consist of a set of photographs of objects (e.g., bottles, cloths, sticks) in specific configurations with various ground items (e.g., basket, table, tree).

    Additional information

    1999_Positional_verbs_stimuli.zip
  • Ameka, F. K. (1999). The typology and semantics of complex nominal duplication in Ewe. Anthropological Linguistics, 41, 75-106.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Araújo, S., Faísca, L., Bramão, I., Petersson, K. M., & Reis, A. (2014). Lexical and phonological processes in dyslexic readers: Evidences from a visual lexical decision task. Dyslexia, 20, 38-53. doi:10.1002/dys.1461.

    Abstract

    The aim of the present study was to investigate whether reading failure in the context of an orthography of intermediate consistency is linked to inefficient use of the lexical orthographic reading procedure. The performance of typically developing and dyslexic Portuguese-speaking children was examined in a lexical decision task, where the stimulus lexicality, word frequency and length were manipulated. Both lexicality and length effects were larger in the dyslexic group than in controls, although the interaction between group and frequency disappeared when the data were transformed to control for general performance factors. Children with dyslexia were influenced in lexical decision making by the stimulus length of words and pseudowords, whereas age-matched controls were influenced by the length of pseudowords only. These findings suggest that non-impaired readers rely mainly on lexical orthographic information, but children with dyslexia preferentially use the phonological decoding procedure—albeit poorly—most likely because they struggle to process orthographic inputs as a whole, as controls do. Accordingly, dyslexic children showed significantly poorer performance than controls for all types of stimuli, including words that could be considered over-learned, such as high-frequency words. This suggests that their orthographic lexical entries are less well established in the orthographic lexicon.
  • Araújo, S., Pacheco, A., Faísca, L., Petersson, K. M., & Reis, A. (2010). Visual rapid naming and phonological abilities: Different subtypes in dyslexic children. International Journal of Psychology, 45, 443-452. doi:10.1080/00207594.2010.499949.

    Abstract

    One implication of the double-deficit hypothesis for dyslexia is that there should be subtypes of dyslexic readers that exhibit rapid naming deficits with or without concomitant phonological processing problems. In the current study, we investigated the validity of this hypothesis for Portuguese orthography, which is more consistent than English orthography, by exploring different cognitive profiles in a sample of dyslexic children. In particular, we were interested in identifying readers characterized by a pure rapid automatized naming deficit. We also examined whether rapid naming and phonological awareness independently account for individual differences in reading performance. We characterized the performance of dyslexic readers and a control group of normal readers matched for age on reading, visual rapid naming and phonological processing tasks. Our results suggest that there is a subgroup of dyslexic readers with intact phonological processing capacity (in terms of both accuracy and speed measures) but poor rapid naming skills. We also provide evidence for an independent association between rapid naming and reading competence in the dyslexic sample, when the effect of phonological skills was controlled. Altogether, the results are more consistent with the view that rapid naming problems in dyslexia represent a second core deficit rather than an exclusive phonological explanation for the rapid naming deficits. Furthermore, additional non-phonological processes, which subserve rapid naming performance, contribute independently to reading development.
  • Arnhold, A., Vainio, M., Suni, A., & Järvikivi, J. (2010). Intonation of Finnish verbs. Speech Prosody 2010, 100054, 1-4. Retrieved from http://speechprosody2010.illinois.edu/papers/100054.pdf.

    Abstract

    A production experiment investigated the tonal shape of Finnish finite verbs in transitive sentences without narrow focus. Traditional descriptions of Finnish stating that non-focused finite verbs do not receive accents were only partly supported. Verbs were found to have a consistently smaller pitch range than words in other word classes, but their pitch contours were neither flat nor explainable by pure interpolation.
  • Arnon, I., Casillas, M., Kurumada, C., & Estigarribia, B. (Eds.). (2014). Language in interaction: Studies in honor of Eve V. Clark. Amsterdam: Benjamins.

    Abstract

    Understanding how communicative goals impact and drive the learning process has been a long-standing issue in the field of language acquisition. Recent years have seen renewed interest in the social and pragmatic aspects of language learning: the way interaction shapes what and how children learn. In this volume, we bring together researchers working on interaction in different domains to present a cohesive overview of ongoing interactional research. The studies address the diversity of the environments children learn in; the role of para-linguistic information; the pragmatic forces driving language learning; and the way communicative pressures impact language use and change. Using observational, empirical and computational findings, this volume highlights the effect of interpersonal communication on what children hear and what they learn. This anthology is inspired by and dedicated to Prof. Eve V. Clark – a pioneer in all matters related to language acquisition – and a major force in establishing interaction and communication as crucial aspects of language learning.
  • Auer, E., Wittenburg, P., Sloetjes, H., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). Automatic annotation of media field recordings. In C. Sporleder, & K. Zervanou (Eds.), Proceedings of the ECAI 2010 Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH 2010) (pp. 31-34). Lisbon: University de Lisbon. Retrieved from http://ilk.uvt.nl/LaTeCH2010/.

    Abstract

    In this paper we describe a new attempt to develop automatic detectors for processing real-scene audio-video streams, which researchers world-wide can use to speed up their annotation and analysis work. Typically these recordings are made in field and experimental situations, mostly of poor quality and with only small corpora, which prevents the use of standard stochastic pattern recognition techniques. Audio/video processing components are taken out of the expert lab and integrated into easy-to-use interactive frameworks, so that researchers can easily run them with modified parameters and check the usefulness of the created annotations. Finally, a variety of detectors may be used, yielding a lattice of annotations. A flexible search engine allows finding combinations of patterns, opening completely new analysis and theorization possibilities for researchers who until now were required to do all annotations manually and had no help in pre-segmenting lengthy media recordings.
  • Auer, E., Russel, A., Sloetjes, H., Wittenburg, P., Schreer, O., Masnieri, S., Schneider, D., & Tschöpel, S. (2010). ELAN as flexible annotation framework for sound and image processing detectors. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 890-893). European Language Resources Association (ELRA).

    Abstract

    Annotation of digital recordings in humanities research still is, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen; Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin; Fraunhofer Heinrich-Hertz-Institute, Berlin), and aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
  • Baayen, R. H., Feldman, L. B., & Schreuder, R. (2006). Morphological influences on the recognition of monosyllabic monomorphemic words. Journal of Memory and Language, 55(2), 290-313. doi:10.1016/j.jml.2006.03.008.

    Abstract

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. Journal of Experimental Psychology: General, 133, 283–316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of monosyllabic, morphologically simple words. The present study supplements their work by making use of more flexible regression techniques that are better suited for dealing with collinearity and non-linearity, and by documenting the contributions of several variables that they did not take into account. In particular, we included measures of morphological connectivity, as well as a new frequency count, the frequency of a word in speech rather than in writing. The morphological measures emerged as strong predictors in visual lexical decision, but not in naming, providing evidence for the importance of morphological connectivity even for the recognition of morphologically simple words. Spoken frequency was predictive not only for naming but also for visual lexical decision. In addition, it co-determined subjective frequency estimates and norms for age of acquisition. Finally, we show that frequency predominantly reflects conceptual familiarity rather than familiarity with a word’s form.
  • Baayen, R. H. (2014). Productivity in language production. In D. Sandra, & M. Taft (Eds.), Morphological Structure, Lexical Representation and Lexical Access: A Special Issue of Language and Cognitive Processes (pp. 447-469). London: Routledge.

    Abstract

    Lexical statistics and a production experiment are used to gauge the extent to which the linguistic notion of morphological productivity is relevant for psycholinguistic theories of speech production in languages such as Dutch and English. Lexical statistics of productivity show that despite the relatively poor morphology of Dutch, new words are created often enough for the marginalisation of word formation in theories of speech production to be theoretically unattractive. This conclusion is supported by the results of a production experiment in which subjects freely created hundreds of productive, but only a handful of unproductive, neologisms. A tentative solution is proposed as to why the opposite pattern has been observed in the speech of jargonaphasics.
  • Baggio, G., Choma, T., Van Lambalgen, M., & Hagoort, P. (2010). Coercion and compositionality. Journal of Cognitive Neuroscience, 22, 2131-2140. doi:10.1162/jocn.2009.21303.

    Abstract

    Research in psycholinguistics and in the cognitive neuroscience of language has suggested that semantic and syntactic integration are associated with different neurophysiologic correlates, such as the N400 and the P600 in the ERPs. However, only a handful of studies have investigated the neural basis of the syntax–semantics interface, and even fewer experiments have dealt with the cases in which semantic composition can proceed independently of the syntax. Here we looked into one such case—complement coercion—using ERPs. We compared sentences such as, “The journalist wrote the article” with “The journalist began the article.” The second sentence seems to involve a silent semantic element, which is expressed in the first sentence by the head of the VP “wrote the article.” The second type of construction may therefore require the reader to infer or recover from memory a richer event sense of the VP “began the article,” such as began writing the article, and to integrate that into a semantic representation of the sentence. This operation is referred to as “complement coercion.” Consistently with earlier reading time, eye tracking, and MEG studies, we found traces of such additional computations in the ERPs: Coercion gives rise to a long-lasting negative shift, which differs at least in duration from a standard N400 effect. Issues regarding the nature of the computation involved are discussed in the light of a neurocognitive model of language processing and a formal semantic analysis of coercion.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Competition from unseen or unheard novel words: Lexical consolidation across modalities. Journal of Memory and Language, 73, 116-139. doi:10.1016/j.jml.2014.03.002.

    Abstract

    In four experiments we investigated the formation of novel word memories across modalities, using competition between novel words and their existing phonological/orthographic neighbours as a test of lexical integration. Auditorily acquired novel words entered into competition both in the spoken modality (Experiment 1) and in the written modality (Experiment 4) after a consolidation period of 24 h. Words acquired from print, on the other hand, showed competition effects after 24 h in a visual word recognition task (Experiment 3) but required additional training and a consolidation period of a week before entering into spoken-word competition (Experiment 2). These cross-modal effects support the hypothesis that lexicalised rather than episodic representations underlie post-consolidation competition effects. We suggest that sublexical phoneme–grapheme conversion during novel word encoding and/or offline consolidation enables the formation of modality-specific lexemes in the untrained modality, which subsequently undergo the same cortical integration process as explicitly perceived word forms in the trained modality. Although conversion takes place in both directions, speech input showed an advantage over print both in terms of lexicalisation and explicit memory performance. In conclusion, the brain is able to integrate and consolidate internally generated lexical information as well as external perceptual input.
  • Banissy, M., Sauter, D., Ward, J., Warren, J. E., Walsh, V., & Scott, S. K. (2010). Suppressing sensorimotor activity modulates the discrimination of auditory emotions but not speaker identity. Journal of Neuroscience, 30(41), 13552-13557. doi:10.1523/JNEUROSCI.0786-10.2010.

    Abstract

    Our ability to recognise the emotions of others is a crucial feature of human social cognition. Functional neuroimaging studies indicate that activity in sensorimotor cortices is evoked during the perception of emotion. In the visual domain, right somatosensory cortex activity has been shown to be critical for facial emotion recognition. However, the importance of sensorimotor representations in modalities outside of vision remains unknown. Here we use continuous theta-burst transcranial magnetic stimulation (cTBS) to investigate whether neural activity in the right postcentral gyrus (rPoG) and right lateral premotor cortex (rPM) is involved in non-verbal auditory emotion recognition. Three groups of participants completed same-different tasks on auditory stimuli, discriminating between either the emotion expressed or the speakers' identities, prior to and following cTBS targeted at rPoG, rPM or the vertex (control site). A task-selective deficit in auditory emotion discrimination was observed. Stimulation to rPoG and rPM resulted in a disruption of participants' abilities to discriminate emotion, but not identity, from vocal signals. These findings suggest that sensorimotor activity may be a modality independent mechanism which aids emotion discrimination.

    Additional information

    S1_Banissy.pdf
  • Bardhan, N. P. (2010). Adults’ self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. PhD Thesis, University of Rochester, Rochester, New York.

    Abstract

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three experiments, we asked whether adult learners choose to listen to novel words in a particular order based on their acoustic similarity. We use a new paradigm for learning an artificial lexicon in which the learner, rather than the experimenter, determines the order and frequency of exposure to items. We analyze both the proportions of selections and the temporal clustering of subjects' sampling of lexical neighborhoods during training as well as their performance during repeated testing phases (accuracy and reaction time) to determine the time course of learning these neighborhoods. In the first experiment, subjects sampled the high and low density neighborhoods randomly in early learning, and then over-sampled the high density neighborhood until test performance on both neighborhoods reached asymptote. A second experiment involved items similar to the first, but also neighborhoods that were not fully revealed at the start of the experiment. Subjects adjusted their training patterns to focus their selections on neighborhoods of increasing density as they were revealed; evidence of learning in the test phase was slower to emerge than in the first experiment, impaired by the presence of additional sets of items of varying density. Crucially, in both the first and second experiments there was no effect of dense vs. sparse neighborhood in the accuracy results, which is accounted for by subjects’ over-sampling of items from the dense neighborhood. The third experiment was identical in design to the second except for a second day of further training and testing on the same items. Testing at the beginning of the second day showed impaired, not improved, accuracy, except for the consistently dense items. Further training, however, improved accuracy for some items to above Day 1 levels. Overall, these results provide a new window on the time-course of learning an artificial lexicon and the role that learners’ implicit preferences, stemming from their self-selected experience with the entire lexicon, play in learning highly confusable words.
  • Bardhan, N. P., Aslin, R., & Tanenhaus, M. (2010). Adults' self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (pp. 364-368). Austin, TX: Cognitive Science Society.
  • Barendse, M. T., Albers, C. J., Oort, F. J., & Timmerman, M. E. (2014). Measurement bias detection through Bayesian factor analysis. Frontiers in Psychology, 5: 1087. doi:10.3389/fpsyg.2014.01087.

    Abstract

    Measurement bias has been defined as a violation of measurement invariance. Potential violators—variables that possibly violate measurement invariance—can be investigated through restricted factor analysis (RFA). The purpose of the present paper is to investigate a Bayesian approach to estimate RFA models with interaction effects, in order to detect uniform and nonuniform measurement bias. Because modeling nonuniform bias requires an interaction term, it is more complicated than modeling uniform bias. The Bayesian approach seems especially suited for such complex models. In a simulation study we vary the type of bias (uniform, nonuniform), the type of violator (observed continuous, observed dichotomous, latent continuous), and the correlation between the trait and the violator (0.0, 0.5). For each condition, 100 sets of data are generated and analyzed. We examine the accuracy of the parameter estimates and the performance of two bias detection procedures, based on the DIC fit statistic, in Bayesian RFA. Results show that the accuracy of the estimated parameters is satisfactory. Bias detection rates are high in all conditions with an observed violator, and still satisfactory in all other conditions.
  • Barendse, M. T., Oort, F. J., & Garst, G. J. A. (2010). Using restricted factor analysis with latent moderated structures to detect uniform and nonuniform measurement bias: A simulation study. AStA Advances in Statistical Analysis, 94, 117-127. doi:10.1007/s10182-010-0126-1.

    Abstract

    Factor analysis is an established technique for the detection of measurement bias. Multigroup factor analysis (MGFA) can detect both uniform and nonuniform bias. Restricted factor analysis (RFA) can also be used to detect measurement bias, albeit only uniform measurement bias. Latent moderated structural equations (LMS) enable the estimation of nonlinear interaction effects in structural equation modelling. By extending the RFA method with LMS, the RFA method should be suited to detect nonuniform bias as well as uniform bias. In a simulation study, the RFA/LMS method and the MGFA method are compared in detecting uniform and nonuniform measurement bias under various conditions, varying the size of uniform bias, the size of nonuniform bias, the sample size, and the ability distribution. For each condition, 100 sets of data were generated and analysed through both detection methods. The RFA/LMS and MGFA methods turned out to perform equally well. Percentages of correctly identified items as biased (true positives) generally varied between 92% and 100%, except in small sample size conditions in which the bias was nonuniform and small. For both methods, the percentages of false positives were generally higher than the nominal levels of significance.
  • Baron-Cohen, S., Murphy, L., Chakrabarti, B., Craig, I., Mallya, U., Lakatosova, S., Rehnstrom, K., Peltonen, L., Wheelwright, S., Allison, C., Fisher, S. E., & Warrier, V. (2014). A genome wide association study of mathematical ability reveals an association at chromosome 3q29, a locus associated with autism and learning difficulties: A preliminary study. PLoS One, 9(5): e96374. doi:10.1371/journal.pone.0096374.

    Abstract

    Mathematical ability is heritable, but few studies have directly investigated its molecular genetic basis. Here we aimed to identify specific genetic contributions to variation in mathematical ability. We carried out a genome wide association scan using pooled DNA in two groups of U.K. samples, based on end of secondary/high school national academic exam achievement: high (n = 419) versus low (n = 183) mathematical ability while controlling for their verbal ability. Significant differences in allele frequencies between these groups were searched for in 906,600 SNPs using the Affymetrix GeneChip Human Mapping version 6.0 array. After meeting a threshold of p<1.5×10−5, 12 SNPs from the pooled association analysis were individually genotyped in 542 of the participants and analyzed to validate the initial associations (lowest p-value 1.14 × 10−6). In this analysis, one of the SNPs (rs789859) showed significant association after Bonferroni correction, and four (rs10873824, rs4144887, rs12130910, rs2809115) were nominally significant (lowest p-value 3.278 × 10−4). Three of the SNPs of interest are located within, or near to, known genes (FAM43A, SFT2D1, C14orf64). The SNP that showed the strongest association, rs789859, is located in a region on chromosome 3q29 that has been previously linked to learning difficulties and autism. rs789859 lies 1.3 kbp downstream of LSG1, and 700 bp upstream of FAM43A, mapping within the potential promoter/regulatory region of the latter. To our knowledge, this is only the second study to investigate the association of genetic variants with mathematical ability, and it highlights a number of interesting markers for future study.
  • Barr, D. J., & Seyfeddinipur, M. (2010). The role of fillers in listener attributions for speaker disfluency. Language and Cognitive Processes, 25, 441-455. doi:10.1080/01690960903047122.

    Abstract

    When listeners hear a speaker become disfluent, they expect the speaker to refer to something new. What is the mechanism underlying this expectation? In a mouse-tracking experiment, listeners sought to identify images that a speaker was describing. Listeners more strongly expected new referents when they heard a speaker say um than when they heard a matched utterance where the um was replaced by noise. This expectation was speaker-specific: it depended on what was new and old for the current speaker, not just on what was new or old for the listener. This finding suggests that listeners treat fillers as collateral signals.
  • Basnakova, J., Weber, K., Petersson, K. M., Van Berkum, J. J. A., & Hagoort, P. (2014). Beyond the language given: The neural correlates of inferring speaker meaning. Cerebral Cortex, 24(10), 2572-2578. doi:10.1093/cercor/bht112.

    Abstract

    Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message.
  • Bastiaansen, M. C. M., & Hagoort, P. (2006). Oscillatory neuronal dynamics during language comprehension. In C. Neuper, & W. Klimesch (Eds.), Event-related dynamics of brain oscillations (pp. 179-196). Amsterdam: Elsevier.

    Abstract

    Language comprehension involves two basic operations: the retrieval of lexical information (such as phonologic, syntactic, and semantic information) from long-term memory, and the unification of this information into a coherent representation of the overall utterance. Neuroimaging studies using hemodynamic measures such as PET and fMRI have provided detailed information on which areas of the brain are involved in these language-related memory and unification operations. However, much less is known about the dynamics of the brain's language network. This chapter presents a literature review of the oscillatory neuronal dynamics of EEG and MEG data that can be observed during language comprehension tasks. From a detailed review of this (rapidly growing) literature the following picture emerges: memory retrieval operations are mostly accompanied by increased neuronal synchronization in the theta frequency range (4-7 Hz). Unification operations, in contrast, induce high-frequency neuronal synchronization in the beta (12-30 Hz) and gamma (above 30 Hz) frequency bands. A desynchronization in the (upper) alpha frequency band is found for those studies that use secondary tasks, and seems to correspond with attentional processes, and with the behavioral consequences of the language comprehension process. We conclude that it is possible to capture the dynamics of the brain's language network by a careful analysis of the event-related changes in power and coherence of EEG and MEG data in a wide range of frequencies, in combination with subtle experimental manipulations in a range of language comprehension tasks. It appears then that neuronal synchrony is a mechanism by which the brain integrates the different types of information about language (such as phonological, orthographic, semantic, and syntactic information) represented in different brain areas.
  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Cluitmans, P. J. M., & Brunia, C. H. M. (1999). Event-related desynchronization related to the anticipation of a stimulus providing knowledge of results. Clinical Neurophysiology, 110, 250-260.

    Abstract

    In the present paper, event-related desynchronization (ERD) in the alpha and beta frequency bands is quantified in order to investigate the processes related to the anticipation of a knowledge of results (KR) stimulus. In a time estimation task, 10 subjects were instructed to press a button 4 s after the presentation of an auditory stimulus. Two seconds after the response they received auditory or visual feedback on the timing of their response. Preceding the button press, a centrally maximal ERD is found. Preceding the visual KR stimulus, an ERD is present that has an occipital maximum. Contrary to expectation, preceding the auditory KR stimulus there are no signs of a modality-specific ERD. Results are related to a thalamo-cortical gating model which predicts a correspondence between negative slow potentials and ERD during motor preparation and stimulus anticipation.
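    The ERD quantification referred to in this and the related abstracts above follows the classic band-power definition: the percentage decrease of band power in an activation interval relative to a pre-event reference interval. As a minimal illustrative sketch (not the authors' code), assuming a single-channel signal that has already been band-pass filtered into the band of interest:

    ```python
    import numpy as np

    def erd_percent(signal, fs, ref_window, act_window):
        """Percentage band-power decrease in an activation window
        relative to a reference window (classic ERD definition).
        `signal` is assumed to be band-pass filtered already;
        windows are (start_s, end_s) tuples in seconds."""
        def band_power(window):
            i0, i1 = int(window[0] * fs), int(window[1] * fs)
            segment = signal[i0:i1]
            return np.mean(segment ** 2)  # mean squared amplitude = power

        r = band_power(ref_window)
        a = band_power(act_window)
        # Positive values = desynchronization (power decrease),
        # negative values = synchronization (power increase).
        return (r - a) / r * 100.0
    ```

    For example, an alpha-band oscillation whose amplitude halves after event onset drops to a quarter of its reference power, giving an ERD of 75%.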
  • Bastiaansen, M. C. M., Magyari, L., & Hagoort, P. (2010). Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. Journal of Cognitive Neuroscience, 22, 1333-1347. doi:10.1162/jocn.2009.21283.

    Abstract

    There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word category violation, or were constituted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: A linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Thirdly, we observed a linear increase in theta power across the sentence for all syntactically structured sentences. The effects are tentatively related to the building of a working memory trace of the linguistic input. In conclusion, the data seem to suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.
  • Bauer, B. L. M. (2000). Archaic syntax in Indo-European: The spread of transitivity in Latin and French. Berlin: Mouton de Gruyter.

    Abstract

    Several grammatical features in early Indo-European traditionally have not been understood. Although Latin, for example, was a nominative language, a number of its inherited characteristics do not fit that typology and are difficult to account for, such as stative mihi est constructions to express possession, impersonal verbs, or absolute constructions. With time these archaic features have been replaced by transitive structures (e.g. possessive ‘have’). This book presents an extensive comparative and historical analysis of archaic features in early Indo-European languages and their gradual replacement in the history of Latin and early Romance, showing that the new structures feature transitive syntax and fit the patterns of a nominative language.
  • Bauer, B. L. M. (1999). Aspects of impersonal constructions in Late Latin. In H. Petersmann, & R. Kettelmann (Eds.), Latin vulgaire – latin tardif V (pp. 209-211). Heidelberg: Winter.
  • Bauer, B. L. M. (2006). ‘Synthetic’ vs. ‘analytic’ in Romance: The importance of varieties. In R. Gess, & D. Arteaga (Eds.), Historical Romance linguistics: Retrospective and perspectives (pp. 287-304). Amsterdam: Benjamins.
  • Bauer, B. L. M. (2010). Fore-runners of Romance -mente adverbs in Latin prose and poetry. In E. Dickey, & A. Chahoud (Eds.), Colloquial and literary Latin (pp. 339-353). Cambridge: Cambridge University Press.
  • Bauer, B. L. M. (2000). From Latin to French: The linear development of word order. In B. Bichakjian, T. Chernigovskaya, A. Kendon, & A. Müller (Eds.), Becoming Loquens: More studies in language origins (pp. 239-257). Frankfurt am Main: Lang.
  • Bauer, B. L. M. (1999). Impersonal HABET constructions: At the cross-roads of Indo-European innovation. In E. Polomé, & C. Justus (Eds.), Language change and typological variation. Vol II. Grammatical universals and typology (pp. 590-612). Washington: Institute for the study of man.
  • Bauer, B. L. M. (2014). Indefinite HOMO in the Gospels of the Vulgata. In P. Molinelli, P. Cuzzolin, & C. Fedriani (Eds.), Latin vulgaire – latin tardif X (pp. 415-435). Bergamo: Bergamo University Press.
  • Bavin, E. L., & Kidd, E. (2000). Learning new verbs: Beyond the input. In C. Davis, T. J. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society.
  • Bavin, E. L., Kidd, E., Prendergast, L., Baker, E., Dissanayake, C., & Prior, M. (2014). Severity of autism is related to children's language processing. Autism Research, 7(6), 687-694. doi:10.1002/aur.1410.

    Abstract

    Problems in language processing have been associated with autism spectrum disorder (ASD), with some research attributing the problems to overall language skills rather than a diagnosis of ASD. Lexical access was assessed in a looking-while-listening task in three groups of 5- to 7-year-old children; two had high-functioning ASD (HFA), an ASD severe (ASD-S) group (n = 16) and an ASD moderate (ASD-M) group (n = 21). The third group were typically developing (TD) (n = 48). Participants heard sentences of the form “Where's the x?” and their eye movements to targets (e.g., train), phonological competitors (e.g., tree), and distractors were recorded. Proportions of looking time at target were analyzed within 200 ms intervals. Significant group differences were found between the ASD-S and TD groups only, at time intervals 1000–1200 and 1200–1400 ms postonset. The TD group was more likely to be fixated on target. These differences were maintained after adjusting for language, verbal and nonverbal IQ, and attention scores. An analysis using parent report of autistic-like behaviors showed higher scores to be associated with lower proportions of looking time at target, regardless of group. Further analysis showed fixation for the TD group to be significantly faster than for the ASD-S. In addition, incremental processing was found for all groups. The study findings suggest that severity of autistic behaviors will impact significantly on children's language processing in real life situations when exposed to syntactically complex material. They also show the value of using online methods for understanding how young children with ASD process language. Autism Res 2014, 7: 687–694.
  • Begeer, S., Malle, B. F., Nieuwland, M. S., & Keysar, B. (2010). Using theory of mind to represent and take part in social interactions: Comparing individuals with high-functioning autism and typically developing controls. European Journal of Developmental Psychology, 7(1), 104-122. doi:10.1080/17405620903024263.

    Abstract

    The literature suggests that individuals with autism spectrum disorders (ASD) are deficient in their Theory of Mind (ToM) abilities. They sometimes do not seem to appreciate that behaviour is motivated by underlying mental states. If this is true, then individuals with ASD should also be deficient when they use their ToM to represent and take part in dyadic interactions. In the current study we compared the performance of normally intelligent adolescents and adults with ASD to typically developing controls. In one task they heard a narrative about an interaction and then retold it. In a second task they played a communication game that required them to take into account another person's perspective. We found that when they described people's behaviour the ASD individuals used fewer mental terms in their story narration, suggesting a lower tendency to represent interactions in mentalistic terms. Surprisingly, ASD individuals and control participants showed the same level of performance in the communication game that required them to distinguish between their beliefs and the other's beliefs. Given that ASD individuals show no deficiency in using their ToM in real interaction, it is unlikely that they have a systematically deficient ToM.
  • Benyamin, B., St Pourcain, B., Davis, O. S., Davies, G., Hansell, N. K., Brion, M.-J., Kirkpatrick, R. M., Cents, R. A. M., Franić, S., Miller, M. B., Haworth, C. M. A., Meaburn, E., Price, T. S., Evans, D. M., Timpson, N., Kemp, J., Ring, S., McArdle, W., Medland, S. E., Yang, J., Harris, S. E., Liewald, D. C., Scheet, P., Xiao, X., Hudziak, J. J., de Geus, E. J. C., Jaddoe, V. W. V., Starr, J. M., Verhulst, F. C., Pennell, C., Tiemeier, H., Iacono, W. G., Palmer, L. J., Montgomery, G. W., Martin, N. G., Boomsma, D. I., Posthuma, D., McGue, M., Wright, M. J., Davey Smith, G., Deary, I. J., Plomin, R., & Visscher, P. M. (2014). Childhood intelligence is heritable, highly polygenic and associated with FNBP1L. Molecular Psychiatry, 19(2), 253-258. doi:10.1038/mp.2012.184.

    Abstract

    Intelligence in childhood, as measured by psychometric cognitive tests, is a strong predictor of many important life outcomes, including educational attainment, income, health and lifespan. Results from twin, family and adoption studies are consistent with general intelligence being highly heritable and genetically stable throughout the life course. No robustly associated genetic loci or variants for childhood intelligence have been reported. Here, we report the first genome-wide association study (GWAS) on childhood intelligence (age range 6–18 years) from 17 989 individuals in six discovery and three replication samples. Although no individual single-nucleotide polymorphisms (SNPs) were detected with genome-wide significance, we show that the aggregate effects of common SNPs explain 22–46% of phenotypic variation in childhood intelligence in the three largest cohorts (P=3.9 × 10−15, 0.014 and 0.028). FNBP1L, previously reported to be the most significantly associated gene for adult intelligence, was also significantly associated with childhood intelligence (P=0.003). Polygenic prediction analyses resulted in a significant correlation between predictor and outcome in all replication cohorts. The proportion of childhood intelligence explained by the predictor reached 1.2% (P=6 × 10−5), 3.5% (P=10−3) and 0.5% (P=6 × 10−5) in three independent validation cohorts. Given the sample sizes, these genetic prediction results are consistent with expectations if the genetic architecture of childhood intelligence is like that of body mass index or height. Our study provides molecular support for the heritability and polygenic nature of childhood intelligence. Larger sample sizes will be required to detect individual variants with genome-wide significance.
  • Berck, P., Bibiko, H.-J., Kemps-Snijders, M., Russel, A., & Wittenburg, P. (2006). Ontology-based language archive utilization. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2295-2298).
  • Berends, S., Veenstra, A., & Van Hout, A. (2010). 'Nee, ze heeft er twee': Acquisition of the Dutch quantitative 'er'. Groninger Arbeiten zur Germanistischen Linguistik, 51, 1-7. Retrieved from http://irs.ub.rug.nl/dbi/4ef4a0b3eafcb.

    Abstract

    We present the first study on the acquisition of the Dutch quantitative pronoun er in sentences such as de vrouw draagt er drie ‘the woman is carrying three.’ There is a large literature on Dutch children’s interpretation of pronouns and a few recent production studies, all specifically looking at 3rd person singular pronouns and the so-called Delay of Principle B effect (Coopmans & Philip, 1996; Koster, 1993; Spenader, Smits and Hendriks, 2009). However, no one has studied children’s use of quantitative er. Dutch is the only Germanic language with such a pronoun.
  • Bergmann, C., Ten Bosch, L., & Boves, L. (2014). A computational model of the headturn preference procedure: Design, challenges, and insights. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes (pp. 125-136). World Scientific. doi:10.1142/9789814458849_0010.

    Abstract

    The Headturn Preference Procedure (HPP) is a frequently used method (e.g., Jusczyk & Aslin; and subsequent studies) to investigate linguistic abilities in infants. In this paradigm infants are usually first familiarised with words and then tested for a listening preference for passages containing those words in comparison to unrelated passages. Listening preference is defined as the time an infant spends attending to those passages with his or her head turned towards a flashing light and the speech stimuli. The knowledge and abilities inferred from the results of HPP studies have been used to reason about and formally model early linguistic skills and language acquisition. However, the actual cause of infants' behaviour in HPP experiments has been subject to numerous assumptions as there are no means to directly tap into cognitive processes. To make these assumptions explicit, and more crucially, to understand how infants' behaviour emerges if only general learning mechanisms are assumed, we introduce a computational model of the HPP. Simulations with the computational HPP model show that the difference in infant behaviour between familiarised and unfamiliar words in passages can be explained by a general learning mechanism and that many assumptions underlying the HPP are not necessarily warranted. We discuss the implications for conventional interpretations of the outcomes of HPP experiments.
  • Bergmann, C. (2014). Computational models of early language acquisition and the role of different voices. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bergmann, C., Paulus, M., & Fikkert, J. (2010). A closer look at pronoun comprehension: Comparing different methods. In J. Costa, A. Castro, M. Lobo, & F. Pratas (Eds.), Language Acquisition and Development: Proceedings of GALA 2009 (pp. 53-61). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    External input is necessary to acquire language. Consequently, the comprehension of various constituents of language, such as lexical items or syntactic and semantic structures, should emerge at the same time as, or even precede, their production. However, in the case of pronouns this general assumption does not seem to hold. On the contrary, while children at the age of four use pronouns and reflexives appropriately in production (de Villiers et al., 2006), a number of comprehension studies across different languages found chance performance in pronoun trials up to the age of seven, which co-occurs with a high level of accuracy in reflexive trials (for an overview see e.g. Conroy et al., 2009; Elbourne, 2005).
  • Bergmann, C., Gubian, M., & Boves, L. (2010). Modelling the effect of speaker familiarity and noise on infant word recognition. In Proceedings of the 11th Annual Conference of the International Speech Communication Association [Interspeech 2010] (pp. 2910-2913). ISCA.

    Abstract

    In the present paper we show that a general-purpose word learning model can simulate several important findings from recent experiments in language acquisition. Both the addition of background noise and varying the speaker have been found to influence infants’ performance during word recognition experiments. We were able to replicate this behaviour in our artificial word learning agent. We use the results to discuss both advantages and limitations of computational models of language acquisition.
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P. M., & Fotopoulou, A. (2014). The affective modulation of motor awareness in anosognosia for hemiplegia: Behavioural and lesion evidence. Cortex, 61, 127-140. doi:10.1016/j.cortex.2014.08.016.

    Abstract

    The possible role of emotion in anosognosia for hemiplegia (i.e., denial of motor deficits contralateral to a brain lesion) has long been debated between psychodynamic and neurocognitive theories. However, there are only a handful of case studies focussing on this topic, and the precise role of emotion in anosognosia for hemiplegia requires empirical investigation. In the present study, we aimed to investigate how negative and positive emotions influence motor awareness in anosognosia. Positive and negative emotions were induced under carefully controlled experimental conditions in right-hemisphere stroke patients with anosognosia for hemiplegia (n = 11) and controls with clinically normal awareness (n = 10). Only the negative emotion induction condition resulted in a significant improvement of motor awareness in anosognosic patients compared to controls; the positive emotion induction did not. Using lesion overlay and voxel-based lesion-symptom mapping approaches, we also investigated the brain lesions associated with the diagnosis of anosognosia, as well as with performance on the experimental task. Anatomical areas that are commonly damaged in AHP included the right-hemisphere motor and sensory cortices, the inferior frontal cortex, and the insula. Additionally, the insula, putamen and anterior periventricular white matter were associated with less awareness change following the negative emotion induction. This study suggests that motor unawareness and the observed lack of negative emotions about one's disabilities cannot be adequately explained by either purely motivational or neurocognitive accounts. Instead, we propose an integrative account in which insular and striatal lesions result in weak interoceptive and motivational signals. These deficits lead to faulty inferences about the self, involving a difficulty to personalise new sensorimotor information, and an abnormal adherence to premorbid beliefs about the body.

    Additional information

    supplementary file
  • Bidgood, A., Ambridge, B., Pine, J. M., & Rowland, C. F. (2014). The retreat from locative overgeneralisation errors: A novel verb grammaticality judgment study. PLoS One, 9(5): e97634. doi:10.1371/journal.pone.0097634.

    Abstract

    Whilst some locative verbs alternate between the ground- and figure-locative constructions (e.g. Lisa sprayed the flowers with water/Lisa sprayed water onto the flowers), others are restricted to one construction or the other (e.g. *Lisa filled water into the cup/*Lisa poured the cup with water). The present study investigated two proposals for how learners (aged 5–6, 9–10 and adults) acquire this restriction, using a novel-verb-learning grammaticality-judgment paradigm. In support of the semantic verb class hypothesis, participants in all age groups used the semantic properties of novel verbs to determine the locative constructions (ground/figure/both) in which they could and could not appear. In support of the frequency hypothesis, participants' tolerance of overgeneralisation errors decreased with each increasing level of verb frequency (novel/low/high). These results underline the need to develop an integrated account of the roles of semantics and frequency in the retreat from argument structure overgeneralisation.
  • Blasi, D. E., Christiansen, M. H., Wichmann, S., Hammarström, H., & Stadler, P. F. (2014). Sound symbolism and the origins of language. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 391-392). Singapore: World Scientific.
  • Blythe, J. (2010). From ethical datives to number markers in Murriny Patha. In R. Hendery, & J. Hendriks (Eds.), Grammatical change: Theory and description (pp. 157-187). Canberra: Pacific Linguistics.
  • Blythe, J. (2010). Self-association in Murriny Patha talk-in-interaction. In I. Mushin, & R. Gardner (Eds.), Studies in Australian Indigenous Conversation [Special issue] (pp. 447-469). Australian Journal of Linguistics. doi:10.1080/07268602.2010.518555.

    Abstract

    When referring to persons in talk-in-interaction, interlocutors recruit the particular referential expressions that best satisfy both cultural and interactional contingencies, as well as the speaker’s own personal objectives. Regular referring practices reveal cultural preferences for choosing particular classes of reference forms for engaging in particular types of activities. When speakers of the northern Australian language Murriny Patha refer to each other, they display a clear preference for associating the referent to the current conversation’s participants. This preference for Association is normally achieved through the use of triangular reference forms such as kinterms. Triangulations are reference forms that link the person being spoken about to another specified person (e.g. Bill’s doctor). Triangulations are frequently used to associate the referent to the current speaker (e.g. my father), to an addressed recipient (your uncle) or co-present other (this bloke’s cousin). Murriny Patha speakers regularly associate key persons to themselves when making authoritative claims about items of business and important events. They frequently draw on kinship links when attempting to bolster their epistemic position. When speakers demonstrate their relatedness to the event’s protagonists, they ground their contribution to the discussion as being informed by appropriate genealogical connections (effectively, ‘I happen to know something about that. He was after all my own uncle’).
  • Bocanegra, B. R., Poletiek, F. H., & Zwaan, R. A. (2014). Asymmetrical feature binding across language and perception. In Proceedings of the 7th annual Conference on Embodied and Situated Language Processing (ESLP 2014).
  • Bock, K., Butterfield, S., Cutler, A., Cutting, J. C., Eberhard, K. M., & Humphreys, K. R. (2006). Number agreement in British and American English: Disagreeing to agree collectively. Language, 82(1), 64-113.

    Abstract

    British and American speakers exhibit different verb number agreement patterns when sentence subjects have collective head nouns. From linguistic and psycholinguistic accounts of how agreement is implemented, three alternative hypotheses can be derived to explain these differences. The hypotheses involve variations in the representation of notional number, disparities in how notional and grammatical number are used, and inequalities in the grammatical number specifications of collective nouns. We carried out a series of corpus analyses, production experiments, and norming studies to test these hypotheses. The results converge to suggest that British and American speakers are equally sensitive to variations in notional number and implement subject-verb agreement in much the same way, but are likely to differ in the lexical specifications of number for collectives. The findings support a psycholinguistic theory that explains verb and pronoun agreement within a parallel architecture of lexical and syntactic formulation.
  • Böcker, K. B. E., Bastiaansen, M. C. M., Vroomen, J., Brunia, C. H. M., & de Gelder, B. (1999). An ERP correlate of metrical stress in spoken word recognition. Psychophysiology, 36, 706-720. doi:10.1111/1469-8986.3660706.

    Abstract

    Rhythmic properties of spoken language such as metrical stress, that is, the alternation of strong and weak syllables, are important in speech recognition of stress-timed languages such as Dutch and English. Nineteen subjects listened passively to or discriminated actively between sequences of bisyllabic Dutch words, which started with either a weak or a strong syllable. Weak-initial words, which constitute 12% of the Dutch lexicon, evoked more negativity than strong-initial words in the interval between the P2 and N400 components of the auditory event-related potential. This negativity was denoted as N325. The N325 was larger during stress discrimination than during passive listening. The N325 was also larger when a weak-initial word followed a sequence of strong-initial words than when it followed words with the same stress pattern. The latter difference was larger for listeners who performed well on stress discrimination. It was concluded that the N325 is probably a manifestation of the extraction of metrical stress from the acoustic signal and its transformation into task requirements.
  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants’ immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based “looking game” was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devalued their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams's need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • Bod, R., Fitz, H., & Zuidema, W. (2006). On the structural ambiguity in natural language that the neural architecture cannot deal with [Commentary]. Behavioral and Brain Sciences, 29, 71-72. doi:10.1017/S0140525X06239025.

    Abstract

    We argue that van der Velde and de Kamps's model does not solve the binding problem but merely shifts the burden of constructing appropriate neural representations of sentence structure to unexplained preprocessing of the linguistic input. As a consequence, their model is not able to explain how various neural representations can be assigned to sentences that are structurally ambiguous.
  • De Boer, B., & Perlman, M. (2014). Physical mechanisms may be as important as brain mechanisms in evolution of speech [Commentary on Ackerman, Hage, & Ziegler. Brain Mechanisms of acoustic communication in humans and nonhuman primates: an evolutionary perspective]. Behavioral and Brain Sciences, 37(6), 552-553. doi:10.1017/S0140525X13004007.

    Abstract

    We present two arguments why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional. Second, we present an example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D. J., & Kerkhofs, R. (2010). The interplay between prosody and syntax in sentence processing: The case of subject- and object-control verbs. Journal of Cognitive Neuroscience, 22(5), 1036-1053. doi:10.1162/jocn.2009.21269.

    Abstract

    This study addresses the question whether prosodic information can affect the choice for a syntactic analysis in auditory sentence processing. We manipulated the prosody (in the form of a prosodic break; PB) of locally ambiguous Dutch sentences to favor one of two interpretations. The experimental items contained two different types of so-called control verbs (subject and object control) in the matrix clause and were syntactically disambiguated by a transitive or by an intransitive verb. In Experiment 1, we established the default off-line preference of the items for a transitive or an intransitive disambiguating verb with a visual and an auditory fragment completion test. The results suggested that subject- and object-control verbs differently affect the syntactic structure that listeners expect. In Experiment 2, we investigated these two types of verbs separately in an on-line ERP study. Consistent with the literature, the PB elicited a closure positive shift. Furthermore, in subject-control items, an N400 effect for intransitive relative to transitive disambiguating verbs was found, both for sentences with and for sentences without a PB. This result suggests that the default preference for subject-control verbs goes in the same direction as the effect of the PB. In object-control items, an N400 effect for intransitive relative to transitive disambiguating verbs was found for sentences with a PB but no effect in the absence of a PB. This indicates that a PB can affect the syntactic analysis that listeners pursue.
  • Bohnemeyer, J. (1999). A questionnaire on event integration. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 87-95). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002691.

    Abstract

    How do we decide where events begin and end? Like the ECOM clips, this questionnaire is designed to investigate how a language divides and/or integrates complex scenarios into sub-events and macro-events. The questionnaire focuses on events of motion, caused state change (e.g., breaking), and transfer (e.g., giving). It provides a checklist of scenarios that give insight into where a language “draws the line” in event integration, based on known cross-linguistic differences.
  • Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
  • Bohnemeyer, J. (1999). Event representation and event complexity: General introduction. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 69-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002741.

    Abstract

    How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). This document introduces issues concerning the linguistic and cognitive representations of event complexity and integration, and provides an overview of tasks that are relevant to this topic, including the ECOM clips, the Questionnaire on Event integration, and the Questionnaire on motion lexicalisation and motion description.
  • Bohnemeyer, J., & Caelen, M. (1999). The ECOM clips: A stimulus for the linguistic coding of event complexity. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 74-86). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874627.

    Abstract

    How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). The “Event Complexity” (ECOM) clips are designed to explore how languages differ in dividing and/or integrating complex scenarios into sub-events and macro-events. The stimuli consist of animated clips of geometric shapes that participate in different scenarios (e.g., a circle “hits” a triangle and “breaks” it). Consultants are asked to describe the scenes, and then to comment on possible alternative descriptions.

    Additional information

    1999_The_ECOM_clips.zip
  • Bohnemeyer, J. (2000). Where do pragmatic meanings come from? In W. Spooren, T. Sanders, & C. van Wijk (Eds.), Samenhang in Diversiteit; Opstellen voor Leo Noorman, aangeboden bij gelegenheid van zijn zestigste verjaardag (pp. 137-153).
  • Bolton, J. L., Hayward, C., Direk, N., Lewis, J. G., Hammond, G. L., Hill, L. A., Anderson, A., Huffman, J., Wilson, J. F., Campbell, H., Rudan, I., Wright, A., Hastie, N., Wild, S. H., Velders, F. P., Hofman, A., Uitterlinden, A. G., Lahti, J., Räikkönen, K., Kajantie, E., Widen, E., Palotie, A., Eriksson, J. G., Kaakinen, M., Järvelin, M.-R., Timpson, N. J., Davey Smith, G., Ring, S. M., Evans, D. M., St Pourcain, B., Tanaka, T., Milaneschi, Y., Bandinelli, S., Ferrucci, L., van der Harst, P., Rosmalen, J. G. M., Bakker, S. J. L., Verweij, N., Dullaart, R. P. F., Mahajan, A., Lindgren, C. M., Morris, A., Lind, L., Ingelsson, E., Anderson, L. N., Pennell, C. E., Lye, S. J., Matthews, S. G., Eriksson, J., Mellstrom, D., Ohlsson, C., Price, J. F., Strachan, M. W. J., Reynolds, R. M., Tiemeier, H., Walker, B. R., & CORtisol NETwork (CORNET) Consortium (2014). Genome Wide Association Identifies Common Variants at the SERPINA6/SERPINA1 Locus Influencing Plasma Cortisol and Corticosteroid Binding Globulin. PLoS Genetics, 10(7): e1004474. doi:10.1371/journal.pgen.1004474.

    Abstract

    Variation in plasma levels of cortisol, an essential hormone in the stress response, is associated in population-based studies with cardio-metabolic, inflammatory and neuro-cognitive traits and diseases. Heritability of plasma cortisol is estimated at 30-60% but no common genetic contribution has been identified. The CORtisol NETwork (CORNET) consortium undertook genome wide association meta-analysis for plasma cortisol in 12,597 Caucasian participants, replicated in 2,795 participants. The results indicate that <1% of variance in plasma cortisol is accounted for by genetic variation in a single region of chromosome 14. This locus spans SERPINA6, encoding corticosteroid binding globulin (CBG, the major cortisol-binding protein in plasma), and SERPINA1, encoding α1-antitrypsin (which inhibits cleavage of the reactive centre loop that releases cortisol from CBG). Three partially independent signals were identified within the region, represented by common SNPs; detailed biochemical investigation in a nested sub-cohort showed all these SNPs were associated with variation in total cortisol binding activity in plasma, but some variants influenced total CBG concentrations while the top hit (rs12589136) influenced the immunoreactivity of the reactive centre loop of CBG. Exome chip and 1000 Genomes imputation analysis of this locus in the CROATIA-Korcula cohort identified missense mutations in SERPINA6 and SERPINA1 that did not account for the effects of common variants. These findings reveal a novel common genetic source of variation in binding of cortisol by CBG, and reinforce the key role of CBG in determining plasma cortisol levels. In turn this genetic variation may contribute to cortisol-associated degenerative diseases.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). Native 'um's elicit prediction of low-frequency referents, but non-native 'um's do not. Journal of Memory and Language, 75, 104-116. doi:10.1016/j.jml.2014.05.004.

    Abstract

    Speech comprehension involves extensive use of prediction. Linguistic prediction may be guided by the semantics or syntax, but also by the performance characteristics of the speech signal, such as disfluency. Previous studies have shown that listeners, when presented with the filler uh, exhibit a disfluency bias for discourse-new or unknown referents, drawing inferences about the source of the disfluency. The goal of the present study is to study the contrast between native and non-native disfluencies in speech comprehension. Experiment 1 presented listeners with pictures of high-frequency (e.g., a hand) and low-frequency objects (e.g., a sewing machine) and with fluent and disfluent instructions. Listeners were found to anticipate reference to low-frequency objects when encountering disfluency, thus attributing disfluency to speaker trouble in lexical retrieval. Experiment 2 showed that, when participants listened to disfluent non-native speech, no anticipation of low-frequency referents was observed. We conclude that listeners can adapt their predictive strategies to the (non-native) speaker at hand, extending our understanding of the role of speaker identity in speech comprehension.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). The perception of fluency in native and non-native speech. Language Learning, 64, 579-614. doi:10.1111/lang.12067.

    Abstract

    Where native speakers supposedly are fluent by default, non-native speakers often have to strive hard to achieve a native-like fluency level. However, disfluencies (such as pauses, fillers, repairs, etc.) occur in both native and non-native speech and it is as yet unclear how fluency raters weigh the fluency characteristics of native and non-native speech. Two rating experiments compared the way raters assess the fluency of native and non-native speech. The fluency characteristics of native and non-native speech were controlled by using phonetic manipulations in pause (Experiment 1) and speed characteristics (Experiment 2). The results show that the ratings on manipulated native and non-native speech were affected in a similar fashion. This suggests that there is no difference in the way listeners weigh the fluency characteristics of native and non-native speakers.
  • Bosker, H. R. (2014). The processing and evaluation of fluency in native and non-native speech. PhD Thesis, Utrecht University, Utrecht.

    Abstract

    Disfluency is a common characteristic of spontaneously produced speech. Disfluencies (e.g., silent pauses, filled pauses [uh’s and uhm’s], corrections, repetitions, etc.) occur in both native and non-native speech. There is an apparent contradiction between claims from the evaluative and the cognitive approach to fluency. On the one hand, the evaluative approach shows that non-native disfluencies have a negative effect on listeners’ subjective fluency impressions. On the other hand, the cognitive approach reports beneficial effects of native disfluencies on cognitive processes involved in speech comprehension, such as prediction and attention.

    This dissertation aims to resolve this apparent contradiction by combining the evaluative and cognitive approach. The reported studies target both the evaluation (Chapters 2 and 3) and the processing of fluency (Chapters 4 and 5) in native and non-native speech. Thus, it provides an integrative account of native and non-native fluency perception, informative to both language testing practice and cognitive psycholinguists. The proposed account of fluency perception testifies to the notion that speech performance matters: communication through spoken language does not only depend on what is said, but also on how it is said and by whom.
  • Bosker, H. R., Briaire, J., Heeren, W., van Heuven, V. J., & Jongman, S. R. (2010). Whispered speech as input for cochlear implants. In J. Van Kampen, & R. Nouwen (Eds.), Linguistics in the Netherlands 2010 (pp. 1-14).
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In C. Hölscher, T. F. Shipley, M. Olivetti Belardinelli, J. A. Bateman, & N. Newcombe (Eds.), Spatial Cognition VII. International Conference, Spatial Cognition 2010, Mt. Hood/Portland, OR, USA, August 15-19, 2010. Proceedings (pp. 152-162). Berlin Heidelberg: Springer.

    Abstract

    How are space and time represented in the human mind? Here we evaluate two theoretical proposals, one suggesting a symmetric relationship between space and time (ATOM theory) and the other an asymmetric relationship (metaphor theory). In Experiment 1, Dutch-speakers saw 7-letter nouns that named concrete objects of various spatial lengths (tr. pencil, bench, footpath) and estimated how much time they remained on the screen. In Experiment 2, participants saw nouns naming temporal events of various durations (tr. blink, party, season) and estimated the words’ spatial length. Nouns that named short objects were judged to remain on the screen for a shorter time, and nouns that named longer objects to remain for a longer time. By contrast, variations in the duration of the event nouns’ referents had no effect on judgments of the words’ spatial length. This asymmetric pattern of cross-dimensional interference supports metaphor theory and challenges ATOM.
  • Bottini, R., & Casasanto, D. (2010). Implicit spatial length modulates time estimates, but not vice versa. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1348-1353). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Bowerman, M. (1980). The structure and origin of semantic categories in the language learning child. In M. Foster, & S. Brandes (Eds.), Symbol as sense (pp. 277-299). New York: Academic Press.
  • Bowerman, M. (2000). Where do children's word meanings come from? Rethinking the role of cognition in early semantic development. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought and development (pp. 199-230). Mahwah, NJ: Lawrence Erlbaum.
  • Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An FMRI study. The Open Neuroimaging Journal, 4, 164-173. doi:10.2174/1874440001004010164.

    Abstract

    To clarify whether the neural pathways concerning color processing are the same for natural objects, for artifact objects and for non-sense objects, we examined functional magnetic resonance imaging (FMRI) responses during a covert naming task including the factors color (color vs. black&white (B&W)) and stimulus type (natural vs. artifact vs. non-sense objects). Our results indicate that the superior parietal lobule and precuneus (BA 7) bilaterally, the right hippocampus and the right fusiform gyrus (V4) form part of a network responsible for color processing both for natural and artifact objects, but not for non-sense objects. The recognition of non-sense colored objects compared to the recognition of color objects activated the posterior cingulate/precuneus (BA 7/23/31), suggesting that the color attribute induces the mental operation of trying to associate a non-sense composition with a familiar object. When color objects (both natural and artifacts) were contrasted with color nonobjects, we observed activations in the right parahippocampal gyrus (BA 35/36), the superior parietal lobule (BA 7) bilaterally, the left inferior middle temporal region (BA 20/21) and the inferior and superior frontal regions (BA 10/11/47). These additional activations suggest that colored objects recruit brain regions that are related to visual semantic information/retrieval and brain regions related to visuo-spatial processing. Overall, the results suggest that color information is an attribute that improves object recognition (based on behavioral results) and activates a specific neural network related to visual semantic information that is more extensive than for B&W objects during object recognition.
  • Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2010). The influence of surface color information and color knowledge information in object recognition. American Journal of Psychology, 123, 437-466. Retrieved from http://www.jstor.org/stable/10.5406/amerjpsyc.123.4.0437.

    Abstract

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name—object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions for each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.
  • Braun, B. (2006). Phonetics and phonology of thematic contrast in German. Language and Speech, 49(4), 451-493.

    Abstract

    It is acknowledged that contrast plays an important role in understanding discourse and information structure. While it is commonly assumed that contrast can be marked by intonation only, our understanding of the intonational realization of contrast is limited. For German there is mainly introspective evidence that the rising theme accent (or topic accent) is realized differently when signaling contrast than when not. In this article, the acoustic basis for the reported impressionistic differences is investigated in terms of the scaling (height) and alignment (positioning) of tonal targets.

    Subjects read target sentences in a contrastive and a noncontrastive context (Experiment 1). Prosodic annotation revealed that thematic accents were not realized with different accent types in the two contexts but acoustic comparison showed that themes in contrastive context exhibited a higher and later peak. The alignment and scaling of accents can hence be controlled in a linguistically meaningful way, which has implications for intonational phonology. In Experiment 2, nonlinguists' perception of a subset of the production data was assessed. They had to choose whether, in a contrastive context, the presumed contrastive or noncontrastive realization of a sentence was more appropriate. For some sentence pairs only, subjects had a clear preference. For Experiment 3, a group of linguists annotated the thematic accents of the contrastive and noncontrastive versions of the same data as used in Experiment 2. There was considerable disagreement in labels, but different accent types were consistently used when the two versions differed strongly in F0 excursion. Although themes in contrastive contexts were clearly produced differently than themes in noncontrastive contexts, this difference is not easily perceived or annotated.
  • Braun, B., Kochanski, G., Grabe, E., & Rosner, B. S. (2006). Evidence for attractors in English intonation. Journal of the Acoustical Society of America, 119(6), 4006-4015. doi:10.1121/1.2195267.

    Abstract

    Although the pitch of the human voice is continuously variable, some linguists contend that intonation in speech is restricted to a small, limited set of patterns. This claim is tested by asking subjects to mimic a block of 100 randomly generated intonation contours and then to imitate themselves in several successive sessions. The produced f0 contours gradually converge towards a limited set of distinct, previously recognized basic English intonation patterns. These patterns are "attractors" in the space of possible English intonation contours. The convergence does not occur immediately. Seven of the ten participants show continued convergence toward their attractors after the first iteration. Subjects retain and use information beyond phonological contrasts, suggesting that intonational phonology is not a complete description of their mental representation of intonation.
  • Braun, B., & Chen, A. (2010). Intonation of 'now' in resolving scope ambiguity in English and Dutch. Journal of Phonetics, 38, 431-444. doi:10.1016/j.wocn.2010.04.002.

    Abstract

    The adverb now in English (nu in Dutch) can draw listeners’ attention to an upcoming contrast (e.g., ‘Put X in Y. Now put X in Z’). In Dutch, but not English, the position of this sequential adverb may disambiguate which constituent is contrasted. We investigated whether and how the intonational realization of now/nu is varied to signal different scopes and whether it interacts with word order. Three contrast conditions (contrast in object, location, or both) were produced by eight Dutch and eight English speakers. Results showed no consistent use of word order for scope disambiguation in Dutch. Importantly, independent of language, an unaccented now/nu signaled a contrasting object while an accented now/nu signaled a contrast in the location. Since these intonational patterns were independent of word order, we interpreted the results in the framework of grammatical saliency: now/nu appears to be unmarked when the contrast lies in a salient constituent (the object) but marked with a prominent rise when a less salient constituent is contrasted (the location).

  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. Language and Cognitive Processes, 25, 1024-1043. doi:10.1080/01690960903036836.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. In D. G. Watson, M. Wagner, & E. Gibson (Eds.), Experimental and theoretical advances in prosody (pp. 1024-1043). Hove: Psychology Press.
  • Brehm, L. (2014). Speed limits and red flags: Why number agreement accidents happen. PhD Thesis, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
