Publications

  • Onnink, A. M. H., Zwiers, M. P., Hoogman, M., Mostert, J. C., Kan, C. C., Buitelaar, J., & Franke, B. (2014). Brain alterations in adult ADHD: Effects of gender, treatment and comorbid depression. European Neuropsychopharmacology, 24(3), 397-409. doi:10.1016/j.euroneuro.2013.11.011.

    Abstract

    Children with attention-deficit/hyperactivity disorder (ADHD) have smaller volumes of total brain matter and subcortical regions, but it is unclear whether these represent delayed maturation or persist into adulthood. We performed a structural MRI study in 119 adult ADHD patients and 107 controls and investigated total gray and white matter and volumes of accumbens, caudate, globus pallidus, putamen, thalamus, amygdala and hippocampus. Additionally, we investigated effects of gender, stimulant treatment and history of major depression (MDD). There was no main effect of ADHD on the volumetric measures, nor was any effect observed in a secondary voxel-based morphometry (VBM) analysis of the entire brain. However, in the volumetric analysis a significant gender by diagnosis interaction was found for caudate volume. Male patients showed reduced right caudate volume compared to male controls, and caudate volume correlated with hyperactive/impulsive symptoms. Furthermore, patients using stimulant treatment had a smaller right hippocampus volume compared to medication-naïve patients and controls. ADHD patients with previous MDD showed smaller hippocampus volume compared to ADHD patients with no MDD. While these data were obtained in a cross-sectional sample and need to be replicated in a longitudinal study, the findings suggest that developmental brain differences in ADHD largely normalize in adulthood. Reduced caudate volume in male patients may point to distinct neurobiological deficits underlying ADHD in the two genders. Smaller hippocampus volume in ADHD patients with previous MDD is consistent with neurobiological alterations observed in MDD.

  • Ortega, G. (2014). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. Sign Language and Linguistics, 17, 267-275. doi:10.1075/sll.17.2.09ort.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.

    Abstract

    Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.

    Abstract

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
  • Ozyurek, A., & Trabasso, T. (1997). Evaluation during the understanding of narratives. Discourse Processes, 23(3), 305-337. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=hlh&AN=12673020&site=ehost-live.

    Abstract

    Evaluation plays a role in the telling and understanding of narratives, in communicative interaction, emotional understanding, and in psychological well-being. This article reports a study of evaluation by describing how readers monitor the concerns of characters over the course of a narrative. The main hypothesis is that readers track characters' well-being via the expression of a character's internal states. Reader evaluations were revealed in think-aloud protocols obtained during reading of narrative texts, one sentence at a time. Five kinds of evaluative inferences were found: appraisals (good versus bad), preferences (like versus don't like), emotions (happy versus frustrated), goals (want versus don't want), and purposes (to attain or maintain X versus to prevent or avoid X). Readers evaluated all sentences. The mean rate of evaluation per sentence was 0.55. Positive and negative evaluations over the course of the story indicated that things initially went badly for characters, improved with the formulation and execution of goal plans, declined with goal failure, and improved as characters formulated new goals and succeeded. The kind of evaluation made depended upon the episodic category of the event and the event's temporal location in the story. Evaluations also served to explain or predict events. In making evaluations, readers stayed within the frame of the story and perspectives of the character or narrator. They also moved out of the narrative frame and addressed evaluations towards the experimenter in a communicative context.
  • Pacheco, A., Araújo, S., Faísca, L., de Castro, S. L., Petersson, K. M., & Reis, A. (2014). Dyslexia's heterogeneity: Cognitive profiling of Portuguese children with dyslexia. Reading and Writing, 27(9), 1529-1545. doi:10.1007/s11145-014-9504-5.

    Abstract

    Recent studies have emphasized that developmental dyslexia is a multiple-deficit disorder, in contrast to the traditional single-deficit view. In this context, cognitive profiling of children with dyslexia may be a relevant contribution to this unresolved discussion. The aim of this study was to profile 36 Portuguese children with dyslexia from the 2nd to 5th grade. Hierarchical cluster analysis was used to group participants according to their phonological awareness, rapid automatized naming, verbal short-term memory, vocabulary, and nonverbal intelligence abilities. The results suggested a two-cluster solution: a group with poorer performance on phoneme deletion and rapid automatized naming compared with the remaining variables (Cluster 1) and a group characterized by underperforming on the variables most related to phonological processing (phoneme deletion and digit span), but not on rapid automatized naming (Cluster 2). Overall, the results seem more consistent with a hybrid perspective, such as that proposed by Pennington and colleagues (2012), for understanding the heterogeneity of dyslexia. The importance of characterizing the profiles of individuals with dyslexia becomes clear within the context of constructing remediation programs that are specifically targeted and are more effective in terms of intervention outcome.

  • Pallier, C., Cutler, A., & Sebastian-Galles, N. (1997). Prosodic structure and phonetic processing: A cross-linguistic study. In Proceedings of EUROSPEECH 97 (pp. 2131-2134). Grenoble, France: ESCA.

    Abstract

    Dutch and Spanish differ in how predictable the stress pattern is as a function of the segmental content: it is correlated with syllable weight in Dutch but not in Spanish. In the present study, two experiments were run to compare the abilities of Dutch and Spanish speakers to separately process segmental and stress information. It was predicted that the Spanish speakers would have more difficulty focusing on the segments and ignoring the stress pattern than the Dutch speakers. The task was a speeded classification task on CVCV syllables, with blocks of trials in which the stress pattern could vary versus blocks in which it was fixed. First, we found interference due to stress variability in both languages, suggesting that the processing of segmental information cannot be performed independently of stress. Second, the effect was larger for Spanish than for Dutch, suggesting that the degree of interference from stress variation may be partially mitigated by the predictability of stress placement in the language.
  • Papafragou, A., & Ozturk, O. (2007). Children's acquisition of modality. In Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition North America (GALANA 2) (pp. 320-327). Somerville, Mass.: Cascadilla Press.
  • Papafragou, A. (2007). On the acquisition of modality. In T. Scheffler, & L. Mayol (Eds.), Penn Working Papers in Linguistics. Proceedings of the 30th Annual Penn Linguistics Colloquium (pp. 281-293). Department of Linguistics, University of Pennsylvania.
  • Payne, B. R., Grison, S., Gao, X., Christianson, K., Morrow, D. G., & Stine-Morrow, E. A. L. (2014). Aging and individual differences in binding during sentence understanding: Evidence from temporary and global syntactic attachment ambiguities. Cognition, 130(2), 157-173. doi:10.1016/j.cognition.2013.10.005.

    Abstract

    We report an investigation of aging and individual differences in binding information during sentence understanding. An age-continuous sample of adults (N=91), ranging from 18 to 81 years of age, read sentences in which a relative clause could be attached high to a head noun NP1, attached low to its modifying prepositional phrase NP2 (e.g., The son of the princess who scratched himself/herself in public was humiliated), or in which the attachment site of the relative clause was ultimately indeterminate (e.g., The maid of the princess who scratched herself in public was humiliated). Word-by-word reading times and comprehension (e.g., who scratched?) were measured. A series of mixed-effects models were fit to the data, revealing: (1) that, on average, NP1-attached sentences were harder to process and comprehend than NP2-attached sentences; (2) that these average effects were independently moderated by verbal working memory capacity and reading experience, with effects that were most pronounced in the oldest participants; and (3) that readers on average did not allocate extra time to resolve global ambiguities, though older adults with higher working memory span did. Findings are discussed in relation to current models of lifespan cognitive development, working memory, language experience, and the role of prosodic segmentation strategies in reading. Collectively, these data suggest that aging brings differences in sentence understanding, and these differences may depend on independent influences of verbal working memory capacity and reading experience.

  • Peeters, D., Runnqvist, E., Bertrand, D., & Grainger, J. (2014). Asymmetrical switch costs in bilingual language production induced by reading words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 284-292. doi:10.1037/a0034060.

    Abstract

    We examined language-switching effects in French–English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in 1 language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
  • Peeters, D., & Dresler, M. (2014). The scientific significance of sleep-talking. Frontiers for Young Minds, 2(9). Retrieved from http://kids.frontiersin.org/articles/24/the_scientific_significance_of_sleep_talking/.

    Abstract

    Did one of your parents, siblings, or friends ever tell you that you were talking in your sleep? Nothing to be ashamed of! A recent study found that more than half of all people have had the experience of speaking out loud while being asleep [1]. This might even be underestimated, because often people do not notice that they are sleep-talking, unless somebody wakes them up or tells them the next day. Most neuroscientists, linguists, and psychologists studying language are interested in our language production and language comprehension skills during the day. In the present article, we will explore what is known about the production of overt speech during the night. We suggest that the study of sleep-talking may be just as interesting and informative as the study of wakeful speech.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007) (Vol. IV, pp. 937-940). doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perlman, M., Clark, N., & Tanner, J. (2014). Iconicity and ape gesture. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 236-243). New Jersey: World Scientific.

    Abstract

    Iconic gestures are hypothesized to be crucial to the evolution of language. Yet the important question of whether apes produce iconic gestures is the subject of considerable debate. This paper presents the current state of research on iconicity in ape gesture. In particular, it describes some of the empirical evidence suggesting that apes produce three different kinds of iconic gestures; it compares the iconicity hypothesis to other major hypotheses of ape gesture; and finally, it offers some directions for future ape gesture research.
  • Perlman, M., & Cain, A. A. (2014). Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language. Gesture, 14(3), 320-350. doi:10.1075/gest.14.3.03per.

    Abstract

    Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1997). A dynamic role of the medial temporal lobe during retrieval of declarative memory in man. NeuroImage, 6, 1-11.

    Abstract

    Understanding the role of the medial temporal lobe (MTL) in learning and memory is an important problem in cognitive neuroscience. Memory and learning processes that depend on the function of the MTL and related diencephalic structures (e.g., the anterior and mediodorsal thalamic nuclei) are defined as declarative. We have studied the MTL activity as indicated by regional cerebral blood flow with positron emission tomography and statistical parametric mapping during recall of abstract designs in a less practiced memory state as well as in a well-practiced (well-encoded) memory state. The results showed an increased activity of the MTL bilaterally (including parahippocampal gyrus extending into hippocampus proper, as well as anterior lingual and anterior fusiform gyri) during retrieval in the less practiced memory state compared to the well-practiced memory state, indicating a dynamic role of the MTL in retrieval during the learning processes. The results also showed that the activation of the MTL decreases as the subjects learn to draw abstract designs from memory, indicating a changing role of the MTL during recall in the earlier stages of acquisition compared to the well-encoded declarative memory state.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2014). Distinct patterns of brain activity characterise lexical activation and competition in spoken word production. PLoS One, 9(2): e88674. doi:10.1371/journal.pone.0088674.

    Abstract

    According to a prominent theory of language production, concepts activate multiple associated words in memory, which enter into competition for selection. However, only a few electrophysiological studies have identified brain responses reflecting competition. Here, we report a magnetoencephalography study in which the activation of competing words was manipulated by presenting pictures (e.g., dog) with distractor words. The distractor and picture name were semantically related (cat), unrelated (pin), or identical (dog). Related distractors are stronger competitors to the picture name because they receive additional activation from the picture relative to other distractors. Picture naming times were longer with related than unrelated and identical distractors. Phase-locked and non-phase-locked activity were distinct but temporally related. Phase-locked activity in left temporal cortex, peaking at 400 ms, was larger on unrelated than related and identical trials, suggesting differential activation of alternative words by the picture-word stimuli. Non-phase-locked activity between roughly 350–650 ms (4–10 Hz) in left superior frontal gyrus was larger on related than unrelated and identical trials, suggesting differential resolution of the competition among the alternatives, as reflected in the naming times. These findings characterise distinct patterns of activity associated with lexical activation and competition, supporting the theory that words are selected by competition.
  • Piai, V., Roelofs, A., & Schriefers, H. (2014). Locus of semantic interference in picture naming: Evidence from dual-task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(1), 147-165. doi:10.1037/a0033745.

    Abstract

    Disagreement exists regarding the functional locus of semantic interference of distractor words in picture naming. This effect is a cornerstone of modern psycholinguistic models of word production, which assume that it arises in lexical response-selection. However, recent evidence from studies of dual-task performance suggests a locus in perceptual or conceptual processing, prior to lexical response-selection. In these studies, participants manually responded to a tone and named a picture while ignoring a written distractor word. The stimulus onset asynchrony (SOA) between tone and picture–word stimulus was manipulated. Semantic interference in naming latencies was present at long tone pre-exposure SOAs, but reduced or absent at short SOAs. Under the prevailing structural or strategic response-selection bottleneck and central capacity sharing models of dual-task performance, the underadditivity of the effects of SOA and stimulus type suggests that semantic interference emerges before lexical response-selection. However, in more recent studies, additive effects of SOA and stimulus type were obtained. Here, we examined the discrepancy in results between these studies in 6 experiments in which we systematically manipulated various dimensions on which these earlier studies differed, including tasks, materials, stimulus types, and SOAs. In all our experiments, additive effects of SOA and stimulus type on naming latencies were obtained. These results strongly suggest that the semantic interference effect arises after perceptual and conceptual processing, during lexical response-selection or later. We discuss several theoretical alternatives with respect to their potential to account for the discrepancy between the present results and other studies showing underadditivity.
  • Piai, V., Roelofs, A., & Maris, E. (2014). Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. Neuropsychologia, 53, 146-156. doi:10.1016/j.neuropsychologia.2013.11.014.

    Abstract

    Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1997). Stylistic variation at the “single-word” stage: Relations between maternal speech characteristics and children's vocabulary composition and usage. Child Development, 68(5), 807-819. doi:10.1111/j.1467-8624.1997.tb01963.x.

    Abstract

    In this study we test a number of different claims about the nature of stylistic variation at the “single-word” stage by examining the relation between variation in early vocabulary composition, variation in early language use, and variation in the structural and functional properties of mothers' child-directed speech. Maternal-report and observational data were collected for 26 children at 10, 50, and 100 words. These were then correlated with a variety of different measures of maternal speech at 10 words. The results show substantial variation in the percentage of common nouns and unanalyzed phrases in children's vocabularies, and significant relations between this variation and the way in which language is used by the child. They also reveal significant relations between the way in which mothers use language at 10 words and the way in which their children use language at 50 words, and between certain formal properties of mothers' speech at 10 words and the percentage of common nouns and unanalyzed phrases in children's early vocabularies. However, most of these relations disappear when an attempt is made to control for possible effects of the child on the mother at Time 1. The exception is a significant negative correlation between mothers' tendency to produce speech that illustrates word boundaries and the percentage of unanalyzed phrases at 50 and 100 words. This suggests that mothers whose speech provides the child with information about where new words begin and end tend to have children with few unanalyzed phrases in their early vocabularies.
  • Pinget, A.-F., Bosker, H. R., Quené, H., & de Jong, N. H. (2014). Native speakers' perceptions of fluency and accent in L2 speech. Language Testing, 31, 349-365. doi:10.1177/0265532214526177.

    Abstract

    Oral fluency and foreign accent distinguish L2 from L1 speech production. In language testing practices, both fluency and accent are usually assessed by raters. This study investigates what exactly native raters of fluency and accent take into account when judging L2. Our aim is to explore the relationship between objectively measured temporal, segmental and suprasegmental properties of speech on the one hand, and fluency and accent as rated by native raters on the other hand. For 90 speech fragments from Turkish and English L2 learners of Dutch, several acoustic measures of fluency and accent were calculated. In Experiment 1, 20 native speakers of Dutch rated the L2 Dutch samples on fluency. In Experiment 2, 20 different untrained native speakers of Dutch judged the L2 Dutch samples on accentedness. Regression analyses revealed that acoustic measures of fluency were good predictors of fluency ratings. Secondly, segmental and suprasegmental measures of accent could predict some variance of accent ratings. Thirdly, perceived fluency and perceived accent were only weakly related. In conclusion, this study shows that fluency and perceived foreign accent can be judged as separate constructs.
  • Pippucci, T., Magi, A., Gialluisi, A., & Romeo, G. (2014). Detection of runs of homozygosity from whole exome sequencing data: State of the art and perspectives for clinical, population and epidemiological studies. Human Heredity, 77, 63-72. doi:10.1159/000362412.

    Abstract

    Runs of homozygosity (ROH) are sizeable stretches of homozygous genotypes at consecutive polymorphic DNA marker positions, traditionally captured by means of genome-wide single nucleotide polymorphism (SNP) genotyping. With the advent of next-generation sequencing (NGS) technologies, a number of methods initially devised for the analysis of SNP array data (those based on sliding-window algorithms such as PLINK or GERMLINE and graphical tools like HomozygosityMapper) or specifically conceived for NGS data have been adopted for the detection of ROH from whole exome sequencing (WES) data. In the latter group, algorithms for both graphical representation (AgileVariantMapper, HomSI) and computational detection (H3M2) of WES-derived ROH have been proposed. Here we examine these different approaches and discuss available strategies to implement ROH detection in WES analysis. Among sliding-window algorithms, PLINK appears to be well-suited for the detection of ROH, especially of the long ones. As a method specifically tailored for WES data, H3M2 outperforms existing algorithms especially on short and medium ROH. We conclude that, notwithstanding the irregular distribution of exons, WES data can be used with some approximation for unbiased genome-wide analysis of ROH features, with promising applications to homozygosity mapping of disease genes, comparative analysis of populations and epidemiological studies based on consanguinity.
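    The sliding-window idea behind tools such as PLINK, mentioned in the abstract above, can be illustrated with a minimal sketch. This is a simplified, hypothetical illustration assuming genotype calls encoded as 0 (homozygous) or 1 (heterozygous) per marker; it is not the actual PLINK implementation, which additionally handles missing calls, physical distances, and allele frequencies.

```python
def find_roh(genotypes, window=50, max_hets=1, min_run=100):
    """Detect runs of homozygosity (ROH) in a sequence of genotype calls.

    genotypes: sequence of 0 (homozygous) / 1 (heterozygous) calls at
    consecutive marker positions. Simplified sketch of the sliding-window
    approach; real tools also weigh missing data and physical distance.
    Returns a list of (start, end) index pairs (end exclusive).
    """
    n = len(genotypes)
    # Flag each position that lies inside at least one mostly-homozygous window.
    flagged = [False] * n
    for start in range(n - window + 1):
        if sum(genotypes[start:start + window]) <= max_hets:
            for i in range(start, start + window):
                flagged[i] = True
    # Collect maximal flagged stretches that are long enough to count as ROH.
    runs, run_start = [], None
    for i, in_window in enumerate(flagged):
        if in_window and run_start is None:
            run_start = i
        elif not in_window and run_start is not None:
            if i - run_start >= min_run:
                runs.append((run_start, i))
            run_start = None
    if run_start is not None and n - run_start >= min_run:
        runs.append((run_start, n))
    return runs
```

For example, a chromosome segment of 200 homozygous calls followed by 200 heterozygous calls yields a single long ROH covering the homozygous stretch.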
  • Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.

    Abstract

    This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
  • Poellmann, K., Mitterer, H., & McQueen, J. M. (2014). Use what you can: Storage, abstraction processes and perceptual adjustments help listeners recognize reduced forms. Frontiers in Psychology, 5: 437. doi:10.3389/fpsyg.2014.00437.

    Abstract

    Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., "minderij" instead of "binderij", 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., "p'raat" instead of "paraat", 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
  • Poletiek, F. H. (1997). De wet 'bijzondere opnemingen in psychiatrische ziekenhuizen' aan de cijfers getoetst [The 'special admissions to psychiatric hospitals' act tested against the figures]. Maandblad voor Geestelijke Volksgezondheid, 4, 349-361.
  • Poletiek, F. H. (in preparation). Inside the juror: The psychology of juror decision-making [Review of De geest van de jury (1997)].
  • St Pourcain, B., Cents, R. A., Whitehouse, A. J., Haworth, C. M., Davis, O. S., O’Reilly, P. F., Roulstone, S., Wren, Y., Ang, Q. W., Velders, F. P., Evans, D. M., Kemp, J. P., Warrington, N. M., Miller, L., Timpson, N. J., Ring, S. M., Verhulst, F. C., Hofman, A., Rivadeneira, F., Meaburn, E. L., Price, T. S., Dale, P. S., Pillas, D., Yliherva, A., Rodriguez, A., Golding, J., Jaddoe, V. W., Jarvelin, M.-R., Plomin, R., Pennell, C. E., Tiemeier, H., & Davey Smith, G. (2014). Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 5: 4831. doi:10.1038/ncomms5831.
  • St Pourcain, B., Skuse, D. H., Mandy, W. P., Wang, K., Hakonarson, H., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., & Smith, G. D. (2014). Variability in the common genetic architecture of social-communication spectrum phenotypes during childhood and adolescence. Molecular Autism, 5: 18. doi:10.1186/2040-2392-5-18.

    Abstract

    Background Social-communication abilities are heritable traits, and their impairments overlap with the autism continuum. To characterise the genetic architecture of social-communication difficulties developmentally and identify genetic links with the autistic dimension, we conducted a genome-wide screen of social-communication problems at multiple time-points during childhood and adolescence. Methods Social-communication difficulties were ascertained at ages 8, 11, 14 and 17 years in a UK population-based birth cohort (Avon Longitudinal Study of Parents and Children; N ≤ 5,628) using mother-reported Social Communication Disorder Checklist scores. Genome-wide Complex Trait Analysis (GCTA) was conducted for all phenotypes. The time-points with the highest GCTA heritability were subsequently analysed for single SNP association genome-wide. Type I error in the presence of measurement relatedness and the likelihood of observing SNP signals near known autism susceptibility loci (co-location) were assessed via large-scale, genome-wide permutations. Association signals (P ≤ 10−5) were also followed up in Autism Genetic Resource Exchange pedigrees (N = 793) and the Autism Case Control cohort (Ncases/Ncontrols = 1,204/6,491). Results GCTA heritability was strongest in childhood (h2(8 years) = 0.24) and especially in later adolescence (h2(17 years) = 0.45), with a marked drop during early to middle adolescence (h2(11 years) = 0.16 and h2(14 years) = 0.08). Genome-wide screens at ages 8 and 17 years identified for the latter time-point evidence for association at 3p22.2 near SCN11A (rs4453791, P = 9.3 × 10−9; genome-wide empirical P = 0.011) and suggestive evidence at 20p12.3 at PLCB1 (rs3761168, P = 7.9 × 10−8; genome-wide empirical P = 0.085). None of these signals contributed to risk for autism. However, the co-location of population-based signals and autism susceptibility loci harbouring rare mutations, such as PLCB1, is unlikely to be due to chance (genome-wide empirical P(co-location) = 0.007). Conclusions Our findings suggest that measurable common genetic effects for social-communication difficulties vary developmentally and that these changes may affect detectable overlaps with the autism spectrum.

    Additional information

    13229_2013_113_MOESM1_ESM.docx
  • Pouw, W., Van Gog, T., & Paas, F. (2014). An embedded and embodied cognition review of instructional manipulatives. Educational Psychology Review, 26, 51-72. doi:10.1007/s10648-014-9255-5.

    Abstract

    Recent literature on learning with instructional manipulatives seems to call for a moderate view on the effects of perceptual and interactive richness of instructional manipulatives on learning. This “moderate view” holds that manipulatives’ perceptual and interactive richness may compromise learning in two ways: (1) by imposing a very high cognitive load on the learner, and (2) by hindering drawing of symbolic inferences that are supposed to play a key role in transfer (i.e., application of knowledge to new situations in the absence of instructional manipulatives). This paper presents a contrasting view. Drawing on recent insights from Embedded Embodied perspectives on cognition, it is argued that (1) perceptual and interactive richness may provide opportunities for alleviating cognitive load (Embedded Cognition), and (2) transfer of learning is not reliant on decontextualized knowledge but may draw on previous sensorimotor experiences of the kind afforded by perceptual and interactive richness of manipulatives (Embodied Cognition). By negotiating the Embedded Embodied Cognition view with the moderate view, implications for research are derived.
  • Pouw, W., De Nooijer, J. A., Van Gog, T., Zwaan, R. A., & Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Frontiers in Psychology, 5: 359. doi:10.3389/fpsyg.2014.00359.

    Abstract

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask the question: how can gestures support internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures are lacking in explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is a considerable amount of overlap between the way the human cognitive system has been found to use its environment, and how gestures are used during cognitive processes. Lastly, we provide several suggestions of how to investigate the embedded/extended perspective of the cognitive function of gestures.
  • Presciuttini, S., Gialluisi, A., Barbuti, S., Curcio, M., Scatena, F., Carli, G., & Santarcangelo, E. L. (2014). Hypnotizability and Catechol-O-Methyltransferase (COMT) polymorphysms in Italians. Frontiers in Human Neuroscience, 7: 929. doi:10.3389/fnhum.2013.00929.

    Abstract

    Higher brain dopamine content depending on lower activity of Catechol-O-Methyltransferase (COMT) in subjects with high hypnotizability scores (highs) has been considered responsible for their attentional characteristics. However, the results of the previous genetic studies on association between hypnotizability and the COMT single nucleotide polymorphism (SNP) rs4680 (Val158Met) were inconsistent. Here, we used a selective genotyping approach to re-evaluate the association between hypnotizability and COMT in the context of a two-SNP haplotype analysis, considering not only the Val158Met polymorphism, but also the closely located rs4818 SNP. An Italian sample of 53 highs, 49 low hypnotizable subjects (lows), and 57 controls, were genotyped for a segment of 805 bp of the COMT gene, including Val158Met and the closely located rs4818 SNP. Our selective genotyping approach had 97.1% power to detect the previously reported strongest association at the significance level of 5%. We found no evidence of association at the SNP, haplotype, and diplotype levels. Thus, our results challenge the dopamine-based theory of hypnosis and indirectly support recent neuropsychological and neurophysiological findings reporting the lack of any association between hypnotizability and focused attention abilities.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of internal representation in posterior midline structures of the first event, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Rahmany, R., Marefat, H., & Kidd, E. (2014). Resumptive elements aid comprehension of object relative clauses: evidence from Persian. Journal of Child Language, 41(4), 937-48. doi:10.1017/s0305000913000147.
  • Rapold, C. J. (2007). From demonstratives to verb agreement in Benchnon: A diachronic perspective. In A. Amha, M. Mous, & G. Savà (Eds.), Omotic and Cushitic studies: Papers from the Fourth Cushitic Omotic Conference, Leiden, 10-12 April 2003 (pp. 69-88). Cologne: Rüdiger Köppe.
  • Ravignani, A., Bowling, D. L., & Fitch, W. T. (2014). Chorusing, synchrony, and the evolutionary functions of rhythm. Frontiers in Psychology, 5: 1118. doi:10.3389/fpsyg.2014.01118.

    Abstract

    A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language.
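    The interdependence between individual call timing and group synchrony described in the abstract above can be illustrated with a generic Kuramoto-style phase-coupling sketch. This is a hypothetical stand-in under assumed parameters (coupling strength, frequency spread), not the specific agent model presented in the paper.

```python
import math
import random

def simulate_chorus(n_agents=10, steps=2000, dt=0.01, coupling=2.0, seed=1):
    """Minimal phase-coupling sketch of chorusing agents.

    Each agent is an oscillator whose call timing (phase) is nudged toward
    the group's mean phase; with sufficient coupling the chorus synchronizes.
    Returns the final order parameter r in [0, 1] (1 = perfect synchrony).
    Generic Kuramoto mean-field dynamics, not the authors' actual model.
    """
    rng = random.Random(seed)
    # Natural calling rates: roughly 1 Hz with small individual variation.
    freqs = [2 * math.pi * (1.0 + 0.05 * rng.gauss(0, 1)) for _ in range(n_agents)]
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_agents)]
    for _ in range(steps):
        mean_x = sum(math.cos(p) for p in phases) / n_agents
        mean_y = sum(math.sin(p) for p in phases) / n_agents
        mean_phase = math.atan2(mean_y, mean_x)
        r = math.hypot(mean_x, mean_y)  # current group coherence
        # Each agent advances at its own rate, pulled toward the group phase.
        phases = [
            (p + dt * (w + coupling * r * math.sin(mean_phase - p))) % (2 * math.pi)
            for p, w in zip(phases, freqs)
        ]
    mean_x = sum(math.cos(p) for p in phases) / n_agents
    mean_y = sum(math.sin(p) for p in phases) / n_agents
    return math.hypot(mean_x, mean_y)
```

With the assumed coupling strength the group settles into near-synchronous calling, while with zero coupling the agents drift at their individual rates.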
  • Ravignani, A. (2014). Chronometry for the chorusing herd: Hamilton's legacy on context-dependent acoustic signalling—a comment on Herbers (2013). Biology Letters, 10(1): 20131018. doi:10.1098/rsbl.2013.1018.
  • Ravignani, A., Bowling, D., & Kirby, S. (2014). The psychology of biological clocks: A new framework for the evolution of rhythm. In E. A. Cartmill, S. G. Roberts, & H. Lyn (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 262-269). Singapore: World Scientific.
  • Ravignani, A., Martins, M., & Fitch, W. T. (2014). Vocal learning, prosody, and basal ganglia: Don't underestimate their complexity. Behavioral and Brain Sciences, 37(6), 570-571. doi:10.1017/S0140525X13004184.

    Abstract

    In response to: Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective

    Abstract:
    Ackermann et al.'s arguments in the target article need sharpening and rethinking at both mechanistic and evolutionary levels. First, the authors' evolutionary arguments are inconsistent with recent evidence concerning nonhuman animal rhythmic abilities. Second, prosodic intonation conveys much more complex linguistic information than mere emotional expression. Finally, human adults' basal ganglia have a considerably wider role in speech modulation than Ackermann et al. surmise.
  • Redmann, A., FitzPatrick, I., Hellwig, F. M., & Indefrey, P. (2014). The use of conceptual components in language production: an ERP study. Frontiers in Psychology, 5: 363. doi:10.3389/fpsyg.2014.00363.

    Abstract

    According to frame-theory, concepts can be represented as structured frames that contain conceptual attributes (e.g., "color") and their values (e.g., "red"). A particular color value can be seen as a core conceptual component for (high color-diagnostic; HCD) objects (e.g., bananas) which are strongly associated with a typical color, but less so for (low color-diagnostic; LCD) objects (e.g., bicycles) that exist in many different colors. To investigate whether the availability of a core conceptual component (color) affects lexical access in language production, we conducted two experiments on the naming of visually presented HCD and LCD objects. Experiment 1 showed that, when naming latencies were matched for colored HCD and LCD objects, achromatic HCD objects were named more slowly than achromatic LCD objects. In Experiment 2 we recorded ERPs while participants performed a picture-naming task, in which achromatic target pictures were either preceded by an appropriately colored box (primed condition) or a black and white checkerboard (unprimed condition). We focused on the P2 component, which has been shown to reflect difficulty of lexical access in language production. Results showed that HCD resulted in slower object-naming and a more pronounced P2. Priming also yielded a more positive P2 but did not result in an RT difference. ERP waveforms on the P1, P2 and N300 components showed a priming by color-diagnosticity interaction, the effect of color priming being stronger for HCD objects than for LCD objects. The effect of color-diagnosticity on the P2 component suggests that the slower naming of achromatic HCD objects is (at least in part) due to more difficult lexical retrieval. Hence, the color attribute seems to affect lexical retrieval in HCD words. The interaction between priming and color-diagnosticity indicates that priming with a feature hinders lexical access, especially if the feature is a core feature of the target object.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent), we could examine to what extent subjects focused on the form rather than the meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition than in the neutral and incongruent word conditions, suggesting that performance on phonological word-length comparisons depends on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Ringersma, J., & Kemps-Snijders, M. (2007). Creating multimedia dictionaries of endangered languages using LEXUS. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 65-68). Baixas, France: ISCA - International Speech Communication Association.

    Abstract

    This paper reports on the development of a flexible web-based lexicon tool, LEXUS. LEXUS is targeted at linguists involved in language documentation (of endangered languages). It allows the creation of lexica within the structure of the proposed ISO LMF standard and uses the proposed concept naming conventions from the ISO data categories, thus enabling interoperability, search and merging. LEXUS also offers the possibility to visualize language, since it provides functionality to include audio, video and still images in the lexicon. With LEXUS it is possible to create semantic network knowledge bases, using typed relations. The LEXUS tool is free to use. Index Terms: lexicon, web-based application, endangered languages, language documentation.
  • Roberts, S. G., Dediu, D., & Levinson, S. C. (2014). Detecting differences between the languages of Neandertals and modern humans. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 501-502). Singapore: World Scientific.

    Abstract

    Dediu and Levinson (2013) argue that Neandertals had essentially modern language and speech, and that they were in genetic contact with the ancestors of modern humans during our dispersal out of Africa. This raises the possibility of cultural and linguistic contact between the two human lineages. If such contact did occur, then it might have influenced the cultural evolution of the languages. Since the genetic traces of contact with Neandertals are limited to the populations outside of Africa, Dediu & Levinson predict that there may be structural differences between the present-day languages derived from languages in contact with Neandertals, and those derived from languages that were not influenced by such contact. Since the signature of such deep contact might reside in patterns of features, they suggested that machine learning methods may be able to detect these differences. This paper attempts to test this hypothesis and to estimate particular linguistic features that are potential candidates for carrying a signature of Neandertal languages.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi: 10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roberts, S. G., & De Vos, C. (2014). Gene-culture coevolution of a linguistic system in two modalities. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 23-27).

    Abstract

    Complex communication can take place in a range of modalities, such as the auditory, visual, and tactile modalities. In a very general way, the modality that individuals use is constrained by their biological biases (humans cannot use magnetic fields directly to communicate with each other). The majority of natural languages have a large audible component. However, since humans can learn sign languages just as easily, it is not clear to what extent the prevalence of spoken languages is due to biological biases, the social environment or cultural inheritance. This paper suggests that we can explore the relative contribution of these factors by modelling the spontaneous emergence of sign languages that are shared by the deaf and hearing members of relatively isolated communities. Such shared signing communities have arisen in enclaves around the world and may provide useful insights by demonstrating how languages evolve when the deaf proportion of a community's members has a strong bias towards the visual language modality. In this paper we describe a model of cultural evolution in two modalities, combining aspects that are thought to impact the emergence of sign languages in a more general evolutionary framework. The model can be used to explore hypotheses about how sign languages emerge.
  • Roberts, S. G., Dediu, D., & Moisik, S. R. (2014). How to speak Neanderthal. New Scientist, 222(2969), 40-41. doi:10.1016/S0262-4079(14)60970-2.
  • Roberts, S. G., Thompson, B., & Smith, K. (2014). Social interaction influences the evolution of cognitive biases for language. In E. A. Cartmill, S. G. Roberts, & H. Lyn (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 278-285). Singapore: World Scientific. doi:10.1142/9789814603638_0036.

    Abstract

    Models of cultural evolution demonstrate that the link between individual biases and population-level phenomena can be obscured by the process of cultural transmission (Kirby, Dowman, & Griffiths, 2007). However, recent extensions to these models predict that linguistic diversity will not emerge and that learners should evolve to expect little linguistic variation in their input (Smith & Thompson, 2012). We demonstrate that this result derives from assumptions that privilege certain kinds of social interaction by exploring a range of alternative social models. We find several evolutionary routes to linguistic diversity, and show that social interaction not only influences the kinds of biases which could evolve to support language, but also the effects those biases have on a linguistic system. Given the same starting situation, the evolution of biases for language learning and the distribution of linguistic variation are affected by the kinds of social interaction that a population privileges.
  • Rodenas-Cuadrado, P., Ho, J., & Vernes, S. C. (2014). Shining a light on CNTNAP2: Complex functions to complex disorders. European Journal of Human Genetics, 22(2), 171-178. doi:10.1038/ejhg.2013.100.

    Abstract

    The genetic basis of complex neurological disorders involving language is poorly understood, partly due to the multiple additive genetic risk factors that are thought to be responsible. Furthermore, these conditions are often syndromic in that they have a range of endophenotypes that may be associated with the disorder and that may be present in different combinations in patients. However, the emergence of individual genes implicated across multiple disorders has suggested that they might share similar underlying genetic mechanisms. The CNTNAP2 gene is an excellent example of this, as it has recently been implicated in a broad range of phenotypes including autism spectrum disorder (ASD), schizophrenia, intellectual disability, dyslexia and language impairment. This review considers the evidence implicating CNTNAP2 in these conditions, the genetic risk factors and mutations that have been identified in patient and population studies, and how these relate to patient phenotypes. The role of CNTNAP2 is examined in the context of larger neurogenetic networks during development and disorder, given what is known regarding the regulation and function of this gene. Understanding the role of CNTNAP2 in diverse neurological disorders will further our understanding of how combinations of individual genetic risk factors can contribute to complex conditions.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name-retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name-retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name-retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In Roelofs (Roelofs, A., 1992a, Cognition, 42, 107-142; Roelofs, A., 1993, Cognition, 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification (syllables are constructed on the fly rather than stored with lexical items), (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
  • Rojas-Berscia, L. M. (2014). Towards an ontological theory of language: Radical minimalism, memetic linguistics and linguistic engineering, prolegomena. Ianua: Revista Philologica Romanica, 14(2), 69-81.

    Abstract

    In contrast to what has happened in other sciences, the question of what the object of study of linguistics as an autonomous discipline is has not yet been resolved. Whether positing external explanations of language as a system (Saussure, 1916), the existence of an innate mental language capacity or UG (Chomsky, 1965, 1981, 1995), or the cognitive complexity of the mental language capacity and the acquisition of languages in use (Langacker, 1987, 1991, 2008; Croft & Cruse, 2004; Evans & Levinson, 2009), most, if not all, theoretical approaches have provided explanations that have somehow isolated our discipline from developments in other major sciences, such as physics and evolutionary biology. In the present article I will present some of the basic issues in the current debate in the discipline, in order to identify some problems with modern assumptions about language. Furthermore, a new proposal on how to approach linguistic phenomena will be given, concerning what I call «the main three» basic problems our discipline will ultimately have to face. Finally, some preliminary ideas on a new paradigm of linguistics that tries to answer these three basic problems will be presented, based mainly on the recently developed formal theory called Radical Minimalism (Krivochen, 2011a, 2011b) and on what I dub Memetic Linguistics and Linguistic Engineering.
  • Roorda, D., Kalkman, G., Naaijer, M., & Van Cranenburgh, A. (2014). LAF-Fabric: A data analysis tool for linguistic annotation framework with an application to the Hebrew Bible. Computational linguistics in the Netherlands, 4, 105-120.

    Abstract

    The Linguistic Annotation Framework (LAF) provides a general, extensible stand-off markup system for corpora. This paper discusses LAF-Fabric, a new tool to analyse LAF resources in general, with an extension to process the Hebrew Bible in particular. We first walk through the history of the Hebrew Bible as a text database in decennium-wide steps. Then we describe how LAF-Fabric may serve as an analysis tool for this corpus. Finally, we describe three analytic projects/workflows that benefit from the new LAF representation: 1) the study of linguistic variation: extract co-occurrence data of common nouns between the books of the Bible (Martijn Naaijer); 2) the study of the grammar of Hebrew poetry in the Psalms: extract clause typology (Gino Kalkman); 3) construction of a parser of classical Hebrew by Data Oriented Parsing: generate tree structures from the database (Andreas van Cranenburgh).
  • Roswandowitz, C., Mathias, S. R., Hintz, F., Kreitewolf, J., Schelinski, S., & von Kriegstein, K. (2014). Two cases of selective developmental voice-recognition impairments. Current Biology, 24(19), 2348-2353. doi:10.1016/j.cub.2014.08.048.

    Abstract

    Recognizing other individuals is an essential skill in humans and in other species [1, 2 and 3]. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces [4]. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices [5], is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.

  • Rowbotham, S., Wardy, A. J., Lloyd, D. M., Wearden, A., & Holler, J. (2014). Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures. PLoS One, 9(10): e110779. doi:10.1371/journal.pone.0110779.

    Abstract

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
  • Rowbotham, S., Holler, J., Lloyd, D., & Wearden, A. (2014). Handling pain: The semantic interplay of speech and co-speech hand gestures in the description of pain sensations. Speech Communication, 57, 244-256. doi:10.1016/j.specom.2013.04.002.

    Abstract

    Pain is a private and subjective experience about which effective communication is vital, particularly in medical settings. Speakers often represent information about pain sensation in both speech and co-speech hand gestures simultaneously, but it is not known whether gestures merely replicate spoken information or complement it in some way. We examined the representational contribution of gestures in a range of consecutive analyses. Firstly, we found that 78% of speech units containing pain sensation were accompanied by gestures, with 53% of these gestures representing pain sensation. Secondly, in 43% of these instances, gestures represented pain sensation information that was not contained in speech, contributing additional, complementary information to the pain sensation message. Finally, when applying a specificity analysis, we found that, in contrast with research in different domains of talk, gestures did not make the pain sensation information in speech more specific. Rather, they complemented the verbal pain message by representing different aspects of pain sensation, contributing to a fuller representation of pain sensation than speech alone. These findings highlight the importance of gestures in communicating about pain sensation and suggest that this modality provides additional information to supplement and clarify the often ambiguous verbal pain message.

  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van de Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).

    Abstract

    In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadow on speech accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech it is supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing from other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing, which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing.
    Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn taking disappears completely. The implications for machine-generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) are a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when transforming a thought into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer.
  • Sadakata, M., & McQueen, J. M. (2014). Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training. Frontiers in Psychology, 5: 1318. doi:10.3389/fpsyg.2014.01318.

    Abstract

    Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals’ aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a five-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by 3 speakers) and high (similar to medium but with 5 speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals’ perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals’ aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely-matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals’ aptitude in speech perception but also on the nature of the categories being acquired.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sanchis-Trilles, G., Alabau, V., Buck, C., Carl, M., Casacuberta, F., García Martínez, M., Germann, U., González Rubio, J., Hill, R. L., Koehn, P., Leiva, L. A., Mesa-Lao, B., Ortiz Martínez, D., Saint-Amand, H., Tsoukala, C., & Vidal, E. (2014). Interactive translation prediction versus conventional post-editing in practice: a study with the CasMaCat workbench. Machine Translation, 28(3-4), 217-235. doi:10.1007/s10590-014-9157-9.

    Abstract

We conducted a field trial in computer-assisted professional translation to compare interactive translation prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translation hypothesis that is then edited by a professional (hence “post-editing”), ITP constantly updates the translation hypothesis in real time in response to user edits. Our study involved nine professional translators and four reviewers working with the web-based CasMaCat workbench. Various new interactive features aiming to assist the post-editor/translator were also tested in this trial. Our results show that even with little training, ITP can be as productive as conventional PE in terms of the total time required to produce the final translation. Moreover, translation editors working with ITP require fewer keystrokes to arrive at the final version of their translation.

  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.

    Abstract

The difficulty of the task of segmenting a speech signal into its words is immediately clear when listening to a foreign language: because the words of that language are unknown, the signal is much harder to segment. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We applied an automatic algorithm to the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes.
  • Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.

    Abstract

    Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that, in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Schertz, J., & Ernestus, M. (2014). Variability in the pronunciation of non-native English the: Effects of frequency and disfluencies. Corpus Linguistics and Linguistic Theory, 10, 329-345. doi:10.1515/cllt-2014-0024.

    Abstract

This study examines how lexical frequency and planning problems can predict phonetic variability in the function word ‘the’ in conversational speech produced by non-native speakers of English. We examined 3180 tokens of ‘the’ drawn from English conversations between native speakers of Czech or Norwegian. Using regression models, we investigated the effect of following word frequency and disfluencies on three phonetic parameters: vowel duration, vowel quality, and consonant quality. Overall, the non-native speakers showed variation that is very similar to the variation displayed by native speakers of English. Like native speakers, Czech speakers showed an effect of frequency on vowel durations, which were shorter in more frequent word sequences. Both groups of speakers showed an effect of frequency on consonant quality: the substitution of another consonant for /ð/ occurred more often in the context of more frequent words. The speakers in this study also showed a native-like allophonic distinction in vowel quality, in which /ði/ occurs more often before vowels and /ðə/ before consonants. Vowel durations were longer in the presence of following disfluencies, again mirroring patterns in native speakers, and the consonant quality was more likely to be the target /ð/ before disfluencies, as opposed to a different consonant. The fact that non-native speakers show native-like sensitivity to lexical frequency and disfluencies suggests that these effects are consequences of a general, non-language-specific production mechanism governing language planning. On the other hand, the non-native speakers in this study did not show native-like patterns of vowel quality in the presence of disfluencies, suggesting that the pattern attested in native speakers of English may result from language-specific processes separate from the general production mechanisms.
  • Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.

    Abstract

Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models, i.e., by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, and this can have detrimental effects for both teacher and student. In a past study we analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we have devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability.
  • Schijven, D., Sousa, V. C., Roelofs, J., Olivier, B., & Olivier, J. D. A. (2014). Serotonin 1A receptors and sexual behavior in a genetic model of depression. Pharmacology, Biochemistry and Behavior, 121, 82-87. doi:10.1016/j.pbb.2013.12.012.

    Abstract

The Flinders Sensitive Line (FSL) is a rat strain that displays distinct behavioral and neurochemical features of major depression. Chronic selective serotonin reuptake inhibitors (SSRIs) are able to reverse these symptoms in FSL rats. It is well known that several abnormalities in the serotonergic system have been found in FSL rats, including increased 5-HT brain tissue levels and reduced 5-HT synthesis. SSRIs are known to exert (part of) their effects by desensitization of the 5-HT1A receptor, and FSL rats appear to have lower 5-HT1A receptor densities compared with Flinders Resistant Line (FRL) rats. We therefore studied the sensitivity of this receptor on the sexual behavior performance in both FRL and FSL rats. First, basal sexual performance was studied after saline treatment, followed by treatment with two different doses of the 5-HT1A receptor agonist ±8-OH-DPAT. Finally, we measured the effect of a 5-HT1A receptor antagonist to check for specificity of the 5-HT1A receptor activation. Our results show that FSL rats have higher ejaculation frequencies compared with FRL rats, which does not fit with a more depressive-like phenotype. Moreover, FRL rats are more sensitive to effects of ±8-OH-DPAT upon EL and IF than FSL rats. The blunted response of FSL rats to the effects of ±8-OH-DPAT may be due to lower densities of 5-HT1A receptors.
  • Schiller, N. O., Van Lieshout, P. H. H. M., Meyer, A. S., & Levelt, W. J. M. (1997). Is the syllable an articulatory unit in speech production? Evidence from an Emma study. In P. Wille (Ed.), Fortschritte der Akustik: Plenarvorträge und Fachbeiträge der 23. Deutschen Jahrestagung für Akustik (DAGA 97) (pp. 605-606). Oldenburg: DEGA.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['lɛtər] 'id.', where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2014). Age, hearing loss and the perception of affective utterances in conversational speech. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 1929-1933).

    Abstract

    This study investigates whether age and/or hearing loss influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech fragments. Specifically, this study focuses on the relationship between participants' ratings of affective speech and acoustic parameters known to be associated with arousal and valence (mean F0, intensity, and articulation rate). Ten normal-hearing younger and ten older adults with varying hearing loss were tested on two rating tasks. Stimuli consisted of short sentences taken from a corpus of conversational affective speech. In both rating tasks, participants estimated the value of the emotion dimension at hand using a 5-point scale. For arousal, higher intensity was generally associated with higher arousal in both age groups. Compared to younger participants, older participants rated the utterances as less aroused, and showed a smaller effect of intensity on their arousal ratings. For valence, higher mean F0 was associated with more negative ratings in both age groups. Generally, age group differences in rating affective utterances may not relate to age group differences in hearing loss, but rather to other differences between the age groups, as older participants' rating patterns were not associated with their individual hearing loss.
  • Schoot, L., Menenti, L., Hagoort, P., & Segaert, K. (2014). A little more conversation - The influence of communicative context on syntactic priming in brain and behavior. Frontiers in Psychology, 5: 208. doi:10.3389/fpsyg.2014.00208.

    Abstract

    We report on an fMRI syntactic priming experiment in which we measure brain activity for participants who communicate with another participant outside the scanner. We investigated whether syntactic processing during overt language production and comprehension is influenced by having a (shared) goal to communicate. Although theory suggests this is true, the nature of this influence remains unclear. Two hypotheses are tested: i. syntactic priming effects (fMRI and RT) are stronger for participants in the communicative context than for participants doing the same experiment in a non-communicative context, and ii. syntactic priming magnitude (RT) is correlated with the syntactic priming magnitude of the speaker’s communicative partner. Results showed that across conditions, participants were faster to produce sentences with repeated syntax, relative to novel syntax. This behavioral result converged with the fMRI data: we found repetition suppression effects in the left insula extending into left inferior frontal gyrus (BA 47/45), left middle temporal gyrus (BA 21), left inferior parietal cortex (BA 40), left precentral gyrus (BA 6), bilateral precuneus (BA 7), bilateral supplementary motor cortex (BA 32/8) and right insula (BA 47). We did not find support for the first hypothesis: having a communicative intention does not increase the magnitude of syntactic priming effects (either in the brain or in behavior) per se. We did find support for the second hypothesis: if speaker A is strongly/weakly primed by speaker B, then speaker B is primed by speaker A to a similar extent. We conclude that syntactic processing is influenced by being in a communicative context, and that the nature of this influence is bi-directional: speakers are influenced by each other.
  • Schreiweis, C., Bornschein, U., Burguière, E., Kerimoglu, C., Schreiter, S., Dannemann, M., Goyal, S., Rea, E., French, C. A., Puliyadi, R., Groszer, M., Fisher, S. E., Mundry, R., Winter, C., Hevers, W., Pääbo, S., Enard, W., & Graybiel, A. M. (2014). Humanized Foxp2 accelerates learning by enhancing transitions from declarative to procedural performance. Proceedings of the National Academy of Sciences of the United States of America, 111, 14253-14258. doi:10.1073/pnas.1414542111.

    Abstract

    The acquisition of language and speech is uniquely human, but how genetic changes might have adapted the nervous system to this capacity is not well understood. Two human-specific amino acid substitutions in the transcription factor forkhead box P2 (FOXP2) are outstanding mechanistic candidates, as they could have been positively selected during human evolution and as FOXP2 is the sole gene to date firmly linked to speech and language development. When these two substitutions are introduced into the endogenous Foxp2 gene of mice (Foxp2hum), cortico-basal ganglia circuits are specifically affected. Here we demonstrate marked effects of this humanization of Foxp2 on learning and striatal neuroplasticity. Foxp2hum/hum mice learn stimulus–response associations faster than their WT littermates in situations in which declarative (i.e., place-based) and procedural (i.e., response-based) forms of learning could compete during transitions toward proceduralization of action sequences. Striatal districts known to be differently related to these two modes of learning are affected differently in the Foxp2hum/hum mice, as judged by measures of dopamine levels, gene expression patterns, and synaptic plasticity, including an NMDA receptor-dependent form of long-term depression. These findings raise the possibility that the humanized Foxp2 phenotype reflects a different tuning of corticostriatal systems involved in declarative and procedural learning, a capacity potentially contributing to adapting the human brain for speech and language acquisition.

  • Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2007). An empirical characterization of response types in German association norms. In Proceedings of the GLDV workshop on lexical-semantic and ontological resources.
  • Scott, D. R., & Cutler, A. (1982). Segmental cues to syntactic structure. In Proceedings of the Institute of Acoustics 'Spectral Analysis and its Use in Underwater Acoustics' (pp. E3.1-E3.4). London: Institute of Acoustics.
  • Segaert, K., Weber, K., Cladder-Micus, M., & Hagoort, P. (2014). The influence of verb-bound syntactic preferences on the processing of syntactic structures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1448-1460. doi:10.1037/a0036796.

    Abstract

    Speakers sometimes repeat syntactic structures across sentences, a phenomenon called syntactic priming. We investigated the influence of verb-bound syntactic preferences on syntactic priming effects in response choices and response latencies for German ditransitive sentences. In the response choices we found inverse preference effects: There were stronger syntactic priming effects for primes in the less preferred structure, given the syntactic preference of the prime verb. In the response latencies we found positive preference effects: There were stronger syntactic priming effects for primes in the more preferred structure, given the syntactic preference of the prime verb. These findings provide further support for the idea that syntactic processing is lexically guided.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (1997). [Review of the book The design of language: An introduction to descriptive linguistics by Terry Crowley, John Lynch, Jeff Siegel, and Julie Piau]. Linguistics, 35, 781-785.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
