Publications

  • Mitterer, H., Chen, Y., & Zhou, X. (2011). Phonological abstraction in processing lexical-tone variation: Evidence from a learning paradigm. Cognitive Science, 35, 184-197. doi:10.1111/j.1551-6709.2010.01140.x.

    Abstract

    There is a growing consensus that the mental lexicon contains both abstract and word-specific acoustic information. To investigate their relative importance for word recognition, we tested to what extent perceptual learning is word specific or generalizable to other words. In an exposure phase, participants were divided into two groups; each group was semantically biased to interpret an ambiguous Mandarin tone contour as either tone1 or tone2. In a subsequent test phase, the perception of ambiguous contours was dependent on the exposure phase: Participants who heard ambiguous contours as tone1 during exposure were more likely to perceive ambiguous contours as tone1 than participants who heard ambiguous contours as tone2 during exposure. This learning effect was only slightly larger for previously encountered than for not previously encountered words. The results speak for an architecture with prelexical analysis of phonological categories to achieve both lexical access and episodic storage of exemplars.
  • Mitterer, H. (2011). Recognizing reduced forms: Different processing mechanisms for similar reductions. Journal of Phonetics, 39, 298-303. doi:10.1016/j.wocn.2010.11.009.

    Abstract

    Recognizing phonetically reduced forms is a huge challenge for spoken-word recognition. Phonetic reductions not only occur often, but also come in a variety of forms. The paper investigates how two similar forms of reductions – /t/-reduction and nasal place assimilation in Dutch – can eventually be recognized, focusing on the role of following phonological context. Previous research indicated that listeners take the following phonological context into account when compensating for /t/-reduction and nasal place assimilation. The current paper shows that these context effects arise in early perceptual processes for the perception of assimilated forms, but at a later stage of processing for the perception of /t/-reduced forms. This shows first that the recognition of apparently similarly reduced words may rely on different processing mechanisms and, second, that searching for dissociations over tasks is a promising research strategy to investigate how reduced forms are recognized.
  • Mitterer, H. (2011). The mental lexicon is fully specified: Evidence from eye-tracking. Journal of Experimental Psychology: Human Perception and Performance, 37(2), 496-513. doi:10.1037/a0020989.

    Abstract

    Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input ("pin") activates lexical entries with underspecified coronal stops ('tin'), but lexical entries with specified labial stops ('pin') are not activated by mismatching input ("tin"). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs ("tin"- "pin") and in Experiments 2 and 3 with words with an onset overlap ("peacock" - "teacake"). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as predicted by an optimal perception account.
  • Monaco, A., Fisher, S. E., & The SLI Consortium (SLIC) (2007). Multivariate linkage analysis of specific language impairment (SLI). Annals of Human Genetics, 71(5), 660-673. doi:10.1111/j.1469-1809.2007.00361.x.

    Abstract

    Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 on non-word repetition was equally as strong on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.

    Additional information

    Members_SLIC.doc
  • Mulder, K., & Hulstijn, J. H. (2011). Linguistic skills of adult native speakers, as a function of age and level of education. Applied Linguistics, 32, 475-494. doi:10.1093/applin/amr016.

    Abstract

    This study assessed, in a sample of 98 adult native speakers of Dutch, how their lexical skills and their speaking proficiency varied as a function of their age and level of education and profession (EP). Participants, categorized in terms of their age (18–35, 36–50, and 51–76 years old) and the level of their EP (low versus high), were tested on their lexical knowledge, lexical fluency, and lexical memory, and they performed four speaking tasks, differing in genre and formality. Speaking performance was rated in terms of communicative adequacy and in terms of number of words, number of T-units, words per T-unit, content words per T-unit, hesitations per T-unit, and grammatical errors per T-unit. Increasing age affected lexical knowledge positively but lexical fluency and memory negatively. High EP positively affected lexical knowledge and memory but EP did not affect lexical fluency. Communicative adequacy of the responses in the speaking tasks was positively affected by high EP but was not affected by age. It is concluded that, given the large variability in native speakers’ language knowledge and skills, studies investigating the question of whether second-language learners can reach native levels of proficiency should take native-speaker variability into account.

    Additional information

    Mulder_2011_Supplementary Data.doc
  • Munafò, M. R., Freathy, R. M., Ring, S. M., St Pourcain, B., & Smith, G. D. (2011). Association of COMT Val108/158Met genotype and cigarette smoking in pregnant women. Nicotine & Tobacco Research, 13(2), 55-63. doi:10.1093/ntr/ntq209.

    Abstract

    INTRODUCTION: Smoking behaviors, including heaviness of smoking and smoking cessation, are known to be under a degree of genetic influence. The enzyme catechol O-methyltransferase (COMT) is of relevance in studies of smoking behavior and smoking cessation due to its presence in dopaminergic brain regions. While the COMT gene is therefore one of the more promising candidate genes for smoking behavior, some inconsistencies have begun to emerge. METHODS: We explored whether the rs4680 A (Met) allele of the COMT gene predicts increased heaviness of smoking and reduced likelihood of smoking cessation in a large population-based cohort of pregnant women. We further conducted a meta-analysis of published data from community samples investigating the association of this polymorphism with heaviness of smoking and smoking status. RESULTS: In our primary sample, the A (Met) allele was associated with increased heaviness of smoking before pregnancy but not with the odds of continuing to smoke in pregnancy either in the first trimester or in the third trimester. Meta-analysis also indicated modest evidence of association of the A (Met) allele with increased heaviness of smoking but not with persistent smoking. CONCLUSIONS: Our data suggest a weak association between COMT genotype and heaviness of smoking, which is supported by our meta-analysis. However, it should be noted that the strength of evidence for this association was modest. Neither our primary data nor our meta-analysis support an association between COMT genotype and smoking cessation. Therefore, COMT remains a plausible candidate gene for smoking behavior phenotypes, in particular, heaviness of smoking.
  • Murty, L., Otake, T., & Cutler, A. (2007). Perceptual tests of rhythmic similarity: I. Mora Rhythm. Language and Speech, 50(1), 77-99. doi:10.1177/00238309070500010401.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. The rhythmic similarity hypothesis holds that where two languages have similar rhythm, listeners of each language should segment their own and the other language similarly. Such similarity in listening was previously observed only for related languages (English-Dutch; French-Spanish). We now report three experiments in which speakers of Telugu, a Dravidian language unrelated to Japanese but similar to it in crucial aspects of rhythmic structure, heard speech in Japanese and in their own language, and Japanese listeners heard Telugu. For the Telugu listeners, detection of target sequences in Japanese speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. The same results appeared when Japanese listeners heard Telugu speech containing only codas permissible in Japanese. Telugu listeners' results with Telugu speech were mixed, but the overall pattern revealed correspondences between the response patterns of the two listener groups, as predicted by the rhythmic similarity hypothesis. Telugu and Japanese listeners appear to command similar procedures for speech segmentation, further bolstering the proposal that aspects of language phonological structure affect listeners' speech segmentation.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such “multiple-participant events” include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B. (2007). Cutting, breaking, and tearing verbs in Hindi and Tamil. Cognitive Linguistics, 18(2), 195-205. doi:10.1515/COG.2007.008.

    Abstract

    Tamil and Hindi verbs of cutting, breaking, and tearing are shown to have a high degree of overlap in their extensions. However, there are also differences in the lexicalization patterns of these verbs in the two languages with regard to their category boundaries, and the number of verb types that are available to make finer-grained distinctions. Moreover, differences in the extensional ranges of corresponding verbs in the two languages can be motivated in terms of the properties of the instrument and the theme object.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (2007). "Two's company, more is a crowd": The linguistic encoding of multiple-participant events. Linguistics, 45(3), 383-392. doi:10.1515/LING.2007.013.

    Abstract

    This introduction to a special issue of the journal Linguistics sketches the challenges that multiple-participant events pose for linguistic and psycholinguistic theories, and summarizes the articles in the volume.
  • Narasimhan, B., & Gullberg, M. (2011). The role of input frequency and semantic transparency in the acquisition of verb meaning: Evidence from placement verbs in Tamil and Dutch. Journal of Child Language, 38, 504-532. doi:10.1017/S0305000910000164.

    Abstract

    We investigate how Tamil- and Dutch-speaking adults and 4- to 5-year-old children use caused posture verbs (‘lay/stand a bottle on a table’) to label placement events in which objects are oriented vertically or horizontally. Tamil caused posture verbs consist of morphemes that individually label the causal and result subevents (nikka veyyii ‘make stand’; paDka veyyii ‘make lie’), occurring in situational and discourse contexts where object orientation is at issue. Dutch caused posture verbs are less semantically transparent: they are monomorphemic (zetten ‘set/stand’; leggen ‘lay’), often occurring in contexts where factors other than object orientation determine use. Caused posture verbs occur rarely in corpora of Tamil input, whereas in Dutch input, they are used frequently. Elicited production data reveal that Tamil four-year-olds use infrequent placement verbs appropriately whereas Dutch children use high-frequency placement verbs inappropriately even at age five. Semantic transparency exerts a stronger influence than input frequency in constraining children’s verb meaning acquisition.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored the possibility to selectively track referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, but one of which has died or left the scene), with referentially ambiguous and nonambiguous control words. Although temporally referentially ambiguous nouns elicited a frontal negative shift compared to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Noble, C. H., Rowland, C. F., & Pine, J. M. (2011). Comprehension of argument structure and semantic roles: Evidence from English-learning children and the forced-choice pointing paradigm. Cognitive Science, 35(5), 963-982. doi:10.1111/j.1551-6709.2011.01175.x.

    Abstract

    Research using the intermodal preferential looking paradigm (IPLP) has consistently shown that English-learning children aged 2 can associate transitive argument structure with causal events. However, studies using the same methodology investigating 2-year-old children’s knowledge of the conjoined agent intransitive and semantic role assignment have reported inconsistent findings. The aim of the present study was to establish at what age English-learning children have verb-general knowledge of both transitive and intransitive argument structure using a new method: the forced-choice pointing paradigm. The results suggest that young 2-year-olds can associate transitive structures with causal (or externally caused) events and can use transitive structure to assign agent and patient roles correctly. However, the children were unable to associate the conjoined agent intransitive with noncausal events until aged 3;4. The results confirm the pattern from previous IPLP studies and indicate that children may develop the ability to comprehend different aspects of argument structure at different ages. The implications for theories of language acquisition and the nature of the language acquisition mechanism are discussed.
  • Norris, D., & Cutler, A. (1985). Juncture detection. Linguistics, 23, 689-705.
  • Nüse, R. (2007). Der Gebrauch und die Bedeutungen von auf, an und unter. Zeitschrift für Germanistische Linguistik, 35, 27-51.

    Abstract

    Present approaches to the semantics of the German prepositions auf, an and unter draw on two propositions: First, that spatial prepositions in general specify a region in the surrounding of the relatum object. Second, that in the case of auf, an and unter, these regions are to be defined with concepts like the vertical and/or the topological surface (the whole surrounding exterior of an object). The present paper argues that the first proposition is right and that the second is wrong. That is, while it is true that prepositions specify regions, the regions specified by auf, an and unter should rather be defined in terms of everyday concepts like SURFACE, SIDE and UNDERSIDE. This idea is suggested by the fact that auf, an and unter refer to different regions in different kinds of relatum objects, and that these regions are the same as the regions called surfaces, sides and undersides. Furthermore, reading and usage preferences of auf, an and unter can be explained by a corresponding salience of the surfaces, sides and undersides of the relatum objects in question. All in all, therefore, a close look at the use of auf, an and unter with different classes of relatum objects reveals problems for a semantic approach that draws on concepts like the vertical, while it suggests meanings of these prepositions that refer to the surface, side and underside of an object.
  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Omar, R., Henley, S. M., Bartlett, J. W., Hailstone, J. C., Gordon, E., Sauter, D., Frost, C., Scott, S. K., & Warren, J. D. (2011). The structural neuroanatomy of music emotion recognition: Evidence from frontotemporal lobar degeneration. Neuroimage, 56, 1814-1821. doi:10.1016/j.neuroimage.2011.03.002.

    Abstract

    Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions.
  • Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J.-M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011: 156869. doi:10.1155/2011/156869.

    Abstract

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.
  • O’Roak, B. J., Deriziotis, P., Lee, C., Vives, L., Schwartz, J. J., Girirajan, S., Karakoc, E., MacKenzie, A. P., Ng, S. B., Baker, C., Rieder, M. J., Nickerson, D. A., Bernier, R., Fisher, S. E., Shendure, J., & Eichler, E. E. (2011). Exome sequencing in sporadic autism spectrum disorders identifies severe de novo mutations. Nature Genetics, 43, 585-589. doi:10.1038/ng.835.

    Abstract

    Evidence for the etiology of autism spectrum disorders (ASDs) has consistently pointed to a strong genetic component complicated by substantial locus heterogeneity. We sequenced the exomes of 20 individuals with sporadic ASD (cases) and their parents, reasoning that these families would be enriched for de novo mutations of major effect. We identified 21 de novo mutations, 11 of which were protein altering. Protein-altering mutations were significantly enriched for changes at highly conserved residues. We identified potentially causative de novo events in 4 out of 20 probands, particularly among more severely affected individuals, in FOXP1, GRIN2B, SCN1A and LAMC3. In the FOXP1 mutation carrier, we also observed a rare inherited CNTNAP2 missense variant, and we provide functional support for a multi-hit model for disease risk. Our results show that trio-based exome sequencing is a powerful approach for identifying new candidate genes for ASDs and suggest that de novo mutations may contribute substantially to the genetic etiology of ASDs.

    Additional information

    ORoak_Supplementary text.pdf

  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background

    Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun.

    Results

    When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse.

    Conclusions

    When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Ottoni, C., Ricaut, F.-X., Vanderheyden, N., Brucato, N., Waelkens, M., & Decorte, R. (2011). Mitochondrial analysis of a Byzantine population reveals the differential impact of multiple historical events in South Anatolia. European Journal of Human Genetics, 19, 571-576. doi:10.1038/ejhg.2010.230.

    Abstract

    The archaeological site of Sagalassos is located in Southwest Turkey, in the western part of the Taurus mountain range. Human occupation of its territory is attested from the late 12th millennium BP up to the 13th century AD. By analysing the mtDNA variation in 85 skeletons from Sagalassos dated to the 11th–13th century AD, this study attempts to reconstruct the genetic signature potentially left in this region of Anatolia by the many civilizations, which succeeded one another over the centuries until the mid-Byzantine period (13th century AD). Authentic ancient DNA data were determined from the control region and some SNPs in the coding region of the mtDNA in 53 individuals. Comparative analyses with up to 157 modern populations allowed us to reconstruct the origin of the mid-Byzantine people still dwelling in dispersed hamlets in Sagalassos, and to detect the maternal contribution of their potential ancestors. By integrating the genetic data with historical and archaeological information, we were able to attest in Sagalassos a significant maternal genetic signature of Balkan/Greek populations, as well as ancient Persians and populations from the Italian peninsula. Some contribution from the Levant has also been detected, whereas no contribution from Central Asian populations could be ascertained.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Paternoster, L., Evans, D. M., Aagaard Nohr, E., Holst, C., Gaborieau, V., Brennan, P., Prior Gjesing, A., Grarup, N., Witte, D. R., Jørgensen, T., Linneberg, A., Lauritzen, T., Sandbaek, A., Hansen, T., Pedersen, O., Elliott, K. S., Kemp, J. P., St Pourcain, B., McMahon, G., Zelenika, D., Hager, J., Lathrop, M., Timpson, N. J., Davey Smith, G., & Sørensen, T. I. A. (2011). Genome-wide population-based association study of extremely overweight young adults – The GOYA study. PLoS ONE, 6(9): e24303. doi:10.1371/journal.pone.0024303.

    Abstract

    Background

    Thirty-two common variants associated with body mass index (BMI) have been identified in genome-wide association studies, explaining ∼1.45% of BMI variation in general population cohorts. We performed a genome-wide association study in a sample of young adults enriched for extremely overweight individuals. We aimed to identify new loci associated with BMI and to ascertain whether using an extreme sampling design would identify the variants known to be associated with BMI in general populations.

    Methodology/Principal Findings

    From two large Danish cohorts we selected all extremely overweight young men and women (n = 2,633), and equal numbers of population-based controls (n = 2,740, drawn randomly from the same populations as the extremes, representing ∼212,000 individuals). We followed up novel (at the time of the study) association signals (p < 0.001) from the discovery cohort in a genome-wide study of 5,846 Europeans, before attempting to replicate the most strongly associated 28 SNPs in an independent sample of Danish individuals (n = 20,917) and a population-based cohort of 15-year-old British adolescents (n = 2,418). Our discovery analysis identified SNPs at three loci known to be associated with BMI with genome-wide confidence (p < 5×10⁻⁸; FTO, MC4R and FAIM2). We also found strong evidence of association at the known TMEM18, GNPDA2, SEC16B, TFAP2B, SH2B1 and KCTD15 loci (p < 0.001), and nominal association (p < 0.05) at a further 8 loci known to be associated with BMI. However, meta-analyses of our discovery and replication cohorts identified no novel associations.

    Significance

    Our results indicate that the detectable genetic variation associated with extreme overweight is very similar to that previously found for general BMI. This suggests that population-based study designs with enriched sampling of individuals with the extreme phenotype may be an efficient method for identifying common variants that influence quantitative traits and a valid alternative to genotyping all individuals in large population-based studies, which may require tens of thousands of subjects to achieve similar power.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Phok, K., Moisan, A., Rinaldi, D., Brucato, N., Carpousis, A. J., Gaspin, C., & Clouet-d'Orval, B. (2011). Identification of CRISPR and riboswitch related RNAs among novel non-coding RNAs of the euryarchaeon Pyrococcus abyssi. BMC Genomics, 12, 312. doi:10.1186/1471-2164-12-312.

    Abstract

    Background

    Noncoding RNA (ncRNA) has been recognized as an important regulator of gene expression networks in Bacteria and Eucaryota. Little is known about ncRNA in thermococcal archaea except for the eukaryotic-like C/D and H/ACA modification guide RNAs.
    Results

    Using a combination of in silico and experimental approaches, we identified and characterized novel P. abyssi ncRNAs transcribed from 12 intergenic regions, ten of which are conserved throughout the Thermococcales. Several of them accumulate in the late-exponential phase of growth. Analysis of the genomic context and sequence conservation amongst related thermococcal species revealed two novel P. abyssi ncRNA families. The CRISPR family is comprised of crRNAs expressed from two of the four P. abyssi CRISPR cassettes. The 5'UTR derived family includes four conserved ncRNAs, two of which have features similar to known bacterial riboswitches. Several of the novel ncRNAs have sequence similarities to orphan OrfB transposase elements. Based on RNA secondary structure predictions and experimental results, we show that three of the twelve ncRNAs include Kink-turn RNA motifs, arguing for a biological role of these ncRNAs in the cell. Furthermore, our results show that several of the ncRNAs are subjected to processing events by enzymes that remain to be identified and characterized.
    Conclusions

    This work proposes a revised annotation of CRISPR loci in P. abyssi and expands our knowledge of ncRNAs in the Thermococcales, thus providing a starting point for studies needed to elucidate their biological function.
  • Piai, V., Roelofs, A., & Schriefers, H. (2011). Semantic interference in immediate and delayed naming and reading: Attention and task decisions. Journal of Memory and Language, 64, 404-423. doi:10.1016/j.jml.2011.01.004.

    Abstract

    Disagreement exists about whether lexical selection in word production is a competitive process. Competition predicts semantic interference from distractor words in immediate but not in delayed picture naming. In contrast, Janssen, Schirm, Mahon, and Caramazza (2008) obtained semantic interference in delayed picture naming when participants had to decide between picture naming and oral reading depending on the distractor word’s colour. We report three experiments that examined the role of such task decisions. In a single-task situation requiring picture naming only (Experiment 1), we obtained semantic interference in immediate but not in delayed naming. In a task-decision situation (Experiments 2 and 3), no semantic effects were obtained in immediate and delayed picture naming and word reading using either the materials of Experiment 1 or the materials of Janssen et al. (2008). We present an attentional account in which task decisions may hide or reveal semantic interference from lexical competition depending on the amount of parallelism between task-decision and picture–word processing.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2011). Reasoning with exceptions: An event-related brain potentials study. Journal of Cognitive Neuroscience, 23, 471-480. doi:10.1162/jocn.2009.21360.

    Abstract

    Defeasible inferences are inferences that can be revised in the light of new information. Although defeasible inferences are pervasive in everyday communication, little is known about how and when they are processed by the brain. This study examined the electrophysiological signature of defeasible reasoning using a modified version of the suppression task. Participants were presented with conditional inferences (of the type “if p, then q; p, therefore q”) that were preceded by a congruent or a disabling context. The disabling context contained a possible exception or precondition that prevented people from drawing the conclusion. Acceptability of the conclusion was indeed lower in the disabling condition compared to the congruent condition. Further, we found a large sustained negativity at the conclusion of the disabling condition relative to the congruent condition, which started around 250 msec and was persistent throughout the entire epoch. Possible accounts for the observed effect are discussed.
  • Poletiek, F. H. (2011). You can't have your hypothesis and test it: The importance of utilities in theories of reasoning. Behavioral and Brain Sciences, 34(2), 87-88. doi:10.1017/S0140525X10002980.
  • St Pourcain, B., Mandy, W. P., Heron, J., Golding, J., Davey Smith, G., & Skuse, D. H. (2011). Links between co-occurring social-communication and hyperactive-inattentive trait trajectories. Journal of the American Academy of Child & Adolescent Psychiatry, 50(9), 892-902.e5. doi:10.1016/j.jaac.2011.05.015.

    Abstract

    OBJECTIVE: There is overlap between an autistic and hyperactive-inattentive symptomatology when studied cross-sectionally. This study is the first to examine the longitudinal pattern of association between social-communication deficits and hyperactive-inattentive symptoms in the general population, from childhood through adolescence. We explored the interrelationship between trajectories of co-occurring symptoms, and sought evidence for shared prenatal/perinatal risk factors. METHOD: Study participants were 5,383 singletons of white ethnicity from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multiple measurements of hyperactive-inattentive traits (Strengths and Difficulties Questionnaire) and autistic social-communication impairment (Social Communication Disorder Checklist) were obtained between 4 and 17 years. Both traits and their trajectories were modeled in parallel using latent class growth analysis (LCGA). Trajectory membership was subsequently investigated with respect to prenatal/perinatal risk factors. RESULTS: LCGA analysis revealed two distinct social-communication trajectories (persistently impaired versus low-risk) and four hyperactive-inattentive trait trajectories (persistently impaired, intermediate, childhood-limited and low-risk). Autistic symptoms were more stable than those of attention-deficit/hyperactivity disorder (ADHD) behaviors, which showed greater variability. Trajectories for both traits were strongly but not reciprocally interlinked, such that the majority of children with a persistent hyperactive-inattentive symptomatology also showed persistent social-communication deficits but not vice versa. Shared predictors, especially for trajectories of persistent impairment, were maternal smoking during the first trimester, which included familial effects, and a teenage pregnancy. CONCLUSIONS: Our longitudinal study reveals that a complex relationship exists between social-communication and hyperactive-inattentive traits. Patterns of association change over time, with corresponding implications for removing exclusivity criteria for ASD and ADHD, as proposed for DSM-5.
  • Pozzoli, O., Vella, P., Iaffaldano, G., Parente, V., Devanna, P., Lacovich, M., Lamia, C. L., Fascio, U., Longoni, D., Cotelli, F., Capogrossi, M. C., & Pesce, M. (2011). Endothelial fate and angiogenic properties of human CD34+ progenitor cells in zebrafish. Arteriosclerosis, Thrombosis, and Vascular Biology, 31, 1589-1597. doi:10.1161/ATVBAHA.111.226969.

    Abstract

    Objective—The vascular competence of human-derived hematopoietic progenitors for postnatal vascularization is still poorly characterized. It is unclear whether, in the absence of ischemia, hematopoietic progenitors participate in neovascularization and whether they play a role in new blood vessel formation by incorporating into developing vessels or by a paracrine action. Methods and Results—In the present study, human cord blood–derived CD34+ (hCD34+) cells were transplanted into pre- and postgastrulation zebrafish embryos and in an adult vascular regeneration model induced by caudal fin amputation. When injected before gastrulation, hCD34+ cells cosegregated with the presumptive zebrafish hemangioblasts, characterized by Scl and Gata2 expression, in the anterior and posterior lateral mesoderm and were involved in early development of the embryonic vasculature. These morphogenetic events occurred without apparent lineage reprogramming, as shown by CD45 expression. When transplanted postgastrulation, hCD34+ cells were recruited into developing vessels, where they exhibited a potent paracrine proangiogenic action. Finally, hCD34+ cells rescued vascular defects induced by Vegf-c in vivo targeting and enhanced vascular repair in the zebrafish fin amputation model. Conclusion—These results indicate an unexpected developmental ability of human-derived hematopoietic progenitors and support the hypothesis of an evolutionary conservation of molecular pathways involved in endothelial progenitor differentiation in vivo.
  • Praamstra, P., Hagoort, P., Maassen, B., & Crul, T. (1991). Word deafness and auditory cortical function: A case history and hypothesis. Brain, 114, 1197-1225. doi:10.1093/brain/114.3.1197.

    Abstract

    A patient who already had Wernicke's aphasia due to a left temporal lobe lesion suffered a severe deterioration specifically of auditory language comprehension, subsequent to right temporal lobe infarction. A detailed comparison of his new condition with his language status before the second stroke revealed that the newly acquired deficit was limited to tasks related to auditory input. Further investigations demonstrated a speech perceptual disorder, which we analysed as due to deficits both at the level of general auditory processes and at the level of phonetic analysis. We discuss some arguments related to hemisphere specialization of phonetic processing and to the disconnection explanation of word deafness that support the hypothesis of word deafness being generally caused by mixed deficits.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operations that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Rahmany, R., Marefat, H., & Kidd, E. (2011). Persian speaking children's acquisition of relative clauses. European Journal of Developmental Psychology, 8(3), 367-388. doi:10.1080/17405629.2010.509056.

    Abstract

    The current study examined the acquisition of relative clauses (RCs) in Persian-speaking children. Persian is a relatively unique data point in crosslinguistic research in acquisition because it is a head-final language with post-nominal RCs. Children (N = 51) aged 2 to 7 years completed a picture-selection task that tested their comprehension of subject-, object-, and genitive-RCs. The results showed that the children experienced greater difficulty processing object and genitive RCs when compared to subject RCs, suggesting that the children have particular difficulty processing sentences with non-canonical word order. The results are discussed with reference to a number of theoretical accounts proposed to account for sentence difficulty.
  • Ramenzoni, V. C., Davis, T. J., Riley, M. A., Shockley, K., & Baker, A. A. (2011). Joint action in a cooperative precision task: Nested processes of intrapersonal and interpersonal coordination. Experimental Brain Research, 211, 447-457. doi:10.1007/s00221-011-2653-8.

    Abstract

    The authors determined the effects of changes in task demands on interpersonal and intrapersonal coordination. Participants performed a joint task in which one participant held a stick to which a circle was attached at the top (holding role), while the other held a pointer through the circle without touching its borders (pointing role). Experiment 1 investigated whether interpersonal and intrapersonal coordination varied depending on task difficulty. Results showed that interpersonal and intrapersonal coordination increased in degree and stability with increments in task difficulty. Experiment 2 explored the effects of individual constraints by increasing the balance demands of the task (one or both members of the pair stood in a less stable tandem stance). Results showed that interpersonal coordination increased in degree and stability as joint task demands increased and that coupling strength varied depending on joint and individual task constraints. In all, results suggest that interpersonal and intrapersonal coordination are affected by the nature of the task performed and the constraints it places on joint and individual performance.
  • Ravenscroft, G., Sollis, E., Charles, A. K., North, K. N., Baynam, G., & Laing, N. G. (2011). Fetal akinesia: review of the genetics of the neuromuscular causes. Journal of Medical Genetics (London), 48(12), 793-801.

    Abstract

    Fetal akinesia refers to a broad spectrum of disorders in which the unifying feature is a reduction or lack of fetal movement. Fetal akinesias may be caused by defects at any point along the motor system pathway including the central and peripheral nervous system, the neuromuscular junction and the muscle, as well as by restrictive dermopathy or external restriction of the fetus in utero. The fetal akinesias are clinically and genetically heterogeneous, with causative mutations identified to date in a large number of genes encoding disparate parts of the motor system. However, for most patients, the molecular cause remains unidentified. One reason for this is because the tools are only now becoming available to efficiently and affordably identify mutations in a large panel of disease genes. Next-generation sequencing offers the promise, if sufficient cohorts of patients can be assembled, to identify the majority of the remaining genes on a research basis and facilitate efficient clinical molecular diagnosis. The benefits of identifying the causative mutation(s) for each individual patient or family include accurate genetic counselling and the options of prenatal diagnosis or preimplantation genetic diagnosis.

    In this review, we summarise known single-gene disorders affecting the spinal cord, peripheral nerves, neuromuscular junction or skeletal muscles that result in fetal akinesia. This audit of these known molecular and pathophysiological mechanisms involved in fetal akinesia provides a basis for improved molecular diagnosis and completing disease gene discovery.
  • Reif, A., Nguyen, T. T., Weißflog, L., Jacob, C. P., Romanos, M., Renner, T. J., Buttenschon, H. N., Kittel-Schneider, S., Gessner, A., Weber, H., Neuner, M., Gross-Lesch, S., Zamzow, K., Kreiker, S., Walitza, S., Meyer, J., Freitag, C. M., Bosch, R., Casas, M., Gómez, N., Ribasès, M., Bayès, M., Buitelaar, J. K., Kiemeney, L. A. L. M., Kooij, J. J. S., Kan, C. C., Hoogman, M., Johansson, S., Jacobsen, K. K., Knappskog, P. M., Fasmer, O. B., Asherson, P., Warnke, A., Grabe, H.-J., Mahler, J., Teumer, A., Völzke, H., Mors, O. N., Schäfer, H., Ramos-Quiroga, J. A., Cormand, B., Haavik, J., Franke, B., & Lesch, K.-P. (2011). DIRAS2 is associated with Adult ADHD, related traits, and co-morbid disorders. Neuropsychopharmacology, 36, 2318-2327. doi:10.1038/npp.2011.120.

    Abstract

    Several linkage analyses implicated the chromosome 9q22 region in attention deficit/hyperactivity disorder (ADHD), a neurodevelopmental disease with remarkable persistence into adulthood. This locus contains the brain-expressed GTP-binding RAS-like 2 gene (DIRAS2) thought to regulate neurogenesis. As DIRAS2 is a positional and functional ADHD candidate gene, we conducted an association study in 600 patients suffering from adult ADHD (aADHD) and 420 controls. Replication samples consisted of 1035 aADHD patients and 1381 controls, as well as 166 families with a child affected from childhood ADHD. Given the high degree of co-morbidity with ADHD, we also investigated patients suffering from bipolar disorder (BD) (n=336) or personality disorders (PDs) (n=622). Twelve single-nucleotide polymorphisms (SNPs) covering the structural gene and the transcriptional control region of DIRAS2 were analyzed. Four SNPs and two haplotype blocks showed evidence of association with ADHD, with nominal p-values ranging from p=0.006 to p=0.05. In the adult replication samples, we obtained a consistent effect of rs1412005 and of a risk haplotype containing the promoter region (p=0.026). Meta-analysis resulted in a significant common OR of 1.12 (p=0.04) for rs1412005 and confirmed association with the promoter risk haplotype (OR=1.45, p=0.0003). Subsequent analysis in nuclear families with childhood ADHD again showed an association of the promoter haplotype block (p=0.02). rs1412005 also increased risk toward BD (p=0.026) and cluster B PD (p=0.031). Additional SNPs showed association with personality scores (p=0.008–0.048). Converging lines of evidence implicate genetic variance in the promoter region of DIRAS2 in the etiology of ADHD and co-morbid impulsive disorders.
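    The pooled odds ratio reported above can, in principle, be reproduced with a standard fixed-effect (inverse-variance) meta-analysis of log odds ratios across samples. The sketch below illustrates only that generic calculation; the per-sample odds ratios and standard errors are invented for illustration and are not the study's data, and the authors' actual meta-analytic method may differ.

      import math

      # Fixed-effect (inverse-variance) pooling of log odds ratios.
      # The per-sample values below are invented for illustration.
      samples = [
          # (odds ratio, standard error of the log odds ratio)
          (1.15, 0.07),
          (1.10, 0.06),
          (1.12, 0.10),
      ]

      weights = [1 / se ** 2 for _, se in samples]
      pooled_log_or = sum(w * math.log(or_) for (or_, _), w in zip(samples, weights)) / sum(weights)
      pooled_se = math.sqrt(1 / sum(weights))
      ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
      ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
      print(f"pooled OR = {math.exp(pooled_log_or):.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")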
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate affects the perception of duration as a suprasegmental lexical-stress cue. Language and Speech, 54(2), 147-165. doi:10.1177/0023830910397489.

    Abstract

    Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress placement. Word pairs contrasted primary stress on either the first versus the second syllable or the first versus the third syllable. Duration of the initial or the second syllable of the fragments and rate of the preceding context (fast vs. slow) were manipulated. Listeners used speaking rate to decide about the degree of stress on initial syllables whether the syllables' absolute durations were informative about stress (Experiment 1a) or not (Experiment 1b). Rate effects on the second syllable were visible only when the initial syllable was ambiguous in duration with respect to the preceding rate context (Experiment 2). Absolute second syllable durations contributed little to stress perception (Experiment 3). These results suggest that speaking rate is used to disambiguate words and that rate-modulated stress cues are more important on initial than non-initial syllables. Speaking rate affects perception of suprasegmental information.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate from proximal and distal contexts is used during word segmentation. Journal of Experimental Psychology: Human Perception and Performance, 37, 978-996. doi:10.1037/a0021923.

    Abstract

    A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' “once (s)pear,” [t] in 'nooit (t)rap,' “never staircase/quick”) were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent), we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word-length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Rekers, Y., Haun, D. B. M., & Tomasello, M. (2011). Children, but not chimpanzees, prefer to collaborate. Current Biology, 21, 1756-1758. doi:10.1016/j.cub.2011.08.066.

    Abstract

    Human societies are built on collaborative activities. Already from early childhood, human children are skillful and proficient collaborators. They recognize when they need help in solving a problem and actively recruit collaborators [1, 2]. The societies of other primates are also to some degree cooperative. Chimpanzees, for example, engage in a variety of cooperative activities such as border patrols, group hunting, and intra- and intergroup coalitionary behavior [3–5]. Recent studies have shown that chimpanzees possess many of the cognitive prerequisites necessary for human-like collaboration. Chimpanzees have been shown to recognize when they need help in solving a problem and to actively recruit good over bad collaborators [6, 7]. However, cognitive abilities might not be all that differs between chimpanzees and humans when it comes to cooperation. Another factor might be the motivation to engage in a cooperative activity. Here, we hypothesized that a key difference between human and chimpanzee collaboration—and so potentially a key mechanism in the evolution of human cooperation—is a simple preference for collaborating (versus acting alone) to obtain food. Our results supported this hypothesis, finding that whereas children strongly prefer to work together with another to obtain food, chimpanzees show no such preference.
  • Reynolds, E., Stagnitti, K., & Kidd, E. (2011). Play, language and social skills of children attending a play-based curriculum school and a traditionally structured classroom curriculum school in low socioeconomic areas. Australasian Journal of Early Childhood, 36(4), 120-130.

    Abstract

    Aim and method: A comparison study of four- to six-year-old children attending a school with a play-based curriculum and a school with a traditionally structured classroom from low socioeconomic areas was conducted in Victoria, Australia. Children’s play, language and social skills were measured in February and again in August. At baseline assessment there was a combined sample of 31 children (mean age 5.5 years, SD 0.35 years; 13 females and 18 males). At follow-up there was a combined sample of 26 children (mean age 5.9 years, SD 0.35 years; 10 females, 16 males). Results: There was no significant difference between the school groups in play, language, social skills, age and sex at baseline assessment. Compared to norms on a standardised assessment, all the children were beginning school with delayed play ability. At follow-up assessment, children at the play-based curriculum school had made significant gains in all areas assessed (p values ranged from 0.000 to 0.05). Children at the school with the traditional structured classroom had made significant positive gains in use of symbols in play (p < 0.05) and semantic language (p < 0.05). At follow-up, there were significant differences between schools in elaborate play (p < 0.000), semantic language (p < 0.000), narrative language (p < 0.01) and social connection (p < 0.01), with children in the play-based curriculum school having significantly higher scores in play, semantic language and narrative language, and lower scores in social disconnection. Implications: Children from low SES areas begin school at risk of failure as skills in play, language and social skills are delayed. The school experience increases children’s skills, with children in the play-based curriculum showing significant improvements in all areas assessed. It is argued that a play-based curriculum meets children’s developmental and learning needs more effectively. More research is needed to replicate these results.
  • Rieffe, C., Oosterveld, P., Meerum Terwogt, M., Mootz, S., Van Leeuwen, E. J. C., & Stockmann, L. (2011). Emotion regulation and internalizing symptoms in children with Autism Spectrum Disorders. Autism, 15(6), 655-670. doi:10.1177/1362361310366571.

    Abstract

    The aim of this study was to examine the unique contribution of two aspects of emotion regulation (awareness and coping) to the development of internalizing problems in 11-year-old high-functioning children with an autism spectrum disorder (HFASD) and a control group, and the moderating effect of group membership on this. The results revealed overlap between the two groups, but also significant differences, suggesting a more fragmented emotion regulation pattern in children with HFASD, especially related to worry and rumination. Moreover, in children with HFASD, symptoms of depression were unrelated to positive mental coping strategies and the conviction that the emotion experience helps in dealing with the problem, suggesting that a positive approach to the problem and its subsequent emotion experience are less effective in the HFASD group.
  • Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Interpersonal synergies. Frontiers in Psychology, 2, 38. doi:10.3389/fpsyg.2011.00038.

    Abstract

    We present the perspective that interpersonal movement coordination results from establishing interpersonal synergies. Interpersonal synergies are higher-order control systems formed by coupling movement system degrees of freedom of two (or more) actors. Characteristic features of synergies identified in studies of intrapersonal coordination – dimensional compression and reciprocal compensation – are revealed in studies of interpersonal coordination that applied the uncontrolled manifold approach and principal component analysis to interpersonal movement tasks. Broader implications of the interpersonal synergy approach for movement science include an expanded notion of mechanism and an emphasis on interaction-dominant dynamics.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi: 10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roberts, L., & Felser, C. (2011). Plausibility and recovery from garden paths in L2 sentence processing. Applied Psycholinguistics, 32, 299-331. doi:10.1017/S0142716410000421.

    Abstract

    In this study, the influence of plausibility information on the real-time processing of locally ambiguous (“garden path”) sentences in a nonnative language is investigated. Using self-paced reading, we examined how advanced Greek-speaking learners of English and native speaker controls read sentences containing temporary subject–object ambiguities, with the ambiguous noun phrase being either semantically plausible or implausible as the direct object of the immediately preceding verb. Besides providing evidence for incremental interpretation in second language processing, our results indicate that the learners were more strongly influenced by plausibility information than the native speaker controls in their on-line processing of the experimental items. For the second language learners, an initially plausible direct object interpretation led to increased reanalysis difficulty in “weak” garden-path sentences where the required reanalysis did not interrupt the current thematic processing domain. No such evidence of on-line recovery was observed, in contrast, for “strong” garden-path sentences that required more substantial revisions of the representation built thus far, suggesting that comprehension breakdown was more likely here.
  • Robotham, L., Sauter, D., Bachoud-Lévi, A.-C., & Trinkler, I. (2011). The impairment of emotion recognition in Huntington’s disease extends to positive emotions. Cortex, 47(7), 880-884. doi:10.1016/j.cortex.2011.02.014.

    Abstract

    Patients with Huntington’s disease are impaired in the recognition of emotional signals. However, the nature and extent of the impairment is controversial: It has variously been argued to be disgust-specific (Sprengelmeyer et al., 1996, 1997), general for negative emotions (Snowden et al., 2008), or a consequence of item difficulty (Milders, Crawford, Lamb, & Simpson, 2003). Yet no study to date has included more than one positive stimulus category in emotion recognition tasks. We present a study of 14 Huntington’s patients and 15 control participants performing a forced-choice task with a range of negative and positive non-verbal emotional vocalizations. Participants were found to be impaired in emotion recognition across the emotion categories, including positive emotions such as amusement and sensual pleasure, and negative emotions, such as anger, disgust, and fear. These data complement previous work by demonstrating that impairments are found in the recognition of positive, as well as negative, emotions in Huntington’s disease. Our results point to a global deficit in the recognition of emotional signals in Huntington’s disease.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A., & Piai, V. (2011). Attention demands of spoken word planning: A review. Frontiers in Psychology, 2, 307. doi:10.3389/fpsyg.2011.00307.

  • Roelofs, A., Piai, V., & Garrido Rodriguez, G. (2011). Attentional inhibition in bilingual naming performance: Evidence from delta-plot analyses. Frontiers in Psychology, 2, 184. doi:10.3389/fpsyg.2011.00184.

    Abstract

    It has been argued that inhibition is a mechanism of attentional control in bilingual language performance. Evidence suggests that effects of inhibition are largest in the tail of a response time (RT) distribution in non-linguistic and monolingual performance domains. We examined this for bilingual performance by conducting delta-plot analyses of naming RTs. Dutch-English bilingual speakers named pictures using English while trying to ignore superimposed neutral Xs or Dutch distractor words that were semantically related, unrelated, or translations. The mean RTs revealed semantic, translation, and lexicality effects. The delta plots leveled off with increasing RT, more so when the mean distractor effect was smaller as compared with larger. This suggests that the influence of inhibition is largest toward the distribution tail, corresponding to what is observed in other performance domains. Moreover, the delta plots suggested that more inhibition was applied by high- than low-proficiency individuals in the unrelated than the other distractor conditions. These results support the view that inhibition is a domain-general mechanism that may be optionally engaged depending on the prevailing circumstances.
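    The delta-plot analysis referred to above proceeds by rank-ordering RTs within each condition, averaging them within quantile bins, and plotting the condition difference against the mean RT of each bin; a curve that levels off (or turns downward) toward the slow tail is the signature of inhibition discussed in the abstract. The sketch below is a minimal illustration under the assumption of two RT vectors per condition; the bin count and the simulated data are not taken from the study.

      import numpy as np

      def delta_plot(rt_slow_condition, rt_fast_condition, n_bins=5):
          """Return (mean RT per quantile bin, condition difference per bin)."""
          slow = np.sort(np.asarray(rt_slow_condition, dtype=float))
          fast = np.sort(np.asarray(rt_fast_condition, dtype=float))
          xs, ys = [], []
          for i in range(n_bins):
              lo, hi = i / n_bins, (i + 1) / n_bins
              s_bin = slow[int(lo * len(slow)):int(hi * len(slow))].mean()
              f_bin = fast[int(lo * len(fast)):int(hi * len(fast))].mean()
              xs.append((s_bin + f_bin) / 2)  # bin midpoint (mean RT)
              ys.append(s_bin - f_bin)        # distractor effect in this bin
          return np.array(xs), np.array(ys)

      # Simulated naming latencies (ms) for an unrelated vs. a related distractor condition.
      rng = np.random.default_rng(0)
      rt_fast = rng.gamma(shape=20, scale=30, size=200) + 300
      rt_slow = rt_fast + rng.normal(40, 15, size=200)
      print(delta_plot(rt_slow, rt_fast))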
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A., Piai, V., & Schriefers, H. (2011). Selective attention and distractor frequency in naming performance: Comment on Dhooge and Hartsuiker (2010). Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1032-1038. doi:10.1037/a0023328.

    Abstract

    E. Dhooge and R. J. Hartsuiker (2010) reported experiments showing that picture naming takes longer with low- than high-frequency distractor words, replicating M. Miozzo and A. Caramazza (2003). In addition, they showed that this distractor-frequency effect disappears when distractors are masked or preexposed. These findings were taken to refute models like WEAVER++ (A. Roelofs, 2003) in which words are selected by competition. However, Dhooge and Hartsuiker do not take into account that according to this model, picture-word interference taps not only into word production but also into attentional processes. Here, the authors indicate that WEAVER++ contains an attentional mechanism that accounts for the distractor-frequency effect (A. Roelofs, 2005). Moreover, the authors demonstrate that the model accounts for the influence of masking and preexposure, and does so in a simpler way than the response exclusion through self-monitoring account advanced by Dhooge and Hartsuiker.
  • Rossano, F., Rakoczy, H., & Tomasello, M. (2011). Young children’s understanding of violations of property rights. Cognition, 121, 219-227. doi:10.1016/j.cognition.2011.06.007.

    Abstract

    The present work investigated young children’s normative understanding of property rights using a novel methodology. Two- and 3-year-old children participated in situations in which an actor (1) took possession of an object for himself, and (2) attempted to throw it away. What varied was who owned the object: the actor himself, the child subject, or a third party. We found that while both 2- and 3-year-old children protested frequently when their own object was involved, only 3-year-old children protested more when a third party’s object was involved than when the actor was acting on his own object. This suggests that at the latest around 3 years of age young children begin to understand the normative dimensions of property rights.
  • Rossi, S., Jürgenson, I. B., Hanulikova, A., Telkemeyer, S., Wartenburger, I., & Obrig, H. (2011). Implicit processing of phonotactic cues: Evidence from electrophysiological and vascular responses. Journal of Cognitive Neuroscience, 23, 1752-1764. doi:10.1162/jocn.2010.21547.

    Abstract

    Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics. Phonotactics defines possible combinations of phonemes within syllables or words in a given language. The present study aimed at investigating both temporal and topographical aspects of the neuronal correlates of phonotactic processing by simultaneously applying event-related brain potentials (ERPs) and functional near-infrared spectroscopy (fNIRS). Pseudowords, either phonotactically legal or illegal with respect to the participants' native language, were acoustically presented to passively listening adult native German speakers. ERPs showed a larger N400 effect for phonotactically legal compared to illegal pseudowords, suggesting stronger lexical activation mechanisms in phonotactically legal material. fNIRS revealed a left hemispheric network including fronto-temporal regions with greater response to phonotactically legal pseudowords than to illegal pseudowords. This confirms earlier hypotheses on a left hemispheric dominance of phonotactic processing most likely due to the fact that phonotactics is related to phonological processing and represents a segmental feature of language comprehension. These segmental linguistic properties of a stimulus are predominantly processed in the left hemisphere. Thus, our study provides first insights into temporal and topographical characteristics of phonotactic processing mechanisms in a passive listening task. Differential brain responses between known and unknown phonotactic rules thus supply evidence for an implicit use of phonotactic cues to guide lexical activation mechanisms.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Noble, C. L. (2011). The role of syntactic structure in children's sentence comprehension: Evidence from the dative. Language Learning and Development, 7(1), 55-75. doi:10.1080/15475441003769411.

    Abstract

    Research has demonstrated that young children quickly acquire knowledge of how the structure of their language encodes meaning. However, this work focused on structurally simple transitives. The present studies investigate children's comprehension of the double object dative (e.g., I gave him the box) and the prepositional dative (e.g., I gave the box to him). In Study 1, 3- and 4-year-olds correctly preferred a transfer event reading of prepositional datives with novel verbs (e.g., I'm glorping the rabbit to the duck) but were unable to interpret double object datives (e.g., I'm glorping the duck the rabbit). In Studies 2 and 3, they were able to interpret both dative types when the nouns referring to the theme and recipient were canonically marked (Study 2: I'm glorping the rabbit to Duck) and, to a lesser extent, when they were distinctively but noncanonically marked (Study 3: I'm glorping rabbit to the Duck). Overall, the results suggest that English children have some verb-general knowledge of how dative syntax encodes meaning by 3 years of age, but successful comprehension may require the presence of additional surface cues.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, L. E. (2011). Polynomial modeling of child and adult intonation in German spontaneous speech. Language and Speech, 54, 199-223. doi:10.1177/0023830910397495.

    Abstract

    In a data set of 291 spontaneous utterances from German 5-year-olds, 7-year-olds and adults, nuclear pitch contours were labeled manually using the GToBI annotation system. Ten different contour types were identified. The fundamental frequency (F0) of these contours was modeled using third-order orthogonal polynomials, following an approach similar to the one Grabe, Kochanski, and Coleman (2007) used for English. Statistical analyses showed that all but one contour pair differed significantly from each other in at least one of the four coefficients. This demonstrates that polynomial modeling can provide quantitative empirical support for phonological labels in unscripted speech, and for languages other than English. Furthermore, polynomial expressions can be used to derive the alignment of tonal targets relative to the syllable structure, making polynomial modeling more accessible to the phonological research community. Finally, within-contour comparisons of the three age groups showed that for children, the magnitude of the higher coefficients is lower, suggesting that they are not yet able to modulate their pitch as fast as adults.
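    The modelling step described above amounts to summarizing each nuclear F0 contour by the four coefficients of a third-order orthogonal polynomial fitted over normalized time, so that contour types and age groups can be compared coefficient by coefficient. The snippet below is a minimal sketch of one way to do this, using a Legendre basis in NumPy; it is not the authors' implementation, and the sample contour is invented.

      import numpy as np

      def fit_contour(f0_hz, order=3):
          """Fit an orthogonal (Legendre) polynomial of the given order to an F0 contour.

          Time is normalized to [-1, 1]; the returned coefficients roughly describe
          overall level, slope, curvature, and a cubic component.
          """
          f0 = np.asarray(f0_hz, dtype=float)
          t = np.linspace(-1.0, 1.0, len(f0))
          return np.polynomial.legendre.legfit(t, f0, deg=order)

      # Invented rise-fall contour sampled at 20 points (Hz).
      contour = 180 + 40 * np.sin(np.linspace(0, np.pi, 20))
      c0, c1, c2, c3 = fit_contour(contour)
      print(f"level={c0:.1f}  slope={c1:.1f}  curvature={c2:.1f}  cubic={c3:.1f}")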
  • Ruiter, M. B., Kolk, H. H. J., Rietveld, T. C. M., Dijkstra, N., & Lotgering, E. (2011). Towards a quantitative measure of verbal effectiveness and efficiency in the Amsterdam-Nijmegen Everyday Language Test (ANELT). Aphasiology, 25, 961-975. doi:10.1080/02687038.2011.569892.

    Abstract

    Background: A well-known test for measuring verbal adequacy (i.e., verbal effectiveness) in mildly impaired aphasic speakers is the Amsterdam-Nijmegen Everyday Language Test (ANELT; Blomert, Koster, & Kean, 1995). Aphasia therapy practitioners score verbal adequacy qualitatively when they administer the ANELT to their aphasic clients in clinical practice. Aims: The current study investigated whether the construct validity of the ANELT could be further improved by substituting the qualitative score by a quantitative one, which takes the number of essential information units into account. The new quantitative measure could have the following advantages: the ability to derive a quantitative score of verbal efficiency, as well as improved sensitivity to detect changes in functional communication over time. Methods & Procedures: The current study systematically compared a new quantitative measure of verbal effectiveness with the current ANELT Comprehensibility scale, which is based on qualitative judgements. A total of 30 speakers of Dutch participated: 20 non-aphasic speakers and 10 aphasic patients with predominantly expressive disturbances. Outcomes & Results: Although our findings need to be replicated in a larger group of aphasic speakers, the main results suggest that the new quantitative measure of verbal effectiveness is more sensitive to detect change in verbal effectiveness over time. What is more, it can be used to derive a measure of verbal efficiency. Conclusions: The fact that both verbal effectiveness and verbal efficiency can be reliably as well as validly measured in the ANELT is of relevance to clinicians. It allows them to obtain a more complete picture of aphasic speakers' functional communication skills.
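    One plausible way to operationalize the two measures discussed above is to count the essential information units produced per ANELT scenario (effectiveness) and to divide that count by speaking time (efficiency). The sketch below assumes exactly that hypothetical definition; the scoring rules actually proposed in the paper are not reproduced here.

      def verbal_scores(essential_units_produced, speaking_time_seconds):
          """Hypothetical operationalization: effectiveness = unit count,
          efficiency = units per minute of speaking time."""
          effectiveness = essential_units_produced
          efficiency = essential_units_produced / (speaking_time_seconds / 60.0)
          return effectiveness, efficiency

      # Example: 6 essential information units produced in 45 seconds of speech.
      print(verbal_scores(6, 45))  # -> (6, 8.0)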
  • Sadakata, M., & Sekiyama, K. (2011). Enhanced perception of various linguistic features by musicians: A cross-linguistic study. Acta Psychologica, 138, 1-10. doi:10.1016/j.actpsy.2011.03.007.

    Abstract

    Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance than with identification performance. The enhanced perception was observed not only for L2 but also for L1. It was most pronounced when tested with Japanese consonant timing. These findings suggest the following: (1) musicians exhibit enhanced early acoustical analysis of speech, (2) musical training does not equally enhance the perception of all acoustic features automatically, and (3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing. Research highlights: We compared the perception of L1 and L2 speech by musicians and non-musicians. Discrimination and identification experiments examined perception of consonant timing, the quality of Japanese consonants and of Dutch vowels. We compared results for Japanese native musicians and non-musicians, as well as Dutch native musicians and non-musicians. Musicians demonstrated enhanced perception for both L1 and L2. The most pronounced effect was found for Japanese consonant timing.
  • Salomo, D., Graf, E., Lieven, E., & Tomasello, M. (2011). The role of perceptual availability and discourse context in young children’s question answering. Journal of Child Language, 38, 918-931. doi:10.1017/S0305000910000395.

    Abstract

    Three- and four-year-old children were asked predicate-focus questions ('What's X doing?') about a scene in which an agent performed an action on a patient. We varied: (i) whether (or not) the preceding discourse context, which established the patient as given information, was available for the questioner; and (ii) whether (or not) the patient was perceptually available to the questioner when she asked the question. The main finding in our study differs from those of previous studies since it suggests that children are sensitive to the perceptual context at an earlier age than they are to previous discourse context if they need to take the questioner's perspective into account. Our finding indicates that, while children are in principle sensitive to both factors, young children rely on perceptual availability when a conflict arises.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sánchez-Mora, C., Ribasés, M., Casas, M., Bayés, M., Bosch, R., Fernàndez-Castillo, N., Brunso, L., Jacobsen, K. K., Landaas, E. T., Lundervold, A. J., Gross-Lesch, S., Kreiker, S., Jacob, C. P., Lesch, K.-P., Buitelaar, J. K., Hoogman, M., Kiemeney, L. A., Kooij, J. S., Mick, E., Asherson, P., Faraone, S. V., Franke, B., Reif, A., Johansson, S., Haavik, J., Ramos-Quiroga, J. A., & Cormand, B. (2011). Exploring DRD4 and its interaction with SLC6A3 as possible risk factors for adult ADHD: A meta-analysis in four European populations. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 156, 600-612. doi:10.1002/ajmg.b.31202.

    Abstract

    Attention-deficit hyperactivity disorder (ADHD) is a common behavioral disorder affecting about 4–8% of children. ADHD persists into adulthood in around 65% of cases, either as the full condition or in partial remission with persistence of symptoms. Pharmacological, animal and molecular genetic studies support a role for genes of the dopaminergic system in ADHD due to its essential role in motor control, cognition, emotion, and reward. Based on these data, we analyzed two functional polymorphisms within the DRD4 gene (120 bp duplication in the promoter and 48 bp VNTR in exon 3) in a clinical sample of 1,608 adult ADHD patients and 2,352 controls of Caucasian origin from four European countries that had been recruited in the context of the International Multicentre persistent ADHD CollaboraTion (IMpACT). Single-marker analysis of the two polymorphisms did not reveal association with ADHD. In contrast, multiple-marker meta-analysis showed a nominal association (P  = 0.02) of the L-4R haplotype (dup120bp-48bpVNTR) with adulthood ADHD, especially with the combined clinical subtype. Since we previously described association between adulthood ADHD and the dopamine transporter SLC6A3 9R-6R haplotype (3′UTR VNTR-intron 8 VNTR) in the same dataset, we further tested for gene × gene interaction between DRD4 and SLC6A3. However, we detected no epistatic effects but our results rather suggest additive effects of the DRD4 risk haplotype and the SLC6A3 gene.
  • Sauter, D., Le Guen, O., & Haun, D. B. M. (2011). Categorical perception of emotional expressions does not require lexical categories. Emotion, 11, 1479-1483. doi:10.1037/a0025336.

    Abstract

    Does our perception of others’ emotional signals depend on the language we speak or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Schaefer, R. S., Farquhar, J., Blokland, Y., Sadakata, M., & Desain, P. (2011). Name that tune: Decoding music from the listening brain. NeuroImage, 56, 843-849. doi:10.1016/j.neuroimage.2010.05.084.

    Abstract

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.
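    A decoding pipeline of the kind described above (single-trial, time-domain ERP features fed into a multi-class classifier with cross-validation) can be sketched with standard tools. The snippet below flattens simulated epochs and uses a regularized linear classifier; this is one common choice for illustration, not the classifier, features, or data used in the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Simulated EEG epochs: 140 trials x 32 channels x 300 samples,
      # 7 musical fragments (classes) with 20 trials each.
      rng = np.random.default_rng(1)
      n_trials, n_channels, n_samples, n_classes = 140, 32, 300, 7
      X = rng.normal(size=(n_trials, n_channels, n_samples))
      y = np.repeat(np.arange(n_classes), n_trials // n_classes)
      for k in range(n_classes):
          X[y == k, k, :] += 0.3  # inject a small class-specific signal

      X_flat = X.reshape(n_trials, -1)  # time-domain feature vector per trial
      clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      scores = cross_val_score(clf, X_flat, y, cv=5)
      print(f"mean 7-class accuracy: {scores.mean():.2f} (chance ~ {1 / n_classes:.2f})")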

  • Schapper, A., & San Roque, L. (2011). Demonstratives and non-embedded nominalisations in three Papuan languages of the Timor-Alor-Pantar family. Studies in Language, 35, 380-408. doi:10.1075/sl.35.2.05sch.

    Abstract

    This paper explores the use of demonstratives in non-embedded clausal nominalisations. We present data and analysis from three Papuan languages of the Timor-Alor-Pantar family in south-east Indonesia. In these languages, demonstratives can apply to the clausal as well as to the nominal domain, contributing contrastive semantic content in assertive stance-taking and attention-directing utterances. In the Timor-Alor-Pantar constructions, meanings that are to do with spatial and discourse locations at the participant level apply to spatial, temporal and mental locations at the state or event level.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances; each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
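    The two-stage strategy described above can be illustrated with a toy control flow: a first pass over a small lexicon of major cities plus an unknown-word model, a fallback-lexicon search that proposes an N-best list of rare-city candidates when an out-of-vocabulary name is detected, and a second pass over the extended lexicon. In the sketch below, string matching via difflib merely stands in for the phone-graph search (SpeM) and the recognizer used in the paper; the city lists and function names are illustrative.

      from difflib import get_close_matches

      MAJOR_CITIES = {"amsterdam", "rotterdam", "utrecht", "eindhoven"}
      FALLBACK_LEXICON = MAJOR_CITIES | {"appingedam", "roodeschool", "ulrum", "echteld"}

      def first_pass(name):
          """Stage 1: small lexicon; None signals an OOV ('rare city') name."""
          return name if name in MAJOR_CITIES else None

      def propose_candidates(name, n_best=3):
          """Stand-in for the fallback-lexicon search: N-best similar names."""
          return get_close_matches(name, sorted(FALLBACK_LEXICON), n=n_best, cutoff=0.0)

      def second_pass(name, extended_lexicon):
          """Stage 2: re-recognize against the lexicon extended with the N-best list."""
          return get_close_matches(name, sorted(extended_lexicon), n=1, cutoff=0.0)[0]

      def recognize_city(name):
          hit = first_pass(name)
          if hit is not None:
              return hit
          candidates = propose_candidates(name)
          return second_pass(name, MAJOR_CITIES | set(candidates))

      print(recognize_city("amsterdam"))   # handled in the first pass
      print(recognize_city("appingedam"))  # OOV in pass 1, recovered in pass 2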
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
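
    The notion of early recognition via word activation lends itself to a small worked example. The sketch below is not SpeM's actual scoring; the thresholds, the normalisation, and the input numbers are assumptions chosen only to show how an absolute activation criterion and a margin over the best competitor can jointly license accepting a word before its final phone.

    # Hedged sketch of an early-recognition decision rule based on word activations
    # derived from cumulative negative log likelihoods, updated phone by phone.
    def activation(neg_log_likelihood, n_phones_seen):
        # Higher is better; normalise by the number of phones processed so far.
        return -neg_log_likelihood / max(n_phones_seen, 1)

    def early_recognition(word_nll, competitor_nll, abs_thresh=-2.0, margin=0.5):
        """word_nll / competitor_nll: cumulative negative log likelihoods after each
        phone for the target word and its strongest competitor. Returns the 1-based
        phone index at which the word is accepted, or None if it never is."""
        for i, (w, c) in enumerate(zip(word_nll, competitor_nll), start=1):
            a_word, a_comp = activation(w, i), activation(c, i)
            if a_word > abs_thresh and a_word - a_comp > margin:
                return i
        return None

    print(early_recognition([1.0, 1.8, 2.2, 2.5], [1.2, 3.0, 4.5, 6.0]))  # -> 2
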
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences, there has lately been a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the articulatory feature (AF) values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
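
    The SVM-versus-MLP comparison can be mimicked generically with off-the-shelf classifiers. The snippet below uses synthetic frames in place of the study's acoustic features and articulatory labels; the feature dimensionality, class count, network size, and the fraction of data given to the SVM are assumptions meant only to mirror the shape of the comparison, not to reproduce its results.

    # Generic sketch: train an SVM on a fraction of the data and an MLP on all of it,
    # then compare accuracy on the same multi-valued (here 5-class) feature.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=39, n_informative=20,
                               n_classes=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    svm = SVC(kernel="rbf").fit(X_tr[:500], y_tr[:500])      # SVM sees only part of the training data
    mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)      # MLP sees all of it
    print("SVM accuracy:", svm.score(X_te, y_te))
    print("MLP accuracy:", mlp.score(X_te, y_te))
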
  • Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2011). Neuronal dynamics underlying high- and low- frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572-583. doi:10.1016/j.neuron.2010.11.044.

    Abstract

    Work on animals indicates that BOLD is preferentially sensitive to local field potentials, and that it correlates most strongly with gamma band neuronal synchronization. Here we investigate how the BOLD signal in humans performing a cognitive task is related to neuronal synchronization across different frequency bands. We simultaneously recorded EEG and BOLD while subjects engaged in a visual attention task known to induce sustained changes in neuronal synchronization across a wide range of frequencies. Trial-by-trial BOLD fluctuations correlated positively with trial-by-trial fluctuations in high EEG gamma power (60–80 Hz) and negatively with alpha and beta power. Gamma power on the one hand, and alpha and beta power on the other hand, independently contributed to explaining BOLD variance. These results indicate that the BOLD-gamma coupling observed in animals can be extrapolated to humans performing a task and that neuronal dynamics underlying high- and low-frequency synchronization contribute independently to the BOLD signal.

    Additional information

    mmc1.pdf
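
    The core analysis step described above, relating trial-by-trial BOLD amplitudes to EEG power in several bands within one regression model, can be illustrated schematically. The example below uses simulated numbers, not the study's data; the band definitions and effect sizes are assumptions, and the point is only that gamma and alpha/beta regressors can each explain independent variance.

    # Schematic multiple regression of per-trial BOLD on per-trial EEG band power.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_trials = 200
    gamma = rng.standard_normal(n_trials)     # e.g. 60-80 Hz power per trial (z-scored)
    alpha = rng.standard_normal(n_trials)     # e.g. 8-12 Hz power per trial
    beta = rng.standard_normal(n_trials)      # e.g. 15-30 Hz power per trial
    bold = 0.5 * gamma - 0.4 * alpha - 0.2 * beta + rng.standard_normal(n_trials)

    X = sm.add_constant(np.column_stack([gamma, alpha, beta]))
    fit = sm.OLS(bold, X).fit()
    print(fit.params)       # positive gamma weight, negative alpha and beta weights
    print(fit.rsquared)
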
  • Schimke, S. (2011). Variable verb placement in second-language German and French: Evidence from production and elicited imitation of finite and nonfinite negated sentences. Applied Psycholinguistics, 32, 635-685. doi:10.1017/S0142716411000014.

    Abstract

    This study examines the placement of finite and nonfinite lexical verbs and finite light verbs (LVs) in semispontaneous production and elicited imitation of adult beginning learners of German and French. Theories assuming nonnativelike syntactic representations at early stages of development predict variable placement of lexical verbs and consistent placement of LVs, whereas theories assuming nativelike syntax predict variability for nonfinite verbs and consistent placement of all finite verbs. The results show that beginning learners of German have consistent preferences only for LVs. More advanced learners of German and learners of French produce and imitate finite verbs in more variable positions than nonfinite verbs. This is argued to support a structure-building view of second-language development.
  • Schoffelen, J.-M., & Gross, J. (2011). Improving the interpretability of all-to-all pairwise source connectivity analysis in MEG with nonhomogeneous smoothing. Human Brain Mapping, 32, 426-437. doi:10.1002/hbm.21031.

    Abstract

    Studying the interaction between brain regions is important to increase our understanding of brain function. Magnetoencephalography (MEG) is well suited to investigate brain connectivity, because it provides measurements of activity of the whole brain at very high temporal resolution. Typically, brain activity is reconstructed from the sensor recordings with an inverse method such as a beamformer, and subsequently a connectivity metric is estimated between predefined reference regions-of-interest (ROIs) and the rest of the source space. Unfortunately, this approach relies on a robust estimate of the relevant reference regions and on a robust estimate of the activity in those reference regions, and is not generally applicable to a wide variety of cognitive paradigms. Here, we investigate the possibility to perform all-to-all pairwise connectivity analysis, thus removing the need to define ROIs. Particularly, we evaluate the effect of nonhomogeneous spatial smoothing of differential connectivity maps. This approach is inspired by the fact that the spatial resolution of source reconstructions is typically spatially nonhomogeneous. We use this property to reduce the spatial noise in the cerebro-cerebral connectivity map, thus improving interpretability. Using extensive data simulations we show a superior detection rate and a substantial reduction in the number of spurious connections. We conclude that nonhomogeneous spatial smoothing of cerebro-cerebral connectivity maps could be an important improvement of the existing analysis tools to study neuronal interactions noninvasively.
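
    The smoothing idea at the heart of the paper can be caricatured in a few lines: each source's connectivity value is replaced by a weighted average over its neighbours, with a kernel width that varies across sources. The implementation below is not the authors' method; the Gaussian kernel, the toy source grid, and the per-source widths (which would in practice be derived from the spatial resolution of the inverse solution) are all assumptions.

    # Rough sketch of nonhomogeneous spatial smoothing of a connectivity map.
    import numpy as np

    def nonhomogeneous_smooth(values, positions, widths):
        """values: (n_sources,) differential connectivity; positions: (n_sources, 3);
        widths: (n_sources,) kernel standard deviations, one per source."""
        d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * widths[:, None] ** 2))    # row i: kernel centred on source i
        w /= w.sum(axis=1, keepdims=True)
        return w @ values

    rng = np.random.default_rng(2)
    pos = rng.uniform(-7, 7, size=(500, 3))               # toy source grid (cm)
    conn = rng.standard_normal(500)                       # noisy differential connectivity values
    smoothed = nonhomogeneous_smooth(conn, pos, widths=rng.uniform(0.5, 2.0, 500))
    print(smoothed.shape)
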
  • Schoffelen, J.-M., Poort, J., Oostenveld, R., & Fries, P. (2011). Selective movement preparation is subserved by selective increases in corticomuscular gamma-band coherence. Journal of Neuroscience, 31, 6750-6758. doi:10.1523/JNEUROSCI.4882-10.2011.

    Abstract

    Local groups of neurons engaged in a cognitive task often exhibit rhythmically synchronized activity in the gamma band, a phenomenon that likely enhances their impact on downstream areas. The efficacy of neuronal interactions may be enhanced further by interareal synchronization of these local rhythms, establishing mutually well timed fluctuations in neuronal excitability. This notion suggests that long-range synchronization is enhanced selectively for connections that are behaviorally relevant. We tested this prediction in the human motor system, assessing activity from bilateral motor cortices with magnetoencephalography and corresponding spinal activity through electromyography of bilateral hand muscles. A bimanual isometric wrist extension task engaged the two motor cortices simultaneously into interactions and coherence with their respective contralateral hand muscles. One of the hands was cued before each trial as the response hand and had to be extended further to report an unpredictable visual go cue. We found that, during the isometric hold phase, corticomuscular coherence was enhanced, spatially selective for the corticospinal connection that was effectuating the subsequent motor response. This effect was spectrally selective in the low gamma-frequency band (40–47 Hz) and was observed in the absence of changes in motor output or changes in local cortical gamma-band synchronization. These findings indicate that, in the anatomical connections between the cortex and the spinal cord, gamma-band synchronization is a mechanism that may facilitate behaviorally relevant interactions between these distant neuronal groups.
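
    The central measurement, coherence between a cortical signal and a muscle signal averaged over a low gamma band, can be sketched with simulated data. The signals below are synthetic sinusoid-plus-noise stand-ins for MEG and EMG; the sampling rate, the shared 43 Hz component, and the window length are assumptions for illustration only.

    # Simplified corticomuscular coherence estimate in the 40-47 Hz band.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(3)
    drive = np.sin(2 * np.pi * 43 * t)                    # shared low-gamma component
    cortex = drive + rng.standard_normal(t.size)          # stand-in for a motor-cortex source signal
    muscle = 0.6 * drive + rng.standard_normal(t.size)    # stand-in for rectified EMG

    f, coh = coherence(cortex, muscle, fs=fs, nperseg=1024)
    band = (f >= 40) & (f <= 47)
    print("low-gamma corticomuscular coherence:", coh[band].mean())
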
  • Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2011). Acoustic reduction in conversational Dutch: A quantitative analysis based on automatically generated segmental transcriptions [Letter to the editor]. Journal of Phonetics, 39(1), 96-109. doi:10.1016/j.wocn.2010.11.006.

    Abstract

    In spontaneous, conversational speech, words are often reduced compared to their citation forms, such that a word like yesterday may sound like [ˈjɛʃei]. The present chapter investigates such acoustic reduction. The study of reduction needs large corpora that are transcribed phonetically. The first part of this chapter describes an automatic transcription procedure used to obtain such a large phonetically transcribed corpus of Dutch spontaneous dialogues, which is subsequently used for the investigation of acoustic reduction. First, the orthographic transcriptions were adapted for automatic processing. Next, the phonetic transcription of the corpus was created by means of a forced alignment using a lexicon with multiple pronunciation variants per word. These variants were generated by applying phonological and reduction rules to the canonical phonetic transcriptions of the words. The second part of this chapter reports the results of a quantitative analysis of reduction in the corpus on the basis of the generated transcriptions and gives an inventory of segmental reductions in standard Dutch. Overall, we found that reduction is more pervasive in spontaneous Dutch than previously documented.
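
    The lexicon-building step, generating multiple pronunciation variants per word by applying reduction rules to canonical transcriptions, can be illustrated with a toy rule set. The rules and the phone notation below are invented for the example and are not the rules used in the study.

    # Hypothetical sketch: expand a canonical transcription into pronunciation
    # variants by optionally applying each reduction rule.
    def variants(canonical, rules):
        """canonical: space-separated phones; rules: functions mapping one
        transcription to a (possibly) reduced transcription."""
        forms = {canonical}
        for rule in rules:
            forms |= {rule(f) for f in forms}
        return sorted(forms)

    toy_rules = [
        lambda p: p.replace("@ n", "n"),                  # schwa deletion before /n/
        lambda p: p[:-2] if p.endswith(" t") else p,      # word-final /t/ deletion
    ]
    print(variants("l @ n t", toy_rules))
    # -> ['l @ n', 'l @ n t', 'l n', 'l n t']
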
  • Segaert, K., Menenti, L., Weber, K., & Hagoort, P. (2011). A paradox of syntactic priming: Why response tendencies show priming for passives, and response latencies show priming for actives. PLoS One, 6(10), e24209. doi:10.1371/journal.pone.0024209.

    Abstract

    Speakers tend to repeat syntactic structures across sentences, a phenomenon called syntactic priming. Although it has been suggested that repeating syntactic structures should result in speeded responses, previous research has focused on effects in response tendencies. We investigated syntactic priming effects simultaneously in response tendencies and response latencies for active and passive transitive sentences in a picture description task. In Experiment 1, there were priming effects in response tendencies for passives and in response latencies for actives. However, when participants' pre-existing preference for actives was altered in Experiment 2, syntactic priming occurred for both actives and passives in response tendencies as well as in response latencies. This is the first investigation of the effects of structure frequency on both response tendencies and latencies in syntactic priming. We discuss the implications of these data for current theories of syntactic processing.

    Additional information

    Segaert_2011_Supporting_Info.doc
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
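
    Of the two approaches compared, the genome scan meta-analysis can be illustrated most compactly: each study's genomic bins are ranked by their linkage evidence and the ranks are summed across studies, so that bins supported in several samples rise to the top. The numbers below are simulated and the bin count is an assumption; this is a caricature of the method, not the workshop analysis.

    # Coarse sketch of rank-sum genome scan meta-analysis over simulated bins.
    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(4)
    n_studies, n_bins = 4, 120
    linkage_scores = rng.standard_normal((n_studies, n_bins))
    linkage_scores[:, 37] += 2.0                          # a bin supported in every study

    ranks = np.vstack([rankdata(s) for s in linkage_scores])   # larger score -> larger rank
    summed = ranks.sum(axis=0)
    print("top bin:", int(summed.argmax()))               # bin 37 should come out at or near the top
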
  • Sekine, K. (2011). The role of gesture in the language production of preschool children. Gesture, 11(2), 148-173. doi:10.1075/gest.11.2.03sek.

    Abstract

    The present study investigates the functions of gestures in preschoolers’ descriptions of activities. Specifically, utilizing McNeill’s growth point theory (1992), I examine how gestures contribute to the creation of contrast from the immediate context in the spoken discourse of children. When preschool children describe an activity consisting of multiple actions, like playing on a slide, they often begin with the central action (e.g., sliding-down) instead of with the beginning of the activity sequence (e.g., climbing-up). This study indicates that, in descriptions of activities, gestures may be among the cues the speaker uses for forming a next idea or for repairing the temporal order of the activities described. Gestures may function for the speaker as visual feedback and contribute to the process of utterance formation and provide an index for assessing language development.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.
  • Senft, G. (1985). Kilivila: Die Sprache der Trobriander. Studium Linguistik, 17/18, 127-138.
  • Senft, G. (1985). Klassifikationspartikel im Kilivila: Glossen zu ihrer morphologischen Rolle, ihrem Inventar und ihrer Funktion in Satz und Diskurs. Linguistische Berichte, 99, 373-393.
