Publications

  • Shao, Z., Janse, E., Visser, K., & Meyer, A. S. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5: 772. doi:10.3389/fpsyg.2014.00772.

    Abstract

    This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n=82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.
  • Shayan, S., Moreira, A., Windhouwer, M., Koenig, A., & Drude, S. (2013). LEXUS 3 - a collaborative environment for multimedia lexica. In Proceedings of the Digital Humanities Conference 2013 (pp. 392-395).
  • Shayan, S., Ozturk, O., Bowerman, M., & Majid, A. (2014). Spatial metaphor in language can promote the development of cross-modal mappings in children. Developmental Science, 17(4), 636-643. doi:10.1111/desc.12157.

    Abstract

    Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a cross-modal association task. All groups, except for German children, performed significantly better than chance. German-speaking adults’ success suggests the pitch-to-thickness association can be learned by experience. But the fact that German children were at chance indicates that this learning takes time. Intriguingly, Farsi and Turkish children's performance suggests that learning cross-modal associations can be boosted through experience with consistent metaphorical mappings in the input language.
  • Shkaravska, O., Van Eekelen, M., & Tamalet, A. (2014). Collected size semantics for strict functional programs over general polymorphic lists. In U. Dal Lago, & R. Pena (Eds.), Foundational and Practical Aspects of Resource Analysis: Third International Workshop, FOPARA 2013, Bertinoro, Italy, August 29-31, 2013, Revised Selected Papers (pp. 143-159). Berlin: Springer.

    Abstract

    Size analysis can be an important part of heap consumption analysis. This paper is a part of ongoing work about typing support for checking output-on-input size dependencies for function definitions in a strict functional language. A significant restriction for our earlier results is that inner data structures (e.g. in a list of lists) all must have the same size. Here, we make a significant step forward by overcoming this limitation via the introduction of higher-order size annotations, so that varying sizes of inner data structures can be expressed. In this way the analysis becomes applicable to general, polymorphic nested lists.
  • Shkaravska, O., & Van Eekelen, M. (2014). Univariate polynomial solutions of algebraic difference equations. Journal of Symbolic Computation, 60, 15-28. doi:10.1016/j.jsc.2013.10.010.

    Abstract

    Contrary to linear difference equations, there is no general theory of difference equations of the form G(P(x−τ₁), …, P(x−τₛ)) + G₀(x) = 0, with τᵢ ∈ K, G(x₁, …, xₛ) ∈ K[x₁, …, xₛ] of total degree D ⩾ 2, and G₀(x) ∈ K[x], where K is a field of characteristic zero. This article is concerned with the following problem: given τᵢ, G and G₀, find an upper bound on the degree d of a polynomial solution P(x), if it exists. In the presented approach the problem is reduced to constructing a univariate polynomial for which d is a root. The authors formulate a sufficient condition under which such a polynomial exists. Using this condition, they give an effective bound on d, for instance, for all difference equations of the form G(P(x−a), P(x−a−1), P(x−a−2)) + G₀(x) = 0 with quadratic G, and all difference equations of the form G(P(x), P(x−τ)) + G₀(x) = 0 with G having an arbitrary degree.
  • Sidnell, J., & Enfield, N. J. (2014). Deixis and the interactional foundations of reference. In Y. Huang (Ed.), The Oxford handbook of pragmatics.
  • Sidnell, J., Kockelman, P., & Enfield, N. J. (2014). Community and social life. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 481-483). Cambridge: Cambridge University Press.
  • Sidnell, J., Enfield, N. J., & Kockelman, P. (2014). Interaction and intersubjectivity. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 343-345). Cambridge: Cambridge University Press.
  • Sidnell, J., & Enfield, N. J. (2014). The ontology of action, in interaction. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423-446). Cambridge: Cambridge University Press.
  • Silva, S., Branco, P., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). Musical phrase boundaries, wrap-up and the closure positive shift. Brain Research, 1585, 99-107. doi:10.1016/j.brainres.2014.08.025.

    Abstract

    We investigated global integration (wrap-up) processes at the boundaries of musical phrases by comparing the effects of well and non-well formed phrases on event-related potentials time-locked to two boundary points: the onset and the offset of the boundary pause. The Closure Positive Shift, which is elicited at the boundary offset, was not modulated by the quality of phrase structure (well vs. non-well formed). In contrast, the boundary onset potentials showed different patterns for well and non-well formed phrases. Our results help to specify the functional meaning of the Closure Positive Shift in music, shed light on the large-scale structural integration of musical input, and raise new hypotheses concerning shared resources between music and language.
  • Silva, S., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. Journal of Integrative Neuroscience, 13(1), 19-34. doi:10.1142/S0219635214500022.

    Abstract

    Neuroscientific and musicological approaches to music cognition indicate that listeners familiarized in the Western tonal tradition expect a musical phrase boundary at predictable time intervals. However, phrase boundary prediction processes in music remain untested. We analyzed event-related potentials (ERPs) and event-related induced power changes at the onset and offset of a boundary pause. We made comparisons with modified melodies, where the pause was omitted and filled by tones. The offset of the pause elicited a closure positive shift (CPS), indexing phrase boundary detection. The onset of the filling tones elicited significant increases in theta and beta powers. In addition, the P2 component was larger when the filling tones started than when they ended. The responses to boundary omission suggest that listeners expected to hear a boundary pause. Therefore, boundary prediction seems to coexist with boundary detection in music segmentation.
  • Simanova, I. (2014). In search of conceptual representations in the brain: Towards mind-reading. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions that correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In the first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, but they also apply less strict criteria.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders, but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%), suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Sjerps, M. J., & Smiljanic, R. (2013). Compensation for vocal tract characteristics across native and non-native languages. Journal of Phonetics, 41, 145-155. doi:10.1016/j.wocn.2013.01.005.

    Abstract

    Perceptual compensation for speaker vocal tract properties was investigated in four groups of listeners: native speakers of English and native speakers of Dutch, native speakers of Spanish with low proficiency in English, and Spanish-English bilinguals. Listeners categorized targets on a [sofo] to [sufu] continuum. Targets were preceded by sentences that were manipulated to have either a high or a low F1 contour. All listeners performed the categorization task for targets that were preceded by Spanish, English and Dutch precursors. Results show that listeners from each of the four language backgrounds compensate for speaker vocal tract properties regardless of language-specific vowel inventory properties. Listeners also compensate when they listen to stimuli in another language. The results suggest that patterns of compensation are mainly determined by auditory properties of precursor sentences.
  • Sjerps, M. J. (2013). [Contribution to NextGen VOICES survey: Science communication's future]. Science, 340 (no. 6128, online supplement). Retrieved from http://www.sciencemag.org/content/340/6128/28/suppl/DC1.

    Abstract

    One of the important challenges for the development of science communication concerns the current under-exposure of null results. I suggest that each article published in a top scientific journal can be tagged (online) with attempts to replicate it. As such, a future reader of an article will also be able to see whether replications have been attempted and how these turned out. Editors and/or reviewers decide whether a replication is of sound quality. The authors of the main article have the option to review the replication and can provide a supplementary comment with each attempt that is added. After 5 or 10 years, and provided there have been enough attempts to replicate, the authors of the main article get the opportunity to discuss/review their original study in light of the outcomes of the replications. This approach has two important strengths: 1) it would provide researchers with the opportunity to show that they deliver scientifically thorough work, but sometimes just fail to replicate the result that others have reported. This can be especially valuable for the career opportunities of promising young researchers; 2) perhaps even more importantly, the visibility of replications provides an important incentive for researchers to publish findings only if they are sure that their effects are reliable (and thereby reduces the influence of "experimenter degrees of freedom" or even outright fraud). The proposed approach will stimulate researchers to look beyond the point of publication of their studies.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2013). Evidence for precategorical extrinsic vowel normalization. Attention, Perception & Psychophysics, 75, 576-587. doi:10.3758/s13414-012-0408-7.

    Abstract

    Three experiments investigated whether extrinsic vowel normalization takes place largely at a categorical or a precategorical level of processing. Traditional vowel normalization effects in categorization were replicated in Experiment 1: Vowels taken from an [ɪ]-[ε] continuum were more often interpreted as /ɪ/ (which has a low first formant, F1) when the vowels were heard in contexts that had a raised F1 than when the contexts had a lowered F1. This was established with contexts that consisted of only two syllables. These short contexts were necessary for Experiment 2, a discrimination task that encouraged listeners to focus on the perceptual properties of vowels at a precategorical level. Vowel normalization was again found: Ambiguous vowels were more easily discriminated from an endpoint [ε] than from an endpoint [ɪ] in a high-F1 context, whereas the opposite was true in a low-F1 context. Experiment 3 measured discriminability between pairs of steps along the [ɪ]-[ε] continuum. Contextual influences were again found, but without discrimination peaks, contrary to what was predicted from the same participants' categorization behavior. Extrinsic vowel normalization therefore appears to be a process that takes place at least in part at a precategorical processing level.
  • Skiba, R. (1991). Eine Datenbank für Deutsch als Zweitsprache Materialien: Zum Einsatz von PC-Software bei Planung von Zweitsprachenunterricht. In H. Barkowski, & G. Hoff (Eds.), Berlin interkulturell: Ergebnisse einer Berliner Konferenz zu Migration und Pädagogik. (pp. 131-140). Berlin: Colloquium.
  • Skiba, R. (1998). Fachsprachenforschung in wissenschaftstheoretischer Perspektive. Tübingen: Gunter Narr.
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within-group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Sloetjes, H. (2014). ELAN: Multimedia annotation application. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 305-320). Oxford: Oxford University Press.
  • Sloetjes, H. (2013). The ELAN annotation tool. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 193-198). Frankfurt a/M: Lang.
  • Sloetjes, H. (2013). Step by step introduction in NEUROGES coding with ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 201-212). Frankfurt a/M: Lang.
  • De Smedt, K., Hinrichs, E., Meurers, D., Skadiņa, I., Sanford Pedersen, B., Navarretta, C., Bel, N., Lindén, K., Lopatková, M., Hajič, J., Andersen, G., & Lenkiewicz, P. (2014). CLARA: A new generation of researchers in common language resources and their applications. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 2166-2174).
  • De Smedt, K., & Kempen, G. (1991). Segment Grammar: A formalism for incremental sentence generation. In C. Paris, W. Swartout, & W. Mann (Eds.), Natural language generation and computational linguistics (pp. 329-349). Dordrecht: Kluwer Academic Publishers.

    Abstract

    Incremental sentence generation imposes special constraints on the representation of the grammar and the design of the formulator (the module which is responsible for constructing the syntactic and morphological structure). In the model of natural speech production presented here, a formalism called Segment Grammar is used for the representation of linguistic knowledge. We give a definition of this formalism and present a formulator design which relies on it. Next, we present an object-oriented implementation of Segment Grammar. Finally, we compare Segment Grammar with other formalisms.
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include, among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes to the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4: 528. doi:10.3389/fpsyg.2013.00528.

    Abstract

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13). (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates' eye movements were more affected by phonological overlap in online speech than low literates', with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effects of formal literacy training on language mediated visual attention. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3420-3425). Austin, TX: Cognitive Science Society.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze is partly determined by level of formal literacy training. Huettig, Singh and Mishra (2011) showed that high-literate individuals' eye gaze was closely time locked to phonological overlap between a spoken target word and items presented in a visual display. In contrast, low-literate individuals' eye gaze was not related to phonological overlap, but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behavior is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on more coarse-grained structure. This hypothesis was tested using a neural network model that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behavior similar to those observed between high and low literates emerge when models are trained on speech signals of contrasting granularity.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analysis of a data set from the literature shows that interesting dependencies in human speech recognition can be uncovered using the model.
  • Snijders, T. M., Milivojevic, B., & Kemner, C. (2013). Atypical excitation-inhibition balance in autism captured by the gamma response to contextual modulation. NeuroImage: Clinical, 3, 65-72. doi:10.1016/j.nicl.2013.06.015.

    Abstract

    Atypical visual perception in people with autism spectrum disorders (ASD) is hypothesized to stem from an imbalance in excitatory and inhibitory processes in the brain. We used neuronal oscillations in the gamma frequency range (30–90 Hz), which emerge from a balanced interaction of excitation and inhibition in the brain, to assess contextual modulation processes in early visual perception. Electroencephalography was recorded in 12 high-functioning adults with ASD and 12 age- and IQ-matched control participants. Oscillations in the gamma frequency range were analyzed in response to stimuli consisting of small line-like elements. Orientation-specific contextual modulation was manipulated by parametrically increasing the amount of homogeneously oriented elements in the stimuli. The stimuli elicited a strong steady-state gamma response around the refresh-rate of 60 Hz, which was larger for controls than for participants with ASD. The amount of orientation homogeneity (contextual modulation) influenced the gamma response in control subjects, while for subjects with ASD this was not the case. The atypical steady-state gamma response to contextual modulation in subjects with ASD may capture the link between an imbalance in excitatory and inhibitory neuronal processing and atypical visual processing in ASD.
  • Kita, S., & Dickey, L. W. (Eds.). (1998). Max Planck Institute for Psycholinguistics: Annual report 1998. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Spapé, M., Verdonschot, R. G., Van Dantzig, S., & Van Steenbergen, H. (2014). The E-Primer: An introduction to creating psychological experiments in E-Prime®. Leiden: Leiden University Press.

    Abstract

    E-Prime, the software suite by Psychology Software Tools, is used for designing, developing and running custom psychological experiments. Aimed at students and researchers alike, this book provides a much needed, down-to-earth introduction into a wide range of experiments that can be set up using E-Prime. Many tutorials are provided to teach the reader how to develop experiments typical for the broad fields of psychological and cognitive science. Apart from explaining the basic structure of E-Prime and describing how it fits into daily scientific practice, this book also offers an introduction into programming using E-Prime’s own E-Basic language. The authors guide the readers step-by-step through the software, from an elementary to an advanced level, enabling them to benefit from the enormous possibilities for experimental design offered by E-Prime.
  • Starreveld, P. A., La Heij, W., & Verdonschot, R. G. (2013). Time course analysis of the effects of distractor frequency and categorical relatedness in picture naming: An evaluation of the response exclusion account. Language and Cognitive Processes, 28(5), 633-654. doi:10.1080/01690965.2011.608026.

    Abstract

    The response exclusion account (REA), advanced by Mahon and colleagues, localises the distractor frequency effect and the semantic interference effect in picture naming at the level of the response output buffer. We derive four predictions from the REA: (1) the size of the distractor frequency effect should be identical to the frequency effect obtained when distractor words are read aloud, (2) the distractor frequency effect should not change in size when stimulus-onset asynchrony (SOA) is manipulated, (3) the interference effect induced by a distractor word (as measured from a nonword control distractor) should increase in size with increasing SOA, and (4) the word frequency effect and the semantic interference effect should be additive. The results of the picture-naming task in Experiment 1 and the word-reading task in Experiment 2 refute all four predictions. We discuss a tentative account of the findings obtained within a traditional selection-by-competition model in which both context effects are localised at the level of lexical selection.
  • Stephens, S., Hartz, S., Hoft, N., Saccone, N., Corley, R., Hewitt, J., Hopfer, C., Breslau, N., Coon, H., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Han, Y., Hansel, N., Jiang, C., Korhonen, T., Lind, P., Liu, J., Michel, M., Lyytikäinen, L.-P., Shaffer, J., Short, S., Sun, J., Teumer, A., Thompson, J., Vogelzangs, N., Vink, J., Wenzlaff, A., Wheeler, W., Yang, B.-Z., Aggen, S., Balmforth, A., Baumeister, S., Beaty, T., Benjamin, D., Bergen, A., Broms, U., Cesarini, D., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D., Foroud, T., Furberg, H., Giegling, I., Gillespie, N., Gu, F., Hall, A., Hällfors, J., Han, S., Hartmann, A., Heikkilä, K., Hickie, I., Hottenga, J., Jousilahti, P., Kaakinen, M., Kähönen, M., Koellinger, P., Kittner, S., Konte, B., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S., Mathias, R., McNeil, D., Medland, S., Montgomery, G., Murray, T., Nauck, M., North, K., Paré, P., Pergadia, M., Ruczinski, I., Salomaa, V., Viikari, J., Willemsen, G., Barnes, K., Boerwinkle, E., Boomsma, D., Caporaso, N., Edenberg, H., Francks, C., Gelernter, J., Grabe, H., Hops, H., Jarvelin, M.-R., Johannesson, M., Kendler, K., Lehtimäki, T., Magnusson, P., Marazita, M., Marchini, J., Mitchell, B., Nöthen, M., Penninx, B., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N., Schwartz, A., Shete, S., Spitz, M., Swan, G., Völzke, H., Veijola, J., Wei, Q., Amos, C., Canon, D., Grucza, R., Hatsukami, D., Heath, A., Johnson, E., Kaprio, J., Madden, P., Martin, N., Stevens, V., Weiss, R., Kraft, P., Bierut, L., & Ehringer, M. (2013). Distinct loci in the CHRNA5/CHRNA3/CHRNB4 gene cluster are associated with onset of regular smoking. Genetic Epidemiology, 37, 846-859. doi:10.1002/gepi.21760.

    Abstract

    Neuronal nicotinic acetylcholine receptor (nAChR) genes (CHRNA5/CHRNA3/CHRNB4) have been reproducibly associated with nicotine dependence, smoking behaviors, and lung cancer risk. Of the few reports that have focused on early smoking behaviors, association results have been mixed. This meta-analysis examines early smoking phenotypes and SNPs in the gene cluster to determine: (1) whether the most robust association signal in this region (rs16969968) for other smoking behaviors is also associated with early behaviors, and/or (2) if additional statistically independent signals are important in early smoking. We focused on two phenotypes: age of tobacco initiation (AOI) and age of first regular tobacco use (AOS). This study included 56,034 subjects (41 groups) spanning nine countries and evaluated five SNPs including rs1948, rs16969968, rs578776, rs588765, and rs684513. Each dataset was analyzed using a centrally generated script. Meta-analyses were conducted from summary statistics. AOS yielded significant associations with SNPs rs578776 (beta = 0.02, P = 0.004), rs1948 (beta = 0.023, P = 0.018), and rs684513 (beta = 0.032, P = 0.017), indicating protective effects. There were no significant associations for the AOI phenotype. Importantly, rs16969968, the most replicated signal in this region for nicotine dependence, cigarettes per day, and cotinine levels, was not associated with AOI (P = 0.59) or AOS (P = 0.92). These results provide important insight into the complexity of smoking behavior phenotypes, and suggest that association signals in the CHRNA5/A3/B4 gene cluster affecting early smoking behaviors may be different from those affecting the mature nicotine dependence phenotype.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

    OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m²" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of that phenotype which appropriately accounts for covariation in height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC) was performed. RESULTS: GWAS based on BMI[x] yielded marked differences in the genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at the genome-wide significance level (rs11676272: 0.28 kg/m^3.1 change per allele G (0.19, 0.38), P = 6 × 10^-9). In contrast, they showed marginal evidence of association with conventional BMI (rs11676272: 0.25 kg/m² (0.15, 0.35), P = 6 × 10^-7). Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences to that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant and it was strongly correlated with expression of this gene. Our work highlights the importance of well understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits.

    Additional information

    oby20840-sup-0001-suppinfo.docx
  • Stewart, L., Verdonschot, R. G., Nasralla, P., & Lanipekun, J. (2013). Action–perception coupling in pianists: Learned mappings or spatial musical association of response codes (SMARC) effect? Quarterly Journal of Experimental Psychology, 66(1), 37-50. doi:10.1080/17470218.2012.687385.

    Abstract

    The principle of common coding suggests that a joint representation is formed when actions are repeatedly paired with a specific perceptual event. Musicians are occupationally specialized with regard to the coupling between actions and their auditory effects. In the present study, we employed a novel paradigm to demonstrate automatic action–effect associations in pianists. Pianists and nonmusicians pressed keys according to aurally presented number sequences. Numbers were presented at pitches that were neutral, congruent, or incongruent with respect to pitches that would normally be produced by such actions. Response time differences were seen between congruent and incongruent sequences in pianists alone. A second experiment was conducted to determine whether these effects could be attributed to the existence of previously documented spatial/pitch compatibility effects. In a “stretched” version of the task, the pitch distance over which the numbers were presented was enlarged to a range that could not be produced by the hand span used in Experiment 1. The finding of a larger response time difference between congruent and incongruent trials in the original, standard, version compared with the stretched version, in pianists, but not in nonmusicians, indicates that the effects obtained are, at least partially, attributable to learned action effects.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., & Sidnell, J. (Eds.). (2013). The handbook of conversation analysis. Malden, MA: Wiley-Blackwell.

    Abstract

    Presenting a comprehensive, state-of-the-art overview of theoretical and descriptive research in the field, The Handbook of Conversation Analysis brings together contributions by leading international experts to provide an invaluable information resource and reference for scholars of social interaction across the areas of conversation analysis, discourse analysis, linguistic anthropology, interpersonal communication, discursive psychology and sociolinguistics. It is ideal as an introduction to the field for upper-level undergraduates and as an in-depth review of the latest developments for graduate-level students and established scholars. Five sections outline the history and theory, methods, fundamental concepts, and core contexts in the study of conversation, as well as topics central to conversation analysis. Written by international conversation analysis experts, the book covers a wide range of topics and disciplines, from reviewing underlying structures of conversation to describing conversation analysis' relationship to anthropology, communication, linguistics, psychology, and sociology.
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
  • Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., Van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.

    Abstract

    Human referential communication is often thought of as coding-decoding a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged already before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent from transient sensorimotor behavior.
  • Stolk, A., Todorovic, A., Schoffelen, J.-M., & Oostenveld, R. (2013). Online and offline tools for head movement compensation in MEG. NeuroImage, 68, 39-48. doi:10.1016/j.neuroimage.2012.11.047.

    Abstract

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments.
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • Sumner, M., Kurumada, C., Gafter, R., & Casillas, M. (2013). Phonetic variation and the recognition of words with pronunciation variants. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3486-3492). Austin, TX: Cognitive Science Society.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • De Swart, P., & Van Bergen, G. (2014). Unscrambling the lexical nature of weak definites. In A. Aguilar-Guevara, B. Le Bruyn, & J. Zwarts (Eds.), Weak referentiality (pp. 287-310). Amsterdam: Benjamins.

    Abstract

    We investigate how the lexical nature of weak definites influences the phenomenon of direct object scrambling in Dutch. Earlier experiments have indicated that weak definites are more resistant to scrambling than strong definites. We examine how the notion of weak definiteness used in this experimental work can be reduced to lexical connectedness. We explore four different ways of quantifying the relation between a direct object and the verb. Our results show that predictability of a verb given the object (verb cloze probability) provides the best fit to the weak/strong distinction used in the earlier experiments.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.

    Abstract

    To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we trained Dutch participants repeatedly over the course of a month of training to articulate a set of novel disyllabic input strings written in Greek script to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole word unit) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that reading recombined syllables of the trained pseudowords showed similar brain activation to trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization. NeuroImage, 84, 265-278. doi:10.1016/j.neuroimage.2013.08.023.

    Abstract

    The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words’ phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words’ phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24 hours’ delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding.
  • Tamaoka, K., Saito, N., Kiyama, S., Timmer, K., & Verdonschot, R. G. (2014). Is pitch accent necessary for comprehension by native Japanese speakers? - An ERP investigation. Journal of Neurolinguistics, 27(1), 31-40. doi:10.1016/j.jneuroling.2013.08.001.

    Abstract

    Not unlike the tonal system in Chinese, Japanese habitually attaches pitch accents to the production of words. However, in contrast to Chinese, few homophonic word-pairs are really distinguished by pitch accents (Shibata & Shibata, 1990). This predicts that pitch accent plays a small role in lexical selection for Japanese language comprehension. The present study investigated whether native Japanese speakers necessarily use pitch accent in the processing of accent-contrasted homophonic pairs (e.g., ame [LH] for 'candy' and ame [HL] for 'rain'), measuring electroencephalographic (EEG) potentials. Electrophysiological evidence (i.e., N400) was obtained when a word was semantically incorrect for a given context but not for incorrectly accented homophones. This suggests that pitch accent indeed plays a minor role when understanding Japanese.
  • Tan, Y., Martin, R. C., & Van Dyke, J. (2013). Verbal WM capacities in sentence comprehension: Evidence from aphasia. Procedia - Social and Behavioral Sciences, 94, 108-109. doi:10.1016/j.sbspro.2013.09.052.
  • Tanner, D., Nicol, J., & Brehm, L. (2014). The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76, 195-215. doi:10.1016/j.jml.2014.07.003.

    Abstract

    Attraction interference in language comprehension and production may result from common or different processes. In the present paper, we investigate attraction interference during language comprehension, focusing on the contexts in which interference arises and the time-course of these effects. Using evidence from event-related brain potentials (ERPs) and sentence judgment times, we show that agreement attraction in comprehension is best explained as morphosyntactic interference during memory retrieval. This stands in contrast to attraction as a message-level process involving the representation of the subject NP's number features, which is a strong contributor to attraction in production. We thus argue that the cognitive antecedents of agreement attraction in comprehension are non-identical with those of attraction in production, and moreover, that attraction in comprehension is primarily a consequence of similarity-based interference in cue-based memory retrieval processes. We suggest that mechanisms responsible for attraction during language comprehension are a subset of those involved in language production.
  • Ten Oever, S., Sack, A. T., Wheat, K. L., Bien, N., & Van Atteveldt, N. (2013). Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs. Frontiers in Psychology, 4: 331. doi:10.3389/fpsyg.2013.00331.

    Abstract

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2014). Comparing reaction time sequences from human participants and computational models. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 462-466).

    Abstract

    This paper addresses the question of how to compare reaction times computed by a computational model of speech comprehension with the reaction times observed from participants. The question is based on the observation that reaction time sequences differ substantially per participant, which raises the issue of how exactly the model is to be assessed. Part of the variation in reaction time sequences is caused by so-called local speed: the current reaction time correlates to some extent with a number of previous reaction times, due to slow variations in attention, fatigue, etc. This paper proposes a method, based on time series analysis, to filter the observed reaction times in order to separate out the local speed effects. Results show that after such filtering the between-participant correlations increase, as does the average correlation between participant and model. The presented technique provides insights into relevant aspects that are to be taken into account when comparing reaction time sequences.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., & Zion-Golumbic, E. (2014). Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia, 63, 43-50. doi:10.1016/j.neuropsychologia.2014.08.008.

    Abstract

    Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension do not start from the speech signal itself, but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulation decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor for the average judgment and reaction time for each word.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terwisscha van Scheltinga, A. F., Bakker, S. C., Van Haren, N. E., Boos, H. B., Schnack, H. G., Cahn, W., Hoogman, M., Zwiers, M. P., Fernandez, G., Franke, B., Hulshoff Pol, H. E., & Kahn, R. S. (2014). Association study of fibroblast growth factor genes and brain volumes in schizophrenic patients and healthy controls. Psychiatric Genetics, 24, 283-284. doi:10.1097/YPG.0000000000000057.
  • Theakston, A., Coates, A., & Holler, J. (2014). Handling agents and patients: Representational cospeech gestures help children comprehend complex syntactic constructions. Developmental Psychology, 50(7), 1973-1984. doi:10.1037/a0036694.

    Abstract

    Gesture is an important precursor of children’s early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children’s comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old (M = 4 years 6 months) children witnessed 2-participant actions described using the infrequent object-cleft-construction (OCC; It was the dog that the cat chased). Half saw an experimenter accompanying her descriptions with gestures representing the 2 participants and indicating the direction of action; the remaining children did not witness gesture. Children who witnessed gestures showed better comprehension of the OCC than those who did not witness gestures, both in and beyond the immediate physical context, but this benefit was restricted to the oldest 4-year-olds. In Study 2, a further group of older 4-year-old children (M = 4 years 7 months) witnessed the same 2-participant actions described by an experimenter and accompanied by gestures, but the gesture represented only the 2 participants and not the direction of the action. Again, a benefit of gesture was observed on subsequent comprehension of the OCC. We interpret these findings as demonstrating that representational cospeech gestures can help children comprehend complex linguistic structures by highlighting the roles played by the participants in the event.

  • Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., Toro, R., Jahanshad, N., Schumann, G., Franke, B., Wright, M. J., Martin, N. G., Agartz, I., Alda, M., Alhusaini, S., Almasy, L., Almeida, J., Alpert, K., Andreasen, N. C., Andreassen, O. A., and 269 more (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging and Behavior, 8(2), 153-182. doi:10.1007/s11682-013-9269-5.

    Abstract

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA’s first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Thorgrimsson, G. (2014). Infants' understanding of communication as participants and observers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2014). Infants’ expectations about gestures and actions in third-party interactions. Frontiers in Psychology, 5: 321. doi:10.3389/fpsyg.2014.00321.

    Abstract

    We investigated 14-month-old infants’ expectations toward a third party addressee of communicative gestures and an instrumental action. Infants’ eye movements were tracked as they observed a person (the Gesturer) point, direct a palm-up request gesture, or reach toward an object, and another person (the Addressee) respond by grasping it. Infants’ looking patterns indicate that when the Gesturer pointed or used the palm-up request, infants anticipated that the Addressee would give the object to the Gesturer, suggesting that they ascribed a motive of request to the gestures. In contrast, when the Gesturer reached for the object, and in a control condition where no action took place, the infants did not anticipate the Addressee’s response. The results demonstrate that infants’ recognition of communicative gestures extends to others’ interactions, and that infants can anticipate how third-party addressees will respond to others’ gestures.
  • Tilot, A. K., Gaugler, M. K., Yu, Q., Romigh, T., Yu, W., Miller, R. H., Frazier, T. W., & Eng, C. (2014). Germline disruption of Pten localization causes enhanced sex-dependent social motivation and increased glial production. Human Molecular Genetics, 23(12), 3212-3227. doi:10.1093/hmg/ddu031.

    Abstract

    PTEN Hamartoma Tumor Syndrome (PHTS) is an autosomal-dominant genetic condition underlying a subset of autism spectrum disorder (ASD) with macrocephaly. Caused by germline mutations in PTEN, PHTS also causes increased risks of multiple cancers via dysregulation of the PI3K and MAPK signaling pathways. Conditional knockout models have shown that neural Pten regulates social behavior, proliferation and cell size. Although much is known about how the intracellular localization of PTEN regulates signaling in cancer cell lines, we know little of how PTEN localization influences normal brain physiology and behavior. To address this, we generated a germline knock-in mouse model of cytoplasm-predominant Pten and characterized its behavioral and cellular phenotypes. The homozygous Ptenm3m4 mice have decreased total Pten levels including a specific drop in nuclear Pten and exhibit region-specific increases in brain weight. The Ptenm3m4 model displays sex-specific increases in social motivation, poor balance and normal recognition memory—a profile reminiscent of some individuals with high functioning ASD. The cytoplasm-predominant protein caused cellular hypertrophy limited to the soma and led to increased NG2 cell proliferation and accumulation of glia. The animals also exhibit significant astrogliosis and microglial activation, indicating a neuroinflammatory phenotype. At the signaling level, Ptenm3m4 mice show brain region-specific differences in Akt activation. These results demonstrate that differing alterations to the same autism-linked gene can cause distinct behavioral profiles. The Ptenm3m4 model is the first murine model of inappropriately elevated social motivation in the context of normal cognition and may expand the range of autism-related behaviors replicated in animal models.
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Tooley, K., Konopka, A. E., & Watson, D. (2014). Can intonational phrase structure be primed (like syntactic structure)? Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 348-363. doi:10.1037/a0034900.

    Abstract

    In 3 experiments, we investigated whether intonational phrase structure can be primed. In all experiments, participants listened to sentences in which the presence and location of intonational phrase boundaries were manipulated such that the recording included either no intonational phrase boundaries, a boundary in a structurally dispreferred location, a boundary in a preferred location, or boundaries in both locations. In Experiment 1, participants repeated the sentences to test whether they would reproduce the prosodic structure they had just heard. Experiments 2 and 3 used a prime–target paradigm to evaluate whether the intonational phrase structure heard in the prime sentence might influence that of a novel target sentence. Experiment 1 showed that participants did repeat back sentences that they had just heard with the original intonational phrase structure, yet Experiments 2 and 3 found that exposure to intonational phrase boundaries on prime trials did not influence how a novel target sentence was prosodically phrased. These results suggest that speakers may retain the intonational phrasing of a sentence, but this effect is not long-lived and does not generalize across unrelated sentences. Furthermore, these findings provide no evidence that intonational phrase structure is formulated during a planning stage that is separate from other sources of linguistic information.
  • Tornero, D., Wattananit, S., Madsen, M. G., Koch, P., Wood, J., Tatarishvili, J., Mine, Y., Ge, R., Monni, E., Devaraju, K., Hevner, R. F., Bruestle, O., Lindvall, O., & Kokaia, Z. (2013). Human induced pluripotent stem cell-derived cortical neurons integrate in stroke-injured cortex and improve functional recovery. Brain, 136(12), 3561-3577. doi:10.1093/brain/awt278.

    Abstract

    Stem cell-based approaches to restore function after stroke through replacement of dead neurons require the generation of specific neuronal subtypes. Loss of neurons in the cerebral cortex is a major cause of stroke-induced neurological deficits in adult humans. Reprogramming of adult human somatic cells to induced pluripotent stem cells is a novel approach to produce patient-specific cells for autologous transplantation. Whether such cells can be converted to functional cortical neurons that survive and give rise to behavioural recovery after transplantation in the stroke-injured cerebral cortex is not known. We have generated progenitors in vitro, expressing specific cortical markers and giving rise to functional neurons, from long-term self-renewing neuroepithelial-like stem cells, produced from adult human fibroblast-derived induced pluripotent stem cells. At 2 months after transplantation into the stroke-damaged rat cortex, the cortically fated cells showed less proliferation and more efficient conversion to mature neurons with morphological and immunohistochemical characteristics of a cortical phenotype and higher axonal projection density as compared with non-fated cells. Pyramidal morphology and localization of the cells expressing the cortex-specific marker TBR1 in a certain layered pattern provided further evidence supporting the cortical phenotype of the fated, grafted cells, and electrophysiological recordings demonstrated their functionality. Both fated and non-fated cell-transplanted groups showed bilateral recovery of the impaired function in the stepping test compared with vehicle-injected animals. The behavioural improvement at this early time point was most likely not due to neuronal replacement and reconstruction of circuitry. At 5 months after stroke in immunocompromised rats, there was no tumour formation and the grafted cells exhibited electrophysiological properties of mature neurons with evidence of integration in host circuitry. Our findings show, for the first time, that human skin-derived induced pluripotent stem cells can be differentiated to cortical neuronal progenitors, which survive, differentiate to functional neurons and improve neurological outcome after intracortical implantation in a rat stroke model.
  • Torreira, F., Roberts, S. G., & Hammarström, H. (2014). Functional trade-off between lexical tone and intonation: Typological evidence from polar-question marking. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 100-103).

    Abstract

    Tone languages are often reported to make use of utterance-level intonation as well as of lexical tone. We test the alternative hypotheses that a) the coexistence of lexical tone and utterance-level intonation in tone languages results in a diminished functional load for intonation, and b) that lexical tone and intonation can coexist in tone languages without undermining each other’s functional load in a substantial way. In order to do this, we collected data from two large typological databases, and performed mixed-effects and phylogenetic regression analyses controlling for genealogical and areal factors to estimate the probability of a language exhibiting grammatical devices for encoding polar questions given its status as a tonal or an intonation-only language. Our analyses indicate that, while both tone and intonational languages tend to develop grammatical devices for marking polar questions above chance level, tone languages do this at a significantly higher frequency, with estimated probabilities ranging between 0.88 and 0.98. This statistical bias provides cross-linguistic empirical support to the view that the use of tonal features to mark lexical contrasts leads to a diminished functional load for utterance-level intonation.
  • Torreira, F., Simonet, M., & Hualde, J. I. (2014). Quasi-neutralization of stress contrasts in Spanish. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 197-201).

    Abstract

    We investigate the realization and discrimination of lexical stress contrasts in pitch-unaccented words in phrase-medial position in Spanish, a context in which intonational pitch accents are frequently absent. Results from production and perception experiments show that in this context durational and intensity cues to stress are produced by speakers and used by listeners above chance level. However, due to substantial amounts of phonetic overlap between stress categories in production, and of numerous errors in the identification of stress categories in perception, we suggest that, in the absence of intonational cues, Spanish speakers engaged in online language use must rely on contextual information in order to distinguish stress contrasts.
  • Tosato, S., Zanoni, M., Bonetto, C., Tozzi, F., Francks, C., Ira, E., Tomassi, S., Bertani, M., Rujescu, D., Giegling, I., St Clair, D., Tansella, M., Ruggeri, M., & Muglia, P. (2014). No association between NRG1 and ErbB4 genes and psychopathological symptoms of Schizophrenia. Neuromolecular Medicine, 16, 742-751. doi:10.1007/s12017-014-8323-9.

    Abstract

    Neuregulin 1 (NRG1) and v-erb-a erythroblastic leukemia viral oncogene homolog 4 (ErbB4) have been extensively studied in schizophrenia susceptibility because of their pivotal role in key neurodevelopmental processes. One of the reasons for the inconsistencies in results could be the fact that the phenotype investigated has mostly the diagnosis of schizophrenia per se, which is widely heterogeneous, both clinically and biologically. In the present study we tested, in a large cohort of 461 schizophrenia patients recruited in Scotland, whether several SNPs in NRG1 and/or ErbB4 are associated with schizophrenia symptom dimensions as evaluated by the Positive and Negative Syndrome Scale (PANSS). We then followed up nominally significant results in a second cohort of 439 schizophrenia subjects recruited in Germany. Using linear regression, we observed two different groups of polymorphisms in NRG1 gene: one showing a nominal association with higher scores of the PANSS positive dimension and the other one with higher scores of the PANSS negative dimension. Regarding ErbB4, a small cluster located in the 5' end of the gene was detected, showing nominal association mainly with negative, general and total dimensions of the PANSS. These findings suggest that some regions of NRG1 and ErbB4 are functionally involved in biological processes that underlie some of the phenotypic manifestations of schizophrenia. Because of the lack of significant association after correction for multiple testing, our analyses should be considered as exploratory and hypothesis generating for future studies.
  • Trilsbeek, P., & Koenig, A. (2014). Increasing the future usage of endangered language archives. In D. Nathan, & P. Austin (Eds.), Language Documentation and Description vol 12 (pp. 151-163). London: SOAS. Retrieved from http://www.elpublishing.org/PID/142.
  • Trippel, T., Broeder, D., Durco, M., & Ohren, O. (2014). Towards automatic quality assessment of component metadata. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3851-3856).

    Abstract

    Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.
  • Tsuji, S., Bergmann, C., & Cristia, A. (2014). Community-Augmented Meta-Analyses: Toward Cumulative Data Assessment. Perspectives on Psychological Science, 9(6), 661-665. doi:10.1177/1745691614552498.

    Abstract

    We present the concept of a community-augmented meta-analysis (CAMA), a simple yet novel tool that significantly facilitates the accumulation and evaluation of previous studies within a specific scientific field. A CAMA is a combination of a meta-analysis and an open repository. Like a meta-analysis, it is centered around a psychologically relevant topic and includes methodological details and standardized effect sizes. As in a repository, data do not remain undisclosed and static after publication but can be used and extended by the research community, as anyone can download all information and can add new data via simple forms. Based on our experiences with building three CAMAs, we illustrate the concept and explain how CAMAs can facilitate improving our research practices via the integration of past research, the accumulation of knowledge, and the documentation of file-drawer studies.
  • Tsuji, S., & Cristia, A. (2013). Fifty years of infant vowel discrimination research: What have we learned? Journal of the Phonetic Society of Japan, 17(3), 1-11.
  • Tsuji, S., & Cristia, A. (2014). Perceptual attunement in vowels: A meta-analysis. Developmental Psychobiology, 56(2), 179-191. doi:10.1002/dev.21179.

    Abstract

    Although the majority of evidence on perceptual narrowing in speech sounds is based on consonants, most models of infant speech perception generalize these findings to vowels, assuming that vowel perception improves for vowel sounds that are present in the infant's native language within the first year of life, and deteriorates for non-native vowel sounds over the same period of time. The present meta-analysis contributes to assessing to what extent these descriptions are accurate in the first comprehensive quantitative meta-analysis of perceptual narrowing in infant vowel discrimination, including results from behavioral, electrophysiological, and neuroimaging methods applied to infants 0–14 months of age. An analysis of effect sizes for native and non-native vowel discrimination over the first year of life revealed that they changed with age in opposite directions, being significant by about 6 months of age.
  • Tsuji, S., Nishikawa, K., & Mazuka, R. (2014). Segmental distributions and consonant-vowel association patterns in Japanese infant- and adult-directed speech. Journal of Child Language, 41, 1276-1304. doi:10.1017/S0305000913000469.

    Abstract

    Japanese infant-directed speech (IDS) and adult-directed speech (ADS) were compared on their segmental distributions and consonant-vowel association patterns. Consistent with findings in other languages, a higher ratio of segments that are generally produced early was found in IDS compared to ADS: more labial consonants and low-central vowels, but fewer fricatives. Consonant-vowel associations also favored the early-produced labial-central, coronal-front, coronal-central, and dorsal-back patterns. On the other hand, clear language-specific patterns included a higher frequency of dorsals, affricates, geminates and moraic nasals in IDS. These segments are frequent in adult Japanese, but not in the early productions or the IDS of other studied languages. In combination with previous results, the current study suggests that both fine-tuning (an increased use of early-produced segments) and highlighting (an increased use of language-specifically relevant segments) might modify IDS on segmental level.
  • Tsuji, S. (2014). The road to native listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2014). Use of syntax in perceptual compensation for phonological reduction. Language and Speech, 57, 68-85. doi:10.1177/0023830913479106.

    Abstract

    Listeners resolve ambiguity in speech by consulting context. Extensive research on this issue has largely relied on continua of sounds constructed to vary incrementally between two phonemic endpoints. In this study we presented listeners instead with phonetic ambiguity of a kind with which they have natural experience: varying degrees of word-final /t/-reduction. In two experiments, Dutch listeners decided whether or not the verb in a sentence such as Maar zij ren(t) soms ‘But she sometimes run(s)’ ended in /t/. In Dutch, presence versus absence of final /t/ distinguishes third- from first-person singular present-tense verbs. Acoustic evidence for /t/ varied from clear to absent, and immediately preceding phonetic context was consistent with more versus less likely deletion of /t/. In both experiments, listeners reported more /t/s in sentences in which /t/ would be syntactically correct. In Experiment 1, the disambiguating syntactic information preceded the target verb, as above, while in Experiment 2, it followed the verb. The syntactic bias was greater for fast than for slow responses in Experiment 1, but no such difference appeared in Experiment 2. We conclude that syntactic information does not directly influence pre-lexical processing, but is called upon in making phoneme decisions.
  • Turco, G. (2014). Contrasting opposite polarity in Germanic and Romance languages: Verum focus and affirmative particles in native speakers and advanced L2 learners. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Turco, G., Dimroth, C., & Braun, B. (2013). Intonational means to mark verum focus in German and French. Language and Speech, 56(4), 461-491. doi:10.1177/0023830912460506.

    Abstract

    German and French differ in a number of aspects. Regarding the prosody-pragmatics interface, German is said to have a direct focus-to-accent mapping, which is largely absent in French – owing to strong structural constraints. We used a semi-spontaneous dialogue setting to investigate the intonational marking of Verum Focus, a focus on the polarity of an utterance in the two languages (e.g. the child IS tearing the banknote as an opposite claim to the child is not tearing the banknote). When Verum Focus applies to auxiliaries, pragmatic aspects (i.e. highlighting the contrast) directly compete with structural constraints (e.g. avoiding an accent on phonologically weak elements such as monosyllabic function words). Intonational analyses showed that auxiliaries were predominantly accented in German, as expected. Interestingly, we found a high number of (as yet undocumented) focal accents on phrase-initial auxiliaries in French Verum Focus contexts. When French accent patterns were equally distributed across information structural contexts, relative prominence (in terms of peak height) between initial and final accents was shifted towards initial accents in Verum Focus compared to non-Verum Focus contexts. Our data hence suggest that French also may mark Verum Focus by focal accents but that this tendency is partly overridden by strong structural constraints.
  • Turco, G., Braun, B., & Dimroth, C. (2014). When contrasting polarity, the Dutch use particles, Germans intonation. Journal of Pragmatics, 62, 94-106. doi:10.1016/j.pragma.2013.09.020.

    Abstract

    This study compares how Dutch and German, two closely related languages, signal a shift from a negative to a positive polarity in two contexts, when contrasting the polarity relative to a different topic situation (In my picture the man washes the car following after In my picture the man does not wash the car, henceforth polarity contrast) and when correcting the polarity of a proposition (The man washes the car following after The man does not wash the car, henceforth polarity correction). Production data show that in both contexts German speakers produced Verum focus (i.e., a high-falling pitch accent on the finite verb), while Dutch speakers mostly used the accented affirmative particle wel. This shows that even lexically and syntactically close languages behave differently when it comes to signalling certain pragmatic functions. Furthermore, we found that in polarity correction contexts, both affirmative particles and Verum focus were realized with stronger prosodic prominence. This difference was found in both languages and might be due to a secondary (syntagmatic) effect of the information structure of the utterance (absence or presence of a contrastive topic).
  • Tzekov, R., Quezada, A., Gautier, M., Biggins, D., Frances, C., Mouzon, B., Jamison, J., Mullan, M., & Crawford, F. (2014). Repetitive mild traumatic brain injury causes optic nerve and retinal damage in a mouse model. Journal of Neuropathology and Experimental Neurology, 73(4), 345-361. doi:10.1097/NEN.0000000000000059.

    Abstract

    There is increasing evidence that long-lasting morphologic and functional consequences can be present in the human visual system after repetitive mild traumatic brain injury (r-mTBI). The exact location and extent of the damage in this condition are not well understood. Using a recently developed mouse model of r-mTBI, we assessed the effects on the retina and optic nerve using histology and immunohistochemistry, electroretinography (ERG), and spectral-domain optical coherence tomography (SD-OCT) at 10 and 13 weeks after injury. Control mice received repetitive anesthesia alone (r-sham). We observed decreased optic nerve diameters and increased cellularity and areas of demyelination in optic nerves in r-mTBI versus r-sham mice. There were concomitant areas of decreased cellularity in the retinal ganglion cell layer and approximately 67% decrease in brain-specific homeobox/POU domain protein 3A-positive retinal ganglion cells in retinal flat mounts. Furthermore, SD-OCT demonstrated a detectable thinning of the inner retina; ERG demonstrated a decrease in the amplitude of the photopic negative response without any change in a- or b-wave amplitude or timing. Thus, the ERG and SD-OCT data correlated well with changes detected by morphometric, histologic, and immunohistochemical methods, thereby supporting the use of these noninvasive methods in the assessment of visual function and morphology in clinical cases of mTBI.
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.
  • Valtersson, E., & Torreira, F. (2014). Rising intonation in spontaneous French: How well can continuation statements and polar questions be distinguished? In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 785-789).

    Abstract

    This study investigates whether a clear distinction can be made between the prosody of continuation statements and polar questions in conversational French, which are both typically produced with final rising intonation. We show that the two utterance types can be distinguished over chance level by several pitch, duration, and intensity cues. However, given the substantial amount of phonetic overlap and the nature of the observed differences between the two utterance types (i.e. overall F0 scaling, final intensity drop and degree of final lengthening), we propose that variability in the phonetic detail of intonation rises in French is due to the effects of interactional factors (e.g. turn-taking context, type of speech act) rather than to the existence of two distinct rising intonation contour types in this language.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Leeuwen, E. J. C., Cronin, K. A., & Haun, D. B. M. (2014). A group-specific arbitrary tradition in chimpanzees (Pan troglodytes). Animal Cognition, 17, 1421-1425. doi:10.1007/s10071-014-0766-8.

    Abstract

    Social learning in chimpanzees has been studied extensively and it is now widely accepted that chimpanzees have the capacity to learn from conspecifics through a multitude of mechanisms. Very few studies, however, have documented the existence of spontaneously emerged 'traditions' in chimpanzee communities. While the rigor of experimental studies is helpful to investigate social learning mechanisms, documentation of naturally occurring traditions is necessary to understand the relevance of social learning in the real lives of animals. In this study, we report on chimpanzees spontaneously copying a seemingly non-adaptive behaviour ("grass-in-ear behaviour"). The behaviour entailed chimpanzees selecting a stiff, straw-like blade of grass, inserting the grass into one of their own ears, adjusting the position, and then leaving it in their ear during subsequent activities. Using a daily focal follow procedure, over the course of one year, we observed 8 (out of 12) group members engaging in this peculiar behaviour. Importantly, in the 3 neighbouring groups of chimpanzees (n=82), this behaviour was only observed once, indicating that ecological factors were not determiners of the prevalence of this behaviour. These observations show that chimpanzees have a tendency to copy each other's behaviour, even when the adaptive value of the behaviour is presumably absent.
  • Van Leeuwen, E. J. C., & Haun, D. B. M. (2013). Conformity in nonhuman primates: Fad or fact? Evolution and Human Behavior, 34, 1-7. doi:10.1016/j.evolhumbehav.2012.07.005.

    Abstract

    Majority influences have long been a subject of great interest for social psychologists and, more recently, for researchers investigating social influences in nonhuman primates. Although this empirical endeavor has culminated in the conclusion that some ape and monkey species show “conformist” tendencies, the current approach seems to suffer from two fundamental limitations: (a) majority influences have not been operationalized in accord with any of the existing definitions, thereby compromising the validity of cross-species comparisons, and (b) the results have not been systematically scrutinized in light of alternative explanations. In this review, we aim to address these limitations theoretically. First, we will demonstrate how the experimental designs used in nonhuman primate studies cannot test for conformity unambiguously and address alternative explanations and potential confounds for the presented results in the form of primacy effects, frequency exposure, and perception ambiguity. Second, we will show how majority influences have been defined differently across disciplines and, therefore, propose a set of definitions in order to streamline majority influence research, where conformist transmission and conformity will be put forth as operationalizations of the overarching denominator majority influences. Finally, we conclude with suggestions to foster the study of majority influences by clarifying the empirical scope of each proposed definition, exploring compatible research designs and highlighting how majority influences are inherently contingent on situational trade-offs.
  • Van Leeuwen, E. J. C., & Haun, D. B. M. (2014). Conformity without majority? The case for demarcating social from majority influences. Animal Behaviour, 96, 187-194. doi:10.1016/j.anbehav.2014.08.004.

    Abstract

    In this review, we explore the extent to which the recent evidence for conformity in nonhuman animals may alternatively be explained by the animals' preference for social information regardless of the number of individuals demonstrating the respective behaviour. Conformity as a research topic originated in human psychology and has been described as the phenomenon in which individuals change their behaviour to match the behaviour displayed by the majority of group members. Recent studies have aimed to investigate the same process in nonhuman animals; however, most of the adopted designs have not been able to control for social influences independent of any majority influence and some studies have not even incorporated a majority in their designs. This begs the question to what extent the ‘conformity interpretation’ is preliminary and should be revisited in light of animals' general susceptibility to social influences. Similarly, demarcating social from majority influences sheds new light on the original findings in human psychology and motivates reinterpretation of the reported behavioural patterns in terms of social instead of majority influences. Conformity can have profound ramifications for individual fitness and group dynamics; identifying the exact source responsible for animals' behavioural adjustments is essential for understanding animals' learning biases and interpreting cross-species data in terms of evolutionary processes.
