Publications

  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Peeters, D., & Ozyurek, A. (2016). This and that revisited: A social and multimodal approach to spatial demonstratives. Frontiers in Psychology, 7: 222. doi:10.3389/fpsyg.2016.00222.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1997). A dynamic role of the medial temporal lobe during retrieval of declarative memory in man. NeuroImage, 6, 1-11.

    Abstract

    Understanding the role of the medial temporal lobe (MTL) in learning and memory is an important problem in cognitive neuroscience. Memory and learning processes that depend on the function of the MTL and related diencephalic structures (e.g., the anterior and mediodorsal thalamic nuclei) are defined as declarative. We have studied the MTL activity as indicated by regional cerebral blood flow with positron emission tomography and statistical parametric mapping during recall of abstract designs in a less practiced memory state as well as in a well-practiced (well-encoded) memory state. The results showed an increased activity of the MTL bilaterally (including parahippocampal gyrus extending into hippocampus proper, as well as anterior lingual and anterior fusiform gyri) during retrieval in the less practiced memory state compared to the well-practiced memory state, indicating a dynamic role of the MTL in retrieval during the learning processes. The results also showed that the activation of the MTL decreases as the subjects learn to draw abstract designs from memory, indicating a changing role of the MTL during recall in the earlier stages of acquisition compared to the well-encoded declarative memory state.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore, the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and from an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left-lateralized frontoparietal and anterior temporal activations, while surface-based, perceptually oriented processing (shallow instruction) yielded right-lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing, and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Petras, K., Ten Oever, S., & Jansma, B. M. (2016). The effect of distance on moral engagement: Event related potentials and alpha power are sensitive to perspective in a virtual shooting task. Frontiers in Psychology, 6: 2008. doi:10.3389/fpsyg.2015.02008.

    Abstract

    In a shooting video game we investigated whether increased distance reduces moral conflict. We measured and analyzed the event-related potential (ERP), including the N2 component, which has previously been linked to cognitive conflict from competing decision tendencies. In a modified Go/No-go task designed to trigger moral conflict, participants had to shoot suddenly appearing human-like avatars in a virtual reality scene. The scene was seen either from an ego perspective, with targets appearing directly in front of the participant, or from a bird's view, where targets were seen from above and more distant. To control for low-level visual features, we added a visually identical control condition in which the instruction to shoot was replaced by an instruction to detect. ERP waveforms showed differences between the two tasks as early as in the N1 time-range, with higher N1 amplitudes for the close perspective in the shoot task. Additionally, we found that pre-stimulus alpha power was significantly decreased in the ego perspective compared to the bird's view for the shoot task but not for the detect task. In the N2 time window, we observed main amplitude effects for response (No-go > Go) and distance (ego > bird perspective) but no interaction with task type (shoot vs. detect). We argue that the pre-stimulus and N1 effects can be explained by reduced attention and arousal in the distance condition when people are instructed to shoot. These results indicate reduced moral engagement for increased distance. The lack of interaction in the N2 across tasks suggests that at that time point response execution dominates. We discuss potential implications for real-life shooting situations, especially considering recent developments in drone shootings, which are by definition carried out from a distant view.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1997). Stylistic variation at the “single-word” stage: Relations between maternal speech characteristics and children's vocabulary composition and usage. Child Development, 68(5), 807-819. doi:10.1111/j.1467-8624.1997.tb01963.x.

    Abstract

    In this study we test a number of different claims about the nature of stylistic variation at the “single-word” stage by examining the relation between variation in early vocabulary composition, variation in early language use, and variation in the structural and functional properties of mothers' child-directed speech. Maternal-report and observational data were collected for 26 children at 10, 50, and 100 words. These were then correlated with a variety of different measures of maternal speech at 10 words. The results show substantial variation in the percentage of common nouns and unanalyzed phrases in children's vocabularies, and significant relations between this variation and the way in which language is used by the child. They also reveal significant relations between the way in which mothers use language at 10 words and the way in which their children use language at 50 words, and between certain formal properties of mothers' speech at 10 words and the percentage of common nouns and unanalyzed phrases in children's early vocabularies. However, most of these relations disappear when an attempt is made to control for possible effects of the child on the mother at Time 1. The exception is a significant negative correlation between mothers' tendency to produce speech that illustrates word boundaries and the percentage of unanalyzed phrases at 50 and 100 words. This suggests that mothers whose speech provides the child with information about where new words begin and end tend to have children with few unanalyzed phrases in their early vocabularies.
  • Poletiek, F. H., & Olfers, K. J. F. (2016). Authentication by the crowd: How lay students identify the style of a 17th century artist. CODART e-Zine, 8. Retrieved from http://ezine.codart.nl/17/issue/57/artikel/19-21-june-madrid/?id=349#!/page/3.
  • Poletiek, F. H. (1997). De wet 'bijzondere opnemingen in psychiatrische ziekenhuizen' aan de cijfers getoetst. Maandblad voor Geestelijke Volksgezondheid, 4, 349-361.
  • Poletiek, F. H. (in preparation). Inside the juror: The psychology of juror decision-making [Review of De geest van de jury (1997)].
  • Poletiek, F. H., Fitz, H., & Bocanegra, B. R. (2016). What baboons can (not) tell us about natural language grammars. Cognition, 151, 108-112. doi:10.1016/j.cognition.2015.04.016.

    Abstract

    Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low-level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference that is consistent with center-embedded sequences over other types of sequences. We argue that the baboons’ response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.’s (2012) experiment shows that the baboons’ behavior is driven by low-level mechanisms, it is not clear how the animal behavior reported bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low-level mechanisms, and (2) the baboons’ behavior is driven by low-level stimulus-response learning, as Rey et al. propose. But is the second evidence for the first? We will discuss in what ways this study can and cannot give evidential value for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies in order to understand features of the human linguistic system.
  • Poort, E. D., Warren, J. E., & Rodd, J. M. (2016). Recent experience with cognates and interlingual homographs in one language affects subsequent processing in another language. Bilingualism: Language and Cognition, 19(1), 206-212. doi:10.1017/S1366728915000395.

    Abstract

    This experiment shows that recent experience in one language influences subsequent processing of the same word-forms in a different language. Dutch–English bilinguals read Dutch sentences containing Dutch–English cognates and interlingual homographs, which were presented again 16 minutes later in isolation in an English lexical decision task. Priming produced faster responses for the cognates but slower responses for the interlingual homographs. These results show that language switching can influence bilingual speakers at the level of individual words, and require models of bilingual word recognition (e.g., BIA+) to allow access to word meanings to be modulated by recent experience.
  • Pouw, W., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Augmenting instructional animations with a body analogy to help children learn about physical systems. Frontiers in Psychology, 7: 860. doi:10.3389/fpsyg.2016.00860.

    Abstract

    We investigated whether augmenting instructional animations with a body analogy (BA) would improve 10- to 13-year-old children’s learning about class-1 levers. Children with a lower level of general math skill who learned with an instructional animation that provided a BA of the physical system showed higher accuracy on a lever problem-solving reaction time task than children studying the instructional animation without this BA. Additionally, learning with a BA led to a higher speed–accuracy trade-off during the transfer task for children with a lower math skill, which provided additional evidence that especially this group is likely to be affected by learning with a BA. However, overall accuracy and solving speed on the transfer task were not affected by learning with or without this BA. These results suggest that providing children with a BA during animation study provides a stepping-stone for understanding mechanical principles of a physical system, which may prove useful for instructional designers. Yet, because the BA does not seem effective for all children, nor for all tasks, the degree of effectiveness of body analogies should be studied further. Future research, we conclude, should be more sensitive to the necessary degree of analogous mapping between the body and physical systems, and whether this mapping is effective for reasoning about more complex instantiations of such physical systems.
  • Pouw, W., Eielts, C., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Does (non‐)meaningful sensori‐motor engagement promote learning with animated physical systems? Mind, Brain and Education, 10(2), 91-104. doi:10.1111/mbe.12105.

    Abstract

    Previous research indicates that sensori‐motor experience with physical systems can have a positive effect on learning. However, it is not clear whether this effect is caused by mere bodily engagement or by the intrinsically meaningful information that such interaction affords in performing the learning task. We investigated (N = 74), through the use of a Wii Balance Board, whether different forms of physical engagement that were either meaningfully, non‐meaningfully, or minimally related to the learning content would be beneficial (or detrimental) to learning about the workings of seesaws from instructional animations. The results were inconclusive, indicating that motoric competency on lever problem solving did not significantly differ between conditions, nor were response speed and transfer performance affected. These findings suggest that adults' implicit and explicit knowledge about physical systems is stable and not easily affected by (contradictory) sensori‐motor experiences. Implications for embodied learning are discussed.
  • Pouw, W., & Hostetter, A. B. (2016). Gesture as predictive action. Reti, Saperi, Linguaggi: Italian Journal of Cognitive Sciences, 3, 57-80. doi:10.12832/83918.

    Abstract

    Two broad approaches have dominated the literature on the production of speech-accompanying gestures. On the one hand, there are approaches that aim to explain the origin of gestures by specifying the mental processes that give rise to them. On the other, there are approaches that aim to explain the cognitive function that gestures have for the gesturer or the listener. In the present paper we aim to reconcile both approaches in one single perspective that is informed by a recent sea change in cognitive science, namely, Predictive Processing Perspectives (PPP; Clark 2013b; 2015). We start with the idea put forth by the Gesture as Simulated Action (GSA) framework (Hostetter, Alibali 2008). Under this view, the mental processes that give rise to gesture are re-enactments of sensori-motor experiences (i.e., simulated actions). We show that such anticipatory sensori-motor states and the constraints put forth by the GSA framework can be understood as top-down kinesthetic predictions that function in a broader predictive machinery as proposed by PPP. By establishing this alignment, we aim to show how gestures come to fulfill a genuine cognitive function above and beyond the mental processes that give rise to gesture.
  • Pouw, W., Myrto-Foteini, M., Van Gog, T., & Paas, F. (2016). Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity. Cognitive Processing, 17, 269-277. doi:10.1007/s10339-016-0757-6.

    Abstract

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.
  • Ramenzoni, V. C., & Liszkowski, U. (2016). The social reach: 8-month-olds reach for unobtainable objects in the presence of another person. Psychological Science, 27(9), 1278-1285. doi:10.1177/0956797616659938.

    Abstract

    Linguistic communication builds on prelinguistic communicative gestures, but the ontogenetic origins and complexities of these prelinguistic gestures are not well known. The current study tested whether 8-month-olds, who do not yet point communicatively, use instrumental actions for communicative purposes. In two experiments, infants reached for objects when another person was present and when no one else was present; the distance to the objects was varied. When alone, the infants reached for objects within their action boundaries and refrained from reaching for objects out of their action boundaries; thus, they knew about their individual action efficiency. However, when a parent (Experiment 1) or a less familiar person (Experiment 2) sat next to them, the infants selectively increased their reaching for out-of-reach objects. The findings reveal that before they communicate explicitly through pointing gestures, infants use instrumental actions with the apparent expectation that a partner will adopt and complete their goals.
  • Ravignani, A., Delgado, T., & Kirby, S. (2016). Musical evolution in the lab exhibits rhythmic universals. Nature Human Behaviour, 1: 0007. doi:10.1038/s41562-016-0007.

    Abstract

    Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals, here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm, although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution.

    Additional information

    Supplementary information Raw data
  • Ravignani, A., & Cook, P. F. (2016). The evolutionary biology of dance without frills. Current Biology, 26(19), R878-R879. doi:10.1016/j.cub.2016.07.076.

    Abstract

    Recently psychologists have taken up the question of whether dance is reliant on unique human adaptations, or whether it is rooted in neural and cognitive mechanisms shared with other species [1,2]. In its full cultural complexity, human dance clearly has no direct analog in animal behavior. Most definitions of dance include the consistent production of movement sequences timed to an external rhythm. While not sufficient for dance, modes of auditory-motor timing, such as synchronization and entrainment, are experimentally tractable constructs that may be analyzed and compared between species. In an effort to assess the evolutionary precursors to entrainment and social features of human dance, Laland and colleagues [2] have suggested that dance may be an incidental byproduct of adaptations supporting vocal or motor imitation — referred to here as the ‘imitation and sequencing’ hypothesis. In support of this hypothesis, Laland and colleagues rely on four convergent lines of evidence drawn from behavioral and neurobiological research on dance behavior in humans and rhythmic behavior in other animals. Here, we propose a less cognitive, more parsimonious account for the evolution of dance. Our ‘timing and interaction’ hypothesis suggests that dance is scaffolded off of broadly conserved timing mechanisms allowing both cooperative and antagonistic social coordination.
  • Ravignani, A., Fitch, W. T., Hanke, F. D., Heinrich, T., Hurgitsch, B., Kotz, S. A., Scharff, C., Stoeger, A. S., & de Boer, B. (2016). What pinnipeds have to say about human speech, music, and the evolution of rhythm. Frontiers in Neuroscience, 10: 274. doi:10.3389/fnins.2016.00274.

    Abstract

    Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research for the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, showing interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions in terms of species to test and empirical data needed.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001), "Effect of socioeconomic status on aphasia severity and recovery". Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations of the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning with a brief review of literacy and aphasia research and the complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which focus on methodological issues and the importance of taking normative values into consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Richter, N., Tiddeman, B., & Haun, D. (2016). Social Preference in Preschoolers: Effects of Morphological Self-Similarity and Familiarity. PLoS One, 11(1): e0145443. doi:10.1371/journal.pone.0145443.

    Abstract

    Adults prefer to interact with others that are similar to themselves. Even slight facial self-resemblance can elicit trust towards strangers. Here we investigate if preschoolers at the age of 5 years already use facial self-resemblance when they make social judgments about others. We found that, in the absence of any additional knowledge about prospective peers, children preferred those who look subtly like themselves over complete strangers. Thus, subtle morphological similarities trigger social preferences well before adulthood.
  • Roberts, S. G., & Verhoef, T. (2016). Double-blind reviewing at EvoLang 11 reveals gender bias. Journal of Language Evolution, 1(2), 163-167. doi:10.1093/jole/lzw009.

    Abstract

    The impact of introducing double-blind reviewing in the most recent Evolution of Language conference is assessed. The ranking of papers is compared between EvoLang 11 (double-blind review) and EvoLang 9 and 10 (single-blind review). Main effects were found for first author gender by conference. The results mirror some findings in the literature on the effects of double-blind review, suggesting that it helps reduce a bias against female authors.

    Additional information

    SI.pdf
  • Robinson, E. B., St Pourcain, B., Anttila, V., Kosmicki, J. A., Bulik-Sullivan, B., Grove, J., Maller, J., Samocha, K. E., Sanders, S. J., Ripke, S., Martin, J., Hollegaard, M. V., Werge, T., Hougaard, D. M., iPSYCH-SSI-Broad Autism Group, Neale, B. M., Evans, D. M., Skuse, D., Mortensen, P. B., Borglum, A. D., Ronald, A., Smith, G. D., & Daly, M. J. (2016). Genetic risk for autism spectrum disorders and neuropsychiatric variation in the general population. Nature Genetics, 48, 552-555. doi:10.1038/ng.3529.

    Abstract

    Almost all genetic risk factors for autism spectrum disorders (ASDs) can be found in the general population, but the effects of this risk are unclear in people not ascertained for neuropsychiatric symptoms. Using several large ASD consortium and population-based resources (total n > 38,000), we find genome-wide genetic links between ASDs and typical variation in social behavior and adaptive functioning. This finding is evidenced through both LD score correlation and de novo variant analysis, indicating that multiple types of genetic risk for ASDs influence a continuum of behavioral and developmental traits, the severe tail of which can result in diagnosis with an ASD or other neuropsychiatric disorder. A continuum model should inform the design and interpretation of studies of neuropsychiatric disease biology.

    Additional information

    ng.3529-S1.pdf
  • Rodenas-Cuadrado, P., Pietrafusa, N., Francavilla, T., La Neve, A., Striano, P., & Vernes, S. C. (2016). Characterisation of CASPR2 deficiency disorder - a syndrome involving autism, epilepsy and language impairment. BMC Medical Genetics, 17: 8. doi:10.1186/s12881-016-0272-8.

    Abstract

    Background

    Heterozygous mutations in CNTNAP2 have been identified in patients with a range of complex phenotypes including intellectual disability, autism and schizophrenia. However, heterozygous CNTNAP2 mutations are also found in the normal population. Conversely, homozygous mutations are rare in patient populations and have not been found in any unaffected individuals.
    Case presentation

    We describe a consanguineous family carrying a deletion in CNTNAP2 predicted to abolish function of its protein product, CASPR2. Homozygous family members display epilepsy, facial dysmorphisms, severe intellectual disability and impaired language. We compared these patients with previously reported individuals carrying homozygous mutations in CNTNAP2 and identified a highly recognisable phenotype.
    Conclusions

    We propose that CASPR2 loss produces a syndrome involving early-onset refractory epilepsy, intellectual disability, language impairment and autistic features that can be recognized as CASPR2 deficiency disorder. Further screening for homozygous patients meeting these criteria, together with detailed phenotypic and molecular investigations, will be crucial for understanding the contribution of CNTNAP2 to normal and disrupted development.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch–English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A., Piai, V., Garrido Rodriguez, G., & Chwilla, D. J. (2016). Electrophysiology of Cross-Language Interference and Facilitation in Picture Naming. Cortex, 76, 1-16. doi:10.1016/j.cortex.2015.12.003.

    Abstract

    Disagreement exists about how bilingual speakers select words, in particular, whether words in another language compete, or competition is restricted to a target language, or no competition occurs. Evidence that competition occurs but is restricted to a target language comes from response time (RT) effects obtained when speakers name pictures in one language while trying to ignore distractor words in another language. Compared to unrelated distractor words, RT is longer when the picture name and distractor are semantically related, but RT is shorter when the distractor is the translation of the name of the picture in the other language. These effects suggest that distractor words from another language do not compete themselves but activate their counterparts in the target language, thereby yielding the semantic interference and translation facilitation effects. Here, we report an event-related brain potential (ERP) study testing the prediction that priming underlies both of these effects. The RTs showed semantic interference and translation facilitation effects. Moreover, the picture-word stimuli yielded an N400 response, whose amplitude was smaller on semantic and translation trials than on unrelated trials, providing evidence that interference and facilitation priming underlie the RT effects. We present the results of computer simulations showing the utility of a within-language competition account of our findings.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In Roelofs (Roelofs, A., 1992a. Cognition, 42, 107-142; Roelofs, A., 1993. Cognition, 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification (syllables are constructed on the fly rather than stored with lexical items), (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
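    The abstract's list of key features lends itself to a small worked illustration. The following toy Python sketch is not the WEAVER implementation itself; it only combines, under invented assumptions, two of the listed ingredients: spreading activation over a hypothetical word-form network and a ratio-based competitive selection among syllable-program nodes. All node names, connection weights, and parameters are made up for illustration.

    # Toy sketch (not WEAVER): spreading activation plus ratio-based competitive
    # selection of syllable programs. All nodes, weights, and parameters invented.
    import numpy as np

    nodes = ["word:hamster", "seg:h", "seg:a", "seg:m", "syl:ham", "syl:ster"]
    idx = {n: i for i, n in enumerate(nodes)}

    # directed connection weights: word node -> segment nodes -> syllable programs
    W = np.zeros((len(nodes), len(nodes)))
    for src, dst, w in [("word:hamster", "seg:h", 1.0),
                        ("word:hamster", "seg:a", 1.0),
                        ("word:hamster", "seg:m", 1.0),
                        ("seg:h", "syl:ham", 0.6),
                        ("seg:a", "syl:ham", 0.6),
                        ("seg:m", "syl:ham", 0.6),
                        ("seg:m", "syl:ster", 0.2)]:  # weak spurious competitor link
        W[idx[src], idx[dst]] = w

    decay, rate, eps = 0.4, 0.5, 1e-6
    external_input = (np.arange(len(nodes)) == idx["word:hamster"]).astype(float)
    a = np.zeros(len(nodes))

    for t in range(1, 11):
        # activation spreads along W while all nodes decay; the target word node
        # keeps receiving external input
        a = (1 - decay) * a + rate * (a @ W) + external_input
        # competitive selection among syllable-program nodes via a simple
        # activation ratio (a stand-in for a formalism that yields response times)
        cands = np.array([a[idx["syl:ham"]], a[idx["syl:ster"]]]) + eps
        p_ham = cands[0] / cands.sum()
        print(f"t={t:2d}  act(ham)={cands[0]:.2f}  act(ster)={cands[1]:.2f}  "
              f"P(select 'ham')={p_ham:.2f}")

    As activation flows from the word node through its segments, the intended syllable program comes to dominate its competitor, which conveys the intuition behind competitive selection driving response times; none of this reflects the actual WEAVER equations.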
  • Rojas-Berscia, L. M. (2016). Lóxoro, traces of a contemporary Peruvian genderlect. Borealis: An International Journal of Hispanic Linguistics, 5, 157-170.

    Abstract

    Not long after the premiere of Loxoro in 2011, a short film by Claudia Llosa which presents the problems the transgender community faces in the capital of Peru, a new language variety became visible for the first time to Lima society. Lóxoro [ˈlok.so.ɾo] or Húngaro [ˈuŋ.ga.ɾo], as its speakers call it, is a language spoken by transsexuals and the gay community of Peru. The first clues about its existence were given by a comedian, Fernando Armas, in the mid-1990s; however, it is said to have appeared no earlier than the 1960s. Following previous work on gay languages by Baker (2002) and on language and society (cf. Halliday 1978), the main aim of the present article is to provide a first sketch of this language in its phonological, morphological, lexical and sociological aspects, based on a small corpus extracted from Llosa's film and from natural dialogues in Peruvian TV news programmes, in order to classify this variety within modern sociolinguistic models (cf. Muysken 2010) and argue for its “anti-language” (cf. Halliday 1978) nature.
  • Rossi, G., & Zinken, J. (2016). Grammar and social agency: The pragmatics of impersonal deontic statements. Language, 92(4), e296-e325. doi:10.1353/lan.2016.0083.

    Abstract

    Sentence and construction types generally have more than one pragmatic function. Impersonal deontic declaratives such as ‘it is necessary to X’ assert the existence of an obligation or necessity without tying it to any particular individual. This family of statements can accomplish a range of functions, including getting another person to act, explaining or justifying the speaker’s own behavior as he or she undertakes to do something, or even justifying the speaker’s behavior while simultaneously getting another person to help. How is an impersonal deontic declarative fit for these different functions? And how do people know which function it has in a given context? We address these questions using video recordings of everyday interactions among speakers of Italian and Polish. Our analysis results in two findings. The first is that the pragmatics of impersonal deontic declaratives is systematically shaped by (i) the relative responsibility of participants for the necessary task and (ii) the speaker’s nonverbal conduct at the time of the statement. These two factors influence whether the task in question will be dealt with by another person or by the speaker, often giving the statement the force of a request or, alternatively, of an account of the speaker’s behavior. The second finding is that, although these factors systematically influence their function, impersonal deontic declaratives maintain the potential to generate more complex interactions that go beyond a simple opposition between requests and accounts, where participation in the necessary task may be shared, negotiated, or avoided. This versatility of impersonal deontic declaratives derives from their grammatical makeup: by being deontic and impersonal, they can both mobilize or legitimize an act by different participants in the speech event, while their declarative form does not constrain how they should be responded to. These features make impersonal deontic declaratives a special tool for the management of social agency.
  • Rowbotham, S. J., Holler, J., Wearden, A., & Lloyd, D. M. (2016). I see how you feel: Recipients obtain additional information from speakers’ gestures about pain. Patient Education and Counseling, 99(8), 1333-1342. doi:10.1016/j.pec.2016.03.007.

    Abstract

    Objective

    Despite the need for effective pain communication, pain is difficult to verbalise. Co-speech gestures frequently add information about pain that is not contained in the accompanying speech. We explored whether recipients can obtain additional information from gestures about the pain that is being described.
    Methods

    Participants (n = 135) viewed clips of pain descriptions under one of four conditions: 1) Speech Only; 2) Speech and Gesture; 3) Speech, Gesture and Face; and 4) Speech, Gesture and Face plus Instruction (short presentation explaining the pain information that gestures can depict). Participants provided free-text descriptions of the pain that had been described. Responses were scored for the amount of information obtained from the original clips.
    Findings

    Participants in the Instruction condition obtained the most information, while those in the Speech Only condition obtained the least (all comparisons p<.001).
    Conclusions

    Gestures produced during pain descriptions provide additional information about pain that recipients are able to pick up without detriment to their uptake of spoken information.
    Practice implications

    Healthcare professionals may benefit from instruction in gestures to enhance uptake of information about patients’ pain experiences.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English speaking children acquire wh-questions is determined by two interlocking linguistic factors; the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language, 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language, 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rubio-Fernández, P., Cummins, C., & Tian, Y. (2016). Are single and extended metaphors processed differently? A test of two Relevance-Theoretic accounts. Journal of Pragmatics, 94, 15-28. doi:10.1016/j.pragma.2016.01.005.

    Abstract

    Carston (2010) proposes that metaphors can be processed via two different routes. In line with the standard Relevance-Theoretic account of loose use, single metaphors are interpreted by a local pragmatic process of meaning adjustment, resulting in the construction of an ad hoc concept. In extended metaphorical passages, by contrast, the reader switches to a second processing mode because the various semantic associates in the passage are mutually reinforcing, which makes the literal meaning highly activated relative to possible meaning adjustments. In the second processing mode the literal meaning of the whole passage is metarepresented and entertained as an ‘imaginary world’ and the intended figurative implications are derived later in processing. The results of three experiments comparing the interpretation of the same target expressions across literal, single-metaphorical and extended-metaphorical contexts, using self-paced reading (Experiment 1), eye-tracking during natural reading (Experiment 2) and cued recall (Experiment 3), offered initial support to Carston's distinction between the processing of single and extended metaphors. We end with a comparison between extended metaphors and allegories, and make a call for further theoretical and experimental work to increase our understanding of the similarities and differences between the interpretation and processing of different figurative uses, single and extended.
  • Rubio-Fernández, P. (2016). How redundant are redundant color adjectives? An efficiency-based analysis of color overspecification. Frontiers in Psychology, 7: 153. doi:10.3389/fpsyg.2016.00153.

    Abstract

    Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives (RCAs) are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of RCAs: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce RCAs when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). RCAs are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation.
  • Rubio-Fernández, P., & Grassmann, S. (2016). Metaphors as second labels: Difficult for preschool children? Journal of Psycholinguistic Research, 45, 931-944. doi:10.1007/s10936-015-9386-y.

    Abstract

    This study investigates the development of two cognitive abilities that are involved in metaphor comprehension: implicit analogical reasoning and assigning an unconventional label to a familiar entity (as in Romeo’s ‘Juliet is the sun’). We presented 3- and 4-year-old children with literal object-requests in a pretense setting (e.g., ‘Give me the train with the hat’). Both age-groups succeeded in a baseline condition that used building blocks as props (e.g., placed either on the front or the rear of a train engine) and only required spatial analogical reasoning to interpret the referential expression. Both age-groups performed significantly worse in the critical condition, which used familiar objects as props (e.g., small dogs as pretend hats) and required both implicit analogical reasoning and assigning second labels. Only the 4-year-olds succeeded in this condition. These results offer a new perspective on young children’s difficulties with metaphor comprehension in the preschool years.
  • Rubio-Fernández, P., & Geurts, B. (2016). Don’t mention the marble! The role of attentional processes in false-belief tasks. Review of Philosophy and Psychology, 7, 835-850. doi:10.1007/s13164-015-0290-z.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between two human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • San Roque, L. (2016). 'Where' questions and their responses in Duna (Papua New Guinea). Open Linguistics, 2(1), 85-104. doi:10.1515/opli-2016-0005.

    Abstract

    Despite their central role in question formation, content interrogatives in spontaneous conversation remain relatively under-explored cross-linguistically. This paper outlines the structure of ‘where’ expressions in Duna, a language spoken in Papua New Guinea, and examines where-questions in a small Duna data set in terms of their frequency, function, and the responses they elicit. Questions that ask ‘where?’ have been identified as a useful tool in studying the language of space and place, and, in the Duna case and elsewhere, show high frequency and functional flexibility. Although where-questions formulate place as an information gap, they are not always answered through direct reference to canonical places. While some question types may be especially “socially costly” (Levinson 2012), asking ‘where’ perhaps provides a relatively innocuous way of bringing a particular event or situation into focus.
  • Sánchez-Fernández, M., & Rojas-Berscia, L. M. (2016). Vitalidad lingüística de la lengua paipai de Santa Catarina, Baja California. LIAMES, 16(1), 157-183. doi:10.20396/liames.v16i1.8646171.

    Abstract

    In the last few decades, little to nothing has been said about the sociolinguistic situation of Yuman languages in Mexico. To address this lack of studies, we present a first study of linguistic vitality in Paipai as it is spoken in Santa Catarina, Baja California, Mexico. Since languages such as Mexican Spanish and Ko’ahl coexist with Paipai in the same ecology, both are part of the study as well. This first approach proceeds along two axes: on the one hand, it provides a theoretical framework that explains the sociolinguistic dynamics in the ecology of the language (Mufwene 2001); on the other hand, it offers a quantitative study based on MSF (Maximum Shared Facility) (Terborg & García 2011), which describes the state of linguistic vitality of Paipai, enriched by qualitative information collected in situ.
  • Sassenhagen, J., & Alday, P. M. (2016). A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests. Brain and Language, 162, 42-45. doi:10.1016/j.bandl.2016.08.001.

    Abstract

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: significance is interpreted as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a nonsignificant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.
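    As a concrete illustration of the contrast drawn here (declaring stimulus sets "matched" on the basis of a nonsignificant test versus modeling the nuisance variable directly in a regression), the following minimal Python sketch uses invented data and variable names (condition, word_length, rt); it is not the authors' survey or analysis code.

    # Illustrative sketch (invented data): the criticized practice vs. the
    # regression alternative of including the nuisance variable as a covariate.
    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 40  # items per stimulus set (hypothetical)
    df = pd.DataFrame({
        "condition": np.repeat(["A", "B"], n),
        # word length still differs slightly between the two sets after "matching"
        "word_length": np.concatenate([rng.normal(5.0, 1.0, n),
                                       rng.normal(5.4, 1.0, n)]),
    })
    # simulate response times that depend on both condition and word length
    df["rt"] = (600
                + 20 * (df["condition"] == "B")
                + 15 * df["word_length"]
                + rng.normal(0, 30, 2 * n))

    # Criticized practice: a nonsignificant t-test on the nuisance variable is
    # read as evidence that the sets are matched (absence of a difference).
    t, p = stats.ttest_ind(df.loc[df.condition == "A", "word_length"],
                           df.loc[df.condition == "B", "word_length"])
    print(f"t-test on word length: p = {p:.2f} (nonsignificance is not evidence of absence)")

    # Alternative discussed in the paper: estimate the condition effect while
    # adjusting for the nuisance variable within the same regression model.
    model = smf.ols("rt ~ condition + word_length", data=df).fit()
    print(model.summary().tables[1])

    Whether or not the matching test reaches significance, the regression route makes the nuisance variable part of the inference about the effect of interest rather than a separate pass/fail gate.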
  • Sauppe, S. (2016). Verbal semantics drives early anticipatory eye movements during the comprehension of verb-initial sentences. Frontiers in Psychology, 7: 95. doi:10.3389/fpsyg.2016.00095.

    Abstract

    Studies on anticipatory processes during sentence comprehension often focus on the prediction of postverbal direct objects. In subject-initial languages (the target of most studies so far), however, the position in the sentence, the syntactic function, and the semantic role of arguments are often conflated. For example, in the sentence “The frog will eat the fly” the syntactic object (“fly”) is at the same time also the last word and the patient argument of the verb. It is therefore not apparent which kind of information listeners orient to for predictive processing during sentence comprehension. A visual world eye tracking study on the verb-initial language Tagalog (Austronesian) tested what kind of information listeners use to anticipate upcoming postverbal linguistic input. The grammatical structure of Tagalog makes it possible to test whether listeners' anticipatory gaze behavior is guided by predictions of the linear order of words, by syntactic functions (e.g., subject/object), or by semantic roles (agent/patient). Participants heard sentences of the type “Eat frog fly” or “Eat fly frog” (both meaning “The frog will eat the fly”) while looking at displays containing an agent referent (“frog”), a patient referent (“fly”) and a distractor. The verb carried morphological marking that allowed the order and syntactic function of agent and patient to be inferred. After having heard the verb, listeners fixated on the agent irrespective of its syntactic function or position in the sentence. While hearing the first-mentioned argument, listeners fixated on the corresponding referent in the display and then initiated saccades to the last-mentioned referent before it was encountered. The results indicate that listeners used verbal semantics to identify referents and their semantic roles early; information about word order or syntactic functions did not influence anticipatory gaze behavior directly after the verb was heard. In this verb-initial language, event semantics takes early precedence during the comprehension of sentences, while arguments are anticipated temporally more local to when they are encountered. The current experiment thus helps to better understand anticipation during language processing by employing linguistic structures not available in previously studied subject-initial languages.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2016). L1 and L2 Distance Effects in Learning L3 Dutch. Language Learning, 66, 224-256. doi:10.1111/lang.12150.

    Abstract

    Many people speak more than two languages. How do languages acquired earlier affect the learnability of additional languages? We show that linguistic distances between speakers' first (L1) and second (L2) languages and their third (L3) language play a role. Larger distances from the L1 to the L3 and from the L2 to the L3 correlate with lower degrees of L3 learnability. The evidence comes from L3 Dutch speaking proficiency test scores obtained by candidates who speak a diverse set of L1s and L2s. Lexical and morphological distances between the L1s of the learners and Dutch explained 47.7% of the variation in proficiency scores. Lexical and morphological distances between the L2s of the learners and Dutch explained 32.4% of the variation in proficiency scores in multilingual learners. Cross-linguistic differences require language learners to bridge varying linguistic gaps between their L1 and L2 competences and the target language.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

    A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['leter] “id.”, where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Schmidt, J., Herzog, D., Scharenborg, O., & Janse, E. (2016). Do hearing aids improve affect perception? Advances in Experimental Medicine and Biology, 894, 47-55. doi:10.1007/978-3-319-25474-6_6.

    Abstract

    Normal-hearing listeners use acoustic cues in speech to interpret a speaker's emotional state. This study investigates the effect of hearing aids on the perception of the emotion dimensions arousal (aroused/calm) and valence (positive/negative attitude) in older adults with hearing loss. More specifically, we investigate whether wearing a hearing aid improves the correlation between affect ratings and affect-related acoustic parameters. To that end, affect ratings by 23 hearing-aid users were compared for aided and unaided listening. Moreover, these ratings were compared to the ratings by an age-matched group of 22 participants with age-normal hearing. For arousal, hearing-aid users rated utterances as generally more aroused in the aided than in the unaided condition. Intensity differences were the strongest indicator of degree of arousal. Among the hearing-aid users, those with poorer hearing used additional prosodic cues (i.e., tempo and pitch) for their arousal ratings, compared to those with relatively good hearing. For valence, pitch was the only acoustic cue that was associated with valence. Neither listening condition nor hearing loss severity (differences among the hearing-aid users) influenced affect ratings or the use of affect-related acoustic parameters. Compared to the normal-hearing reference group, ratings of hearing-aid users in the aided condition did not generally differ on either emotion dimension. However, hearing-aid users were more sensitive to intensity differences in their arousal ratings than the normal-hearing participants. We conclude that the use of hearing aids is important for the rehabilitation of affect perception and particularly influences the interpretation of arousal.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2016). Perception of emotion in conversational speech by younger and older listeners. Frontiers in Psychology, 7: 781. doi:10.3389/fpsyg.2016.00781.

    Abstract

    This study investigated whether age and/or differences in hearing sensitivity influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech. To that end, this study specifically focused on the relationship between participants' ratings of short affective utterances and the utterances' acoustic parameters (pitch, intensity, and articulation rate) known to be associated with the emotion dimensions arousal and valence. Stimuli consisted of short utterances taken from a corpus of conversational speech. In two rating tasks, younger and older adults either rated arousal or valence using a 5-point scale. Mean intensity was found to be the main cue participants used in the arousal task (i.e., higher mean intensity cueing higher levels of arousal) while mean F0 was the main cue in the valence task (i.e., higher mean F0 being interpreted as more negative). Even though there were no overall age group differences in arousal or valence ratings, compared to younger adults, older adults responded less strongly to mean intensity differences cueing arousal and responded more strongly to differences in mean F0 cueing valence. Individual hearing sensitivity among the older adults did not modify the use of mean intensity as an arousal cue. However, individual hearing sensitivity generally affected valence ratings and modified the use of mean F0. We conclude that age differences in the interpretation of mean F0 as a cue for valence are likely due to age-related hearing loss, whereas age differences in rating arousal do not seem to be driven by hearing sensitivity differences between age groups (as measured by pure-tone audiometry).
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Does syntactic alignment effectively influence how speakers are perceived by their conversation partner? PLoS One, 11(4): e0153521. doi:10.1371/journal.pone.0153521.

    Abstract

    The way we talk can influence how we are perceived by others. Whereas previous studies have started to explore the influence of social goals on syntactic alignment, in the current study, we additionally investigated whether syntactic alignment effectively influences conversation partners’ perception of the speaker. To this end, we developed a novel paradigm in which we can measure the effect of social goals on the strength of syntactic alignment for one participant (primed participant), while simultaneously obtaining usable social opinions about them from their conversation partner (the evaluator). In Study 1, participants’ desire to be rated favorably by their partner was manipulated by assigning pairs to a Control (i.e., primed participants did not know they were being evaluated) or Evaluation context (i.e., primed participants knew they were being evaluated). Surprisingly, results showed no significant difference in the strength with which primed participants aligned their syntactic choices with their partners’ choices. In a follow-up study, we used a Directed Evaluation context (i.e., primed participants knew they were being evaluated and were explicitly instructed to make a positive impression). However, again, there was no evidence supporting the hypothesis that participants’ desire to impress their partner influences syntactic alignment. With respect to the influence of syntactic alignment on perceived likeability by the evaluator, a negative relationship was reported in Study 1: the more primed participants aligned their syntactic choices with their partner, the more that partner decreased their likeability rating after the experiment. However, this effect was not replicated in the Directed Evaluation context of Study 2. In other words, our results do not support the conclusion that speakers’ desire to be liked affects how much they align their syntactic choices with their partner, nor is there convincing evidence that there is a reliable relationship between syntactic alignment and perceived likeability.

    Additional information

    Data availability
  • Schoot, L., Hagoort, P., & Segaert, K. (2016). What can we learn from a two-brain approach to verbal interaction? Neuroscience and Biobehavioral Reviews, 68, 454-459. doi:10.1016/j.neubiorev.2016.06.009.

    Abstract

    Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil I. Linguistische Berichte, 141, 371-400.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil II. Linguistische Berichte, 142, 451-475.
  • Segaert, K., Wheeldon, L., & Hagoort, P. (2016). Unifying structural priming effects on syntactic choices and timing of sentence generation. Journal of Memory and Language, 91, 59-80. doi:10.1016/j.jml.2016.03.011.

    Abstract

    We investigated whether structural priming of production latencies is sensitive to the same factors known to influence persistence of structural choices: structure preference, cumulativity and verb repetition. In two experiments, we found structural persistence only for passives (inverse preference effect) while priming effects on latencies were stronger for the actives (positive preference effect). We found structural persistence for passives to be influenced by immediate primes and long-lasting cumulativity (all preceding primes) (Experiment 1), and to be boosted by verb repetition (Experiment 2). In latencies, we found that effects for actives were sensitive to long-lasting cumulativity (Experiment 1). In Experiment 2, we found priming in latencies for actives overall, while for passives the priming effects emerged as the cumulative exposure increased, but only when also aided by verb repetition. These findings are consistent with the Two-stage Competition model, an integrated model of structural priming effects for sentence choice and latency.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña. Faits de Langues, 21, 121-132.
  • Selten, M., Meyer, F., Ba, W., Valles, A., Maas, D., Negwer, M., Eijsink, V. D., van Vugt, R. W. M., van Hulten, J. A., van Bakel, N. H. M., Roosen, J., van der Linden, R., Schubert, D., Verheij, M. M. M., Kasri, N. N., & Martens, G. J. M. (2016). Increased GABAB receptor signaling in a rat model for schizophrenia. Scientific Reports, 6: 34240. doi:10.1038/srep34240.

    Abstract

    Schizophrenia is a complex disorder that affects cognitive function and has been linked, both in patients and animal models, to dysfunction of the GABAergic system. However, the pathophysiological consequences of this dysfunction are not well understood. Here, we examined the GABAergic system in an animal model displaying schizophrenia-relevant features, the apomorphine-susceptible (APO-SUS) rat and its phenotypic counterpart, the apomorphine-unsusceptible (APO-UNSUS) rat at postnatal day 20-22. We found changes in the expression of the GABA-synthesizing enzyme GAD67 specifically in the prelimbic, but not the infralimbic, region of the medial prefrontal cortex (mPFC), indicative of reduced inhibitory function in this region in APO-SUS rats. While we did not observe changes in basal synaptic transmission onto layer II/III pyramidal cells in the mPFC of APO-SUS compared to APO-UNSUS rats, we report reduced paired-pulse ratios at longer inter-stimulus intervals. The GABA(B) receptor antagonist CGP 55845 abolished this reduction, indicating that the decreased paired-pulse ratio was caused by increased GABA(B) signaling. Consistently, we find an increased expression of the GABA(B1) receptor subunit in APO-SUS rats. Our data provide physiological evidence for increased presynaptic GABA(B) signaling in the mPFC of APO-SUS rats, further supporting an important role for the GABAergic system in the pathophysiology of schizophrenia.
  • Senft, G. (1992). Bakavilisi Biga - or: What happens to English words in the Kilivila Language? Language and Linguistics in Melanesia, 23, 13-49.
  • Senft, G. (1997). [Review of the book The design of language: An introduction to descriptive linguistics by Terry Crowley, John Lynch, Jeff Siegel, and Julie Piau]. Linguistics, 35, 781-785.
  • Senft, G. (1992). [Review of the book The Yimas language of New Guinea by William A. Foley]. Linguistics, 30, 634-639.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Senft, G. (1997). Magical conversation on the Trobriand Islands. Anthropos, 92, 369-391.
  • Senft, G. (1992). Everything we always thought we knew about space - but did not bother to question. Working Papers of the Cognitive Anthropology Research group at the MPI for Psycholinguistics, 10.
  • Senft, G. (1992). What happened to "the fearless tailor" in Kilivila: A European fairy tale - from the South Seas. Anthropos, 87, 407-421.
  • Seuren, P. A. M. (1997). [Review of the book Schets van de Nederlandse Taal. Grammatica, poëtica en retorica by Adriaen Verwer, Naar de editie van E. van Driel (1783) vertaald door J. Knol. Ed. Th.A.J.M. Janssen & J. Noordegraaf]. Nederlandse Taalkunde, 4, 370-374.
  • Seuren, P. A. M. (2016). Saussure and his intellectual environment. History of European Ideas, 42(6), 819-847. doi:10.1080/01916599.2016.1154398.

    Abstract

    The present study paints the intellectual environment in which Ferdinand de Saussure developed his ideas about language and linguistics during the fin de siècle. It sketches his dissatisfaction with that environment to the extent that it touched on linguistics, and shows the new course he was trying to steer on the basis of ideas that seemed to open new and exciting perspectives, even though they were still vaguely defined. As Saussure himself was extremely reticent about his sources and intellectual pedigree, his stance in the lively European cultural context in which he lived can only be established through textual critique and conjecture. On this basis, it is concluded that Saussure, though relatively uninformed about its historical roots, essentially aimed at integrating the rationalist tradition current in the sciences in his day into a new, ‘scientific’ general theory of language. In this, he was heavily indebted to a few predecessors, such as the French philosopher-psychologist Victor Egger, and particularly to the French psychologist, historian and philosopher Hippolyte Taine, who was a major cultural influence in nineteenth-century France, though now largely forgotten. The present study thus supports Hans Aarsleff's analysis, where, for the first time, Taine's influence is emphasised, and rejects John Joseph's contention that Taine had no influence and that, instead, Saussure was influenced mainly by the romanticist Adolphe Pictet. Saussure abhorred Pictet's method of etymologising, which predated the Young Grammarian school, central to Saussure's linguistic education. The issue has implications for the positioning of Saussure in the history of linguistics. Is he part of the non-analytical, romanticist and experience-based European strand of thought that is found in art and postmodernist philosophy and is sometimes called structuralism, or is he a representative of the short-lived European branch of specifically linguistic structuralism, which was rationalist in outlook, more science-oriented and more formalist, but lost out to American structuralism? The latter seems to be the case, though phenomenology, postmodernism and art have lately claimed Saussure as an icon.
  • Shao, Z., & Stiegert, J. (2016). Predictors of photo naming: Dutch norms for 327 photos. Behavior Research Methods, 48(2), 577-584. doi:10.3758/s13428-015-0613-0.

    Abstract

    The present study reports naming latencies and norms for 327 photos of objects in Dutch. We provide norms for eight psycholinguistic variables: age of acquisition, familiarity, imageability, image agreement, objective and subjective visual complexity, word frequency, word length in syllables and in letters, and name agreement. Furthermore, multiple regression analyses reveal that significant predictors of photo naming latencies are name agreement, word frequency, imageability, and image agreement. Naming latencies, norms and stimuli are provided as Supplemental Materials.
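    For readers who want to see what such a predictor analysis looks like in practice, the snippet below is a hypothetical sketch of an item-level multiple regression of naming latencies on the normed variables reported as significant; the file name and column names are assumptions, not the published materials.

    ```python
    # Minimal sketch of an item-level multiple regression of naming latencies on
    # norm variables; "photo_norms.csv" and the column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    norms = pd.read_csv("photo_norms.csv")   # one row per photo

    model = smf.ols(
        "naming_latency ~ name_agreement + word_frequency + imageability + image_agreement",
        data=norms,
    ).fit()
    print(model.summary())   # each coefficient estimates a predictor's unique contribution
    ```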
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2016). Using Brain Potentials to Functionally Localise Stroop-Like Effects in Colour and Picture Naming: Perceptual Encoding versus Word Planning. PLoS One, 11(9): e0161052. doi:10.1371/journal.pone.0161052.

    Abstract

    The colour-word Stroop task and the picture-word interference task (PWI) have been used extensively to study the functional processes underlying spoken word production. One of the consistent behavioural effects in both tasks is the Stroop-like effect: The reaction time (RT) is longer on incongruent trials than on congruent trials. The effect in the Stroop task is usually linked to word planning, whereas the effect in the PWI task is associated with either word planning or perceptual encoding. To adjudicate between the word planning and perceptual encoding accounts of the effect in PWI, we conducted an EEG experiment consisting of three tasks: a standard colour-word Stroop task (three colours), a standard PWI task (39 pictures), and a Stroop-like version of the PWI task (three pictures). Participants overtly named the colours and pictures while their EEG was recorded. A Stroop-like effect in RTs was observed in all three tasks. ERPs at centro-parietal sensors started to deflect negatively for incongruent relative to congruent stimuli around 350 ms after stimulus onset for the Stroop, Stroop-like PWI, and the Standard PWI tasks: an N400 effect. No early differences were found in the PWI tasks. The onset of the Stroop-like effect at about 350 ms in all three tasks links the effect to word planning rather than perceptual encoding, which has been estimated in the literature to be finished around 200–250 ms after stimulus onset. We conclude that the Stroop-like effect arises during word planning in both Stroop and PWI.
  • Shopen, T., Reid, N., Shopen, G., & Wilkins, D. G. (1997). Ensuring the survival of Aboriginal and Torres Strait islander languages into the 21st century. Australian Review of Applied Linguistics, 10(1), 143-157.

    Abstract

    Aboriginal languages are threatened by their speakers' poor economic and social conditions; some may survive through support for community development, language maintenance, bilingual education, and the training of Aboriginal teachers and linguists and of non-Aboriginal teachers of Aboriginal and Islander students.
  • Sikora, K., Roelofs, A., & Hermans, D. (2016). Electrophysiology of executive control in spoken noun-phrase production: Dynamics of updating, inhibiting, and shifting. Neuropsychologia, 84, 44-53. doi:10.1016/j.neuropsychologia.2016.01.037.

    Abstract

    Previous studies have provided evidence that updating, inhibiting, and shifting abilities underlying executive control determine response time (RT) in language production. However, little is known about their electrophysiological basis and dynamics. In the present electroencephalography study, we assessed noun-phrase production using picture description and a picture-word interference paradigm. We measured picture description RTs to assess length, distractor, and switch effects, which have been related to the updating, inhibiting, and shifting abilities. In addition, we measured event-related brain potentials (ERPs). Previous research has suggested that inhibiting and shifting are associated with anterior and posterior N200 subcomponents, respectively, and updating with the P300. We obtained length, distractor, and switch effects in the RTs, and an interaction between length and switch. There was a widely distributed switch effect in the N200, an interaction of length and midline site in the N200, and a length effect in the P300, whereas distractor did not yield any ERP modulation. Moreover, length and switch interacted in the posterior N200. We argue that these results provide electrophysiological evidence that inhibiting and shifting of task set occur before updating in phrase planning.
  • Sikora, K., Roelofs, A., Hermans, D., & Knoors, H. (2016). Executive control in spoken noun-phrase production: Contributions of updating, inhibiting, and shifting. Quarterly Journal of Experimental Psychology, 69(9), 1719-1740. doi:10.1080/17470218.2015.1093007.

    Abstract

    The present study examined how the updating, inhibiting, and shifting abilities underlying executive control influence spoken noun-phrase production. Previous studies provided evidence that updating and inhibiting, but not shifting, influence picture-naming response time (RT). However, little is known about the role of executive control in more complex forms of language production like generating phrases. We assessed noun-phrase production using picture description and a picture–word interference procedure. We measured picture description RT to assess length, distractor, and switch effects, which were assumed to reflect, respectively, the updating, inhibiting, and shifting abilities of adult participants. Moreover, for each participant we obtained scores on executive control tasks that measured verbal and nonverbal updating, nonverbal inhibiting, and nonverbal shifting. We found that both verbal and nonverbal updating scores correlated with the overall mean picture description RTs. Furthermore, the length effect in the RTs correlated with verbal but not nonverbal updating scores, while the distractor effect correlated with inhibiting scores. We did not find a correlation between the switch effect in the mean RTs and the shifting scores. However, the shifting scores correlated with the switch effect in the normal part of the underlying RT distribution. These results suggest that updating, inhibiting, and shifting each influence the speed of phrase production, thereby demonstrating a contribution of all three executive control abilities to language production.
  • Silva, S., Reis, A., Casaca, L., Petersson, K. M., & Faísca, L. (2016). When the eyes no longer lead: Familiarity and length effects on eye-voice span. Frontiers in Psychology, 7: 1720. doi:10.3389/fpsyg.2016.01720.

    Abstract

    During oral reading, the eyes tend to be ahead of the voice (eye-voice span, EVS). It has been hypothesized that the extent to which this happens depends on the automaticity of reading processes, namely on the speed of print-to-sound conversion. We tested whether EVS is affected by another automaticity component – immunity from interference. To that end, we manipulated word familiarity (high-frequency, low-frequency, and pseudowords, PW) and word length as proxies of immunity from interference, and we used linear mixed effects models to measure the effects of both variables on the time interval at which readers do parallel processing by gazing at word N + 1 while not having articulated word N yet (offset EVS). Parallel processing was enhanced by automaticity, as shown by familiarity × length interactions on offset EVS, and it was impeded by lack of automaticity, as shown by the transformation of offset EVS into voice-eye span (voice ahead of the offset of the eyes) in PWs. The relation between parallel processing and automaticity was strengthened by the fact that offset EVS predicted reading velocity. Our findings contribute to understanding how the offset EVS, an index that is obtained in oral reading, may tap into different components of automaticity that underlie reading ability, oral or silent. In addition, we compared the duration of the offset EVS with the average reference duration of stages in word production, and we saw that the offset EVS may accommodate for more than the articulatory programming stage of word N.
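    The mixed-effects analysis described above can be sketched roughly as follows; this is a schematic illustration under assumed column names (offset_evs, familiarity, length, participant) and uses statsmodels rather than the authors' own analysis code.

    ```python
    # Schematic mixed-effects sketch; "evs_trials.csv" and its columns are assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    evs = pd.read_csv("evs_trials.csv")   # hypothetical long-format data, one row per word

    # Familiarity (high-frequency / low-frequency / pseudoword) crossed with word length;
    # the interaction term corresponds to the familiarity x length effects on offset EVS.
    # Random intercepts are grouped by participant.
    model = smf.mixedlm("offset_evs ~ familiarity * length",
                        data=evs,
                        groups=evs["participant"]).fit()
    print(model.summary())
    ```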
  • Silva, S., Faísca, L., Araújo, S., Casaca, L., Carvalho, L., Petersson, K. M., & Reis, A. (2016). Too little or too much? Parafoveal preview benefits and parafoveal load costs in dyslexic adults. Annals of Dyslexia, 66(2), 187-201. doi:10.1007/s11881-015-0113-z.

    Abstract

    Two different forms of parafoveal dysfunction have been hypothesized as core deficits of dyslexic individuals: reduced parafoveal preview benefits (“too little parafovea”) and increased costs of parafoveal load (“too much parafovea”). We tested both hypotheses in a single eye-tracking experiment using a modified serial rapid automatized naming (RAN) task. Comparisons between dyslexic and non-dyslexic adults showed reduced parafoveal preview benefits in dyslexics, without increased costs of parafoveal load. Reduced parafoveal preview benefits were observed in a naming task, but not in a silent letter-finding task, indicating that the parafoveal dysfunction may be consequent to the overload with extracting phonological information from orthographic input. Our results suggest that dyslexics’ parafoveal dysfunction is not based on strict visuo-attentional factors, but they nevertheless stress the importance of extra-phonological processing. Furthermore, evidence of reduced parafoveal preview benefits in dyslexia may help understand why serial RAN is an important reading predictor in adulthood.
  • Skiba, R., & Dittmar, N. (1992). Pragmatic, semantic and syntactic constraints and grammaticalization: A longitudinal perspective. Studies in Second Language Acquisition, 14, 323-349. doi:10.1017/S0272263100011141.
  • Smeets, C. J. L. M., & Verbeek, D. (2016). Climbing fibers in spinocerebellar ataxia: A mechanism for the loss of motor control. Neurobiology of Disease, 88, 96-106. doi:10.1016/j.nbd.2016.01.009.

    Abstract

    The spinocerebellar ataxias (SCAs) form an ever-growing group of neurodegenerative disorders causing dysfunction of the cerebellum and loss of motor control in patients. Currently, 41 different genetic causes have been identified, with each mutation affecting a different gene. Interestingly, these diverse genetic causes all disrupt cerebellar function and produce similar symptoms in patients. In order to understand the disease better, and define possible therapeutic targets for multiple SCAs, the field has been searching for common ground among the SCAs. In this review, we discuss the physiology of climbing fibers and the possibility that climbing fiber dysfunction is a point of convergence for at least a subset of SCAs.
  • Smeets, C. J. L. M., Zmorzynska, J., Melo, M. N., Stargardt, A., Dooley, C., Bakalkin, G., McLaughlin, J., Sinke, R. J., Marrink, S.-J., Reits, E., & Verbeek, D. S. (2016). Altered secondary structure of Dynorphin A associates with loss of opioid signalling and NMDA-mediated excitotoxicity in SCA23. Human Molecular Genetics, 25(13), 2728-2737. doi:10.1093/hmg/ddw130.

    Abstract

    Spinocerebellar ataxia type 23 (SCA23) is caused by missense mutations in prodynorphin, encoding the precursor protein for the opioid neuropeptides α-neoendorphin, Dynorphin (Dyn) A and Dyn B, leading to neurotoxic elevated mutant Dyn A levels. Dyn A acts on opioid receptors to reduce pain in the spinal cord, but its cerebellar function remains largely unknown. Increased concentration of or prolonged exposure to Dyn A is neurotoxic and these deleterious effects are very likely caused by an N-methyl-D-aspartate-mediated non-opioid mechanism as Dyn A peptides were shown to bind NMDA receptors and potentiate their glutamate-evoked currents. In the present study, we investigated the cellular mechanisms underlying SCA23-mutant Dyn A neurotoxicity. We show that SCA23 mutations in the Dyn A-coding region disrupted peptide secondary structure leading to a loss of the N-terminal α-helix associated with decreased κ-opioid receptor affinity. Additionally, the altered secondary structure led to increased peptide stability of R6W and R9C Dyn A, as these peptides showed marked degradation resistance, which coincided with decreased peptide solubility. Notably, L5S Dyn A displayed increased degradation and no aggregation. R6W and wt Dyn A peptides were most toxic to primary cerebellar neurons. For R6W Dyn A, this is likely because of a switch from opioid to NMDA-receptor signalling, while for wt Dyn A, this switch was not observed. We propose that the pathology of SCA23 results from converging mechanisms of loss of opioid-mediated neuroprotection and NMDA-mediated excitotoxicity.
  • Smeets, C. J. L. M., & Verbeek, D. S. (2016). Reply: SCA23 and prodynorphin: is it time for gene retraction? Brain, 139(8): e43. doi:10.1093/brain/aww094.
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488,520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Sollis, E., Graham, S. A., Vino, A., Froehlich, H., Vreeburg, M., Dimitropoulou, D., Gilissen, C., Pfundt, R., Rappold, G., Brunner, H. G., Deriziotis, P., & Fisher, S. E. (2016). Identification and functional characterization of de novo FOXP1 variants provides novel insights into the etiology of neurodevelopmental disorder. Human Molecular Genetics, 25(3), 546-557. doi:10.1093/hmg/ddv495.

    Abstract

    De novo disruptions of the neural transcription factor FOXP1 are a recently discovered, rare cause of sporadic intellectual disability (ID). We report three new cases of FOXP1-related disorder identified through clinical whole-exome sequencing. Detailed phenotypic assessment confirmed that global developmental delay, autistic features, speech/language deficits, hypotonia and mild dysmorphic features are core features of the disorder. We expand the phenotypic spectrum to include sensory integration disorder and hypertelorism. Notably, the etiological variants in these cases include two missense variants within the DNA-binding domain of FOXP1. Only one such variant has been reported previously. The third patient carries a stop-gain variant. We performed functional characterization of the three missense variants alongside our stop-gain and two previously described truncating/frameshift variants. All variants severely disrupted multiple aspects of protein function. Strikingly, the missense variants had similarly severe effects on protein function as the truncating/frameshift variants. Our findings indicate that a loss of transcriptional repression activity of FOXP1 underlies the neurodevelopmental phenotype in FOXP1-related disorder. Interestingly, the three novel variants retained the ability to interact with wild-type FOXP1, suggesting these variants could exert a dominant-negative effect by interfering with the normal FOXP1 protein. These variants also retained the ability to interact with FOXP2, a paralogous transcription factor disrupted in rare cases of speech and language disorder. Thus, speech/language deficits in these individuals might be worsened through deleterious effects on FOXP2 function. Our findings highlight that de novo FOXP1 variants are a cause of sporadic ID and emphasize the importance of this transcription factor in neurodevelopment.

    Additional information

    ddv495supp.pdf
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Stagnitti, K., Bailey, A., Hudspeth Stevenson, E., Reynolds, E., & Kidd, E. (2016). An investigation into the effect of play-based instruction on the development of play skills and oral language. Journal of Early Childhood Research, 14(4), 389-406. doi:10.1177/1476718X15579741.

    Abstract

    The current study investigated the influence of a play-based curriculum on the development of pretend play skills and oral language in children attending their first year of formal schooling. In this quasi-experimental design, two groups of children were followed longitudinally across the first 6 months of their first year at school. The children in the experimental group were attending a school with a play-based curriculum; the children in the control group were attending schools following a traditional curriculum. A total of 54 children (Time 1 mean age = 5;6, range: 4;10–6;2 years) completed standardised measures of pretend play and narrative language skills upon school entry and again 6 months later. The results showed that the children in the play-based group significantly improved on all measures, whereas the children in the traditional group did not. A subset of the sample of children (N = 28, Time 1 mean age = 5;7, range: 5;2–6;1) also completed additional measures of vocabulary and grammar knowledge, and a test of non-verbal IQ. The results suggested that, in addition to improving play skills and narrative language ability, the play-based curriculum also had a positive influence on the acquisition of grammar.
  • Stivers, T., Mangione-Smith, R., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. Journal of Family Practice, 52(2), 140-147.
  • Stock, N. M., Humphries, K., St Pourcain, B., Bailey, M., Persson, M., Ho, K. M., Ring, S., Marsh, C., Albery, L., Rumsey, N., & Sandy, J. (2016). Opportunities and Challenges in Establishing a Cohort Study: An Example From Cleft Lip/Palate Research in the United Kingdom. Cleft Palate-Craniofacial Journal, 53(3), 317-325. doi:10.1597/14-306.

    Abstract

    Background: Cleft lip and/or palate (CL/P) is one of the most common birth conditions in the world, but little is known about its causes. Professional opinion remains divided as to which treatments may be the most beneficial for patients with CL/P, and the factors that contribute to psychological adjustment are poorly understood. The use of different methodological approaches and tools plays a key role in hampering efforts to address discrepancies within the evidence base. A new UK-wide program of research, The Cleft Collective, was established to combat many of these methodological challenges and to address some of the key research questions important to all CL/P stakeholders. Objective: To describe the establishment of CL/P cohort studies in the United Kingdom and to consider the many opportunities this resource will generate. Results: To date, protocols have been developed and implemented within most UK cleft teams. Biological samples, environmental information, and data pertaining to parental psychological well-being and child development are being collected successfully. Recruitment is currently on track to meet the ambitious target of approximately 9800 individuals from just more than 3000 families. Conclusions: The Cleft Collective cohort studies represent a significant step forward for research in the field of CL/P. The data collected will form a comprehensive resource of information about individuals with CL/P and their families. This resource will provide the basis for many future projects and collaborations, both in the United Kingdom and around the world.
  • Suomi, K., McQueen, J. M., & Cutler, A. (1997). Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language, 36, 422-444. doi:10.1006/jmla.1996.2495.

    Abstract

    Finnish vowel harmony rules require that if the vowel in the first syllable of a word belongs to one of two vowel sets, then all subsequent vowels in that word must belong either to the same set or to a neutral set. A harmony mismatch between two syllables containing vowels from the opposing sets thus signals a likely word boundary. We report five experiments showing that Finnish listeners can exploit this information in an on-line speech segmentation task. Listeners found it easier to detect words like hymy at the end of the nonsense string puhymy (where there is a harmony mismatch between the first two syllables) than in the string pyhymy (where there is no mismatch). There was no such effect, however, when the target words appeared at the beginning of the nonsense string (e.g., hymypu vs. hymypy). Stronger harmony effects were found for targets containing front harmony vowels (e.g., hymy) than for targets containing back harmony vowels (e.g., palo in kypalo and kupalo). The same pattern of results appeared whether target position within the string was predictable or unpredictable. Harmony mismatch thus appears to provide a useful segmentation cue for the detection of word onsets in Finnish speech.
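    The harmony rule summarized above lends itself to a small worked example. The toy code below (not from the paper) assigns each syllable to the front or back harmony class, treats i and e as neutral, and flags a likely word boundary wherever adjacent syllables draw on opposing classes; the syllabification is supplied by hand purely for illustration.

    ```python
    # Toy implementation of the harmony-mismatch cue: Finnish front harmony vowels
    # (ä, ö, y) and back harmony vowels (a, o, u) do not mix within a word, while
    # i and e are neutral, so a front/back clash across adjacent syllables signals
    # a likely word boundary.
    FRONT = set("äöy")
    BACK = set("aou")

    def harmony_class(syllable: str) -> str | None:
        """Return 'front', 'back', or None (only neutral vowels) for a syllable."""
        vowels = {ch for ch in syllable if ch in FRONT | BACK}
        if vowels & FRONT:
            return "front"
        if vowels & BACK:
            return "back"
        return None

    def mismatch_boundaries(syllables: list[str]) -> list[int]:
        """Indices i where a harmony clash between syllables i-1 and i suggests a boundary."""
        boundaries = []
        previous = None
        for i, syllable in enumerate(syllables):
            current = harmony_class(syllable)
            if previous and current and previous != current:
                boundaries.append(i)
            if current:                      # neutral syllables carry the class forward
                previous = current
        return boundaries

    # Hand-syllabified strings from the abstract:
    print(mismatch_boundaries(["pu", "hy", "my"]))  # [1]: boundary cue before "hymy"
    print(mismatch_boundaries(["py", "hy", "my"]))  # []: no harmony cue
    ```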
  • Swaab, T., Brown, C. M., & Hagoort, P. (2003). Understanding words in sentence contexts: The time course of ambiguity resolution. Brain and Language, 86(2), 326-343. doi:10.1016/S0093-934X(02)00547-3.

    Abstract

    Spoken language comprehension requires rapid integration of information from multiple linguistic sources. In the present study we addressed the temporal aspects of this integration process by focusing on the time course of the selection of the appropriate meaning of lexical ambiguities (“bank”) in sentence contexts. Successful selection of the contextually appropriate meaning of the ambiguous word is dependent upon the rapid binding of the contextual information in the sentence to the appropriate meaning of the ambiguity. We used the N400 to identify the time course of this binding process. The N400 was measured to target words that followed three types of context sentences. In the concordant context, the sentence biased the meaning of the sentence-final ambiguous word so that it was related to the target. In the discordant context, the sentence context biased the meaning so that it was not related to the target. In the unrelated control condition, the sentences ended in an unambiguous noun that was unrelated to the target. Half of the concordant sentences biased the dominant meaning, and the other half biased the subordinate meaning of the sentence-final ambiguous words. The ISI between onset of the target word and offset of the sentence-final word of the context sentence was 100 ms in one version of the experiment, and 1250 ms in the second version. We found that (i) the lexically dominant meaning is always partly activated, independent of context, (ii) initially both dominant and subordinate meaning are (partly) activated, which suggests that contextual and lexical factors both contribute to sentence interpretation without context completely overriding lexical information, and (iii) strong lexical influences remain present for a relatively long period of time.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1997). Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit. Journal of Cognitive Neuroscience, 9(1), 39-66.

    Abstract

    In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line integration of lexical information. Subjects listened to sentences spoken at a normal rate. In half of these sentences, the meaning of the final word of the sentence matched the semantic specifications of the preceding sentence context. In the other half of the sentences, the sentence-final word was anomalous with respect to the preceding sentence context. The N400 was measured to the sentence-final words in both conditions. The results for the aphasic patients (n = 14) were analyzed according to the severity of their comprehension deficit and compared to a group of 12 neurologically unimpaired age-matched controls, as well as a group of 6 nonaphasic patients with a lesion in the right hemisphere. The nonaphasic brain damaged patients and the aphasic patients with a light comprehension deficit (high comprehenders, n = 7) showed an N400 effect that was comparable to that of the neurologically unimpaired subjects. In the aphasic patients with a moderate to severe comprehension deficit (low comprehenders, n = 7), a reduction and delay of the N400 effect was obtained. In addition, the P300 component was measured in a classical oddball paradigm, in which subjects were asked to count infrequent low tones in a random series of high and low tones. No correlation was found between the occurrence of N400 and P300 effects, indicating that changes in the N400 results were related to the patients' language deficit. Overall, the pattern of results was compatible with the idea that aphasic patients with moderate to severe comprehension problems are impaired in the integration of lexical information into a higher order representation of the preceding sentence context.
  • Swingley, D. (2003). Phonetic detail in the developing lexicon. Language and Speech, 46(3), 265-294.

    Abstract

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct pronunciations and mispronunciations of familiar words naming one of the pictures. Mispronunciations were word-initial in Experiment 1 and word-medial in Experiment 2, and in both experiments involved substituting one segment with [d] (a common sound in Dutch) or [g] (a rare sound). In both experiments, word recognition performance was better for correct pronunciations than for mispronunciations involving either substituted consonant. These effects did not depend upon children's knowledge of lexical or nonlexical phonological neighbors of the tested words. The results indicate the encoding of phonetic detail in words at 19 months.
  • Takashima, A., Hulzink, I., Wagensveld, B., & Verhoeven, L. (2016). Emergence of representations through repeated training on pronouncing novel letter combinations leads to efficient reading. Neuropsychologia, 89, 14-30. doi:10.1016/j.neuropsychologia.2016.05.014.

    Abstract

    Printed text can be decoded by utilizing different processing routes depending on the familiarity of the script. A predominant use of word-level decoding strategies can be expected in the case of a familiar script, and an almost exclusive use of letter-level decoding strategies for unfamiliar scripts. Behavioural studies have revealed that frequently occurring words are read more efficiently, suggesting that these words are read in a more holistic way, at the word level, than infrequent and unfamiliar words. To test whether repeated exposure to specific letter combinations leads to holistic reading, we monitored both behavioural and neural responses during novel script decoding and examined changes related to repeated exposure. We trained a group of Dutch university students to decode pseudowords written in an unfamiliar script, i.e., Korean Hangul characters. We compared behavioural and neural responses to pronouncing trained versus untrained two-character pseudowords (equivalent to two-syllable pseudowords). We tested once shortly after the initial training and again after a four-day delay that included another training session. We found that trained pseudowords were pronounced faster and more accurately than novel combinations of radicals (equivalent to letters). Imaging data revealed that pronunciation of trained pseudowords engaged the posterior temporo-parietal region, and engagement of this network was predictive of reading efficiency a month later. The results imply that repeated exposure to specific combinations of graphemes can lead to the emergence of holistic representations that result in efficient reading. Furthermore, inter-individual differences revealed that good learners retained efficiency more than bad learners one month later.

    Additional information

    mmc1.docx
  • Takashima, A., Van de Ven, F., Kroes, M. C. W., & Fernández, G. (2016). Retrieved emotional context influences hippocampal involvement during recognition of neutral memories. NeuroImage, 143, 280-292. doi:10.1016/j.neuroimage.2016.08.069.

    Abstract

    It is well documented that emotionally arousing experiences are better remembered than mundane events. This is thought to occur through hippocampus-amygdala crosstalk during encoding, consolidation, and retrieval. Here we investigated whether emotional events (context) also cause a memory benefit for simultaneously encoded non-arousing contents and whether this effect persists after a delay via recruitment of a similar hippocampus-amygdala network. Participants studied neutral pictures (content) encoded together with either an arousing or a neutral sound (that served as context) in two study sessions three days apart. Memory was tested in a functional magnetic resonance scanner directly after the second study session. Pictures recognised with high confidence were more often thought to have been associated with an arousing than with a neutral context, irrespective of the veridical source memory. If the retrieved context was arousing, an area in the hippocampus adjacent to the amygdala exhibited heightened activation and this area increased functional connectivity with the parahippocampal gyrus, an area known to process pictures of scenes. These findings suggest that memories can be shaped by the retrieval act. Memory structures may be recruited to a higher degree when an arousing context is retrieved, and this may give rise to confident judgments of recognition for neutral pictures even after a delay.
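    The behavioural finding that high-confidence recognition goes together with attributing pictures to an arousing context can be illustrated as a simple contingency analysis. A minimal sketch with hypothetical counts (not the paper's data):

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical counts of recognised old pictures, cross-classified by
        # recognition confidence (rows: high / low) and by the context the
        # participant attributed the picture to (columns: arousing / neutral).
        counts = np.array([[140, 95],     # high-confidence hits
                           [80, 105]])    # low-confidence hits

        chi2, p, dof, expected = chi2_contingency(counts)
        print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")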
  • Ten Oever, S., Hausfeld, L., Correia, J. M., Van Atteveldt, N., Formisano, E., & Sack, A. T. (2016). A 7T fMRI study investigating the influence of oscillatory phase on syllable representations. NeuroImage, 141, 1-9. doi:10.1016/j.neuroimage.2016.07.011.

    Abstract

    Stimulus categorization is influenced by oscillations in the brain. For example, we have shown that ongoing oscillatory phase biases identification of an ambiguous syllable that can either be perceived as /da/ or /ga/. This suggests that phase is a cue for the brain to determine syllable identity and this cue could be an element of the representation of these syllables. If so, brain activation patterns for /da/ should be more unique when the syllable is presented at the /da/-biasing (i.e. its "preferred") phase. To test this hypothesis we presented non-ambiguous /da/ and /ga/ syllables at either their preferred or non-preferred phase (using sensory entrainment) while measuring 7T fMRI. Using multivariate pattern analysis in auditory regions we show that syllable decoding performance is higher when syllables are presented at their preferred compared to their non-preferred phase. These results suggest that phase information increases the distinctiveness of /da/ and /ga/ brain activation patterns.
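    The multivariate pattern analysis described here amounts to training a classifier on voxel patterns and comparing cross-validated /da/ versus /ga/ decoding accuracy between the preferred-phase and non-preferred-phase conditions. A minimal sketch with simulated activation patterns (hypothetical ROI data, not the authors' pipeline):

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def decoding_accuracy(patterns, labels):
            """Mean 5-fold cross-validated accuracy of /da/ vs. /ga/ classification."""
            clf = SVC(kernel="linear")
            return cross_val_score(clf, patterns, labels, cv=5).mean()

        # Hypothetical single-trial activation patterns (trials x voxels) from an
        # auditory ROI, split by whether syllables were presented at their preferred
        # or non-preferred entrained phase (signal strength differs by construction).
        n_trials, n_voxels = 80, 200
        labels = np.repeat([0, 1], n_trials // 2)                      # 0 = /da/, 1 = /ga/
        preferred = rng.normal(size=(n_trials, n_voxels)) + 0.5 * labels[:, None]
        non_preferred = rng.normal(size=(n_trials, n_voxels)) + 0.2 * labels[:, None]

        print("preferred phase:    ", decoding_accuracy(preferred, labels))
        print("non-preferred phase:", decoding_accuracy(non_preferred, labels))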
  • Ten Oever, S., Romei, V., van Atteveldt, N., Soto-Faraco, S., Murray, M. M., & Matusz, P. J. (2016). The COGs (context, object, and goals) in multisensory processing. Experimental Brain Research, 234(5), 1307-1323. doi:10.1007/s00221-016-4590-z.

    Abstract

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
  • Ten Oever, S., De Graaf, T. A., Bonnemayer, C., Ronner, J., Sack, A. T., & Riecke, L. (2016). Stimulus presentation at specific neuronal oscillatory phases experimentally controlled with tACS: Implementation and applications. Frontiers in Cellular Neuroscience, 10: 240. doi:10.3389/fncel.2016.00240.

    Abstract

    In recent years, it has become increasingly clear that both the power and phase of oscillatory brain activity can influence the processing and perception of sensory stimuli. Transcranial alternating current stimulation (tACS) can phase-align and amplify endogenous brain oscillations and has often been used to control and thereby study oscillatory power. Causal investigation of oscillatory phase is more difficult, as it requires precise real-time temporal control over both oscillatory phase and sensory stimulation. Here, we present hardware and software solutions allowing temporally precise presentation of sensory stimuli during tACS at desired tACS phases, enabling causal investigations of oscillatory phase. We developed freely available and easy-to-use software, which can be coupled with standard commercially available hardware to allow flexible and multi-modal stimulus presentation (visual, auditory, magnetic stimuli, etc.) at pre-determined tACS phases, opening up a range of new research opportunities. We validate that stimulus presentation at the targeted tACS phase in our setup is accurate to the sub-millisecond level with high inter-trial consistency. Conventional methods investigating the role of oscillatory phase such as magneto-/electroencephalography can only provide correlational evidence. Using brain stimulation with the described methodology enables investigations of the causal role of oscillatory phase. This setup turns oscillatory phase into an independent variable, allowing innovative and systematic studies of its functional impact on perception and cognition.
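    The core timing problem this setup solves is presenting a stimulus at a chosen phase of the ongoing tACS waveform. As a back-of-the-envelope sketch of the underlying arithmetic only (not the authors' released software), onset times for a target phase can be derived from the tACS frequency and the time of a known zero-crossing:

        import numpy as np

        def onsets_at_phase(tacs_freq_hz, target_phase_rad, zero_crossing_time_s,
                            n_trials, min_interval_s=1.0):
            """Compute stimulus onset times (s) falling at a target tACS phase.

            Assumes the tACS current follows sin(2*pi*f*(t - t0)), so phase 0 is the
            rising zero-crossing at t0. Successive onsets are spaced by a whole
            number of cycles and at least min_interval_s apart.
            """
            period = 1.0 / tacs_freq_hz
            phase_offset = (target_phase_rad / (2 * np.pi)) * period
            cycles_per_trial = int(np.ceil(min_interval_s / period))
            k = np.arange(1, n_trials + 1) * cycles_per_trial
            return zero_crossing_time_s + k * period + phase_offset

        # Example: 10 Hz tACS, stimuli at 90 degrees (the peak of the sine), 5 trials.
        print(onsets_at_phase(10.0, np.pi / 2, zero_crossing_time_s=0.0, n_trials=5))

    In practice, as the abstract notes, achieving sub-millisecond accuracy depends on the hardware triggering path and its latencies rather than on this arithmetic alone.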
  • Terrill, A., & Dunn, M. (2003). Orthographic design in the Solomon Islands: The social, historical, and linguistic situation of Touo (Baniata). Written Language and Literacy, 6(2), 177-192. doi:10.1075/wll.6.2.03ter.

    Abstract

    This paper discusses the development of an orthography for the Touo language (Solomon Islands). Various orthographies have been proposed for this language in the past, and the paper discusses why they are perceived by the community to have failed. Current opinion about orthography development within the Touo-speaking community is divided along religious, political, and geographical grounds; and the development of a successful orthography must take into account a variety of opinions. The paper examines the social, historical, and linguistic obstacles that have hitherto prevented the development of an accepted Touo orthography, and presents a new proposal which has thus far gained acceptance with community leaders. The fundamental issue is that creating an orthography for a language takes place in a social, political, and historical context; and for an orthography to be acceptable for the speakers of a language, all these factors must be taken into account.
  • Terrill, A. (2003). Linguistic stratigraphy in the central Solomon Islands: Lexical evidence of early Papuan/Austronesian interaction. Journal of the Polynesian Society, 112(4), 369-401.

    Abstract

    The extent to which linguistic borrowing can be used to shed light on the existence and nature of early contact between Papuan and Oceanic speakers is examined. The question is addressed by taking one Papuan language, Lavukaleve, spoken in the Russell Islands, central Solomon Islands, and examining lexical borrowings between it and nearby Oceanic languages, and with reconstructed forms of Proto Oceanic. Evidence from ethnography, culture history and archaeology, when added to the linguistic evidence provided in this study, indicates long-standing cultural links with other (non-Russell) islands. The composite picture is one of a high degree of cultural contact with little linguistic mixing, i.e., few or no changes affecting the structure of the languages and actually very little borrowed vocabulary.
  • Thalmeier, D., Uhlmann, M., Kappen, H. J., & Memmesheimer, R.-M. (2016). Learning Universal Computations with Spikes. PLoS Computational Biology, 12(6): e1004895. doi:10.1371/journal.pcbi.1004895.

    Abstract

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the prior construction of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
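    The paper derives its own network classes and learning rules; purely as a toy illustration of the general idea of reading a computation out of a recurrent spiking network (and not the authors' method), the sketch below simulates a small leaky integrate-and-fire reservoir and fits a ridge-regression readout to a target signal. All parameters and signals are made up for the example.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy leaky integrate-and-fire (LIF) reservoir; parameters are illustrative only.
        N = 200                        # neurons
        dt = 1e-3                      # time step (s)
        T = 5.0                        # simulated duration (s)
        steps = int(T / dt)
        tau_m, tau_s = 20e-3, 50e-3    # membrane / synaptic-trace time constants
        v_thresh, v_reset = 1.0, 0.0

        W = rng.normal(0.0, 0.8 / np.sqrt(N), size=(N, N))   # recurrent weights
        w_in = rng.normal(0.0, 1.0, size=N)                  # input weights

        t = np.arange(steps) * dt
        input_signal = np.sin(2 * np.pi * 1.0 * t)           # 1 Hz drive
        target = np.sin(2 * np.pi * 2.0 * t)                 # frequency-doubled target

        v = np.zeros(N)                # membrane potentials
        r = np.zeros(N)                # exponentially filtered spike trains
        R = np.zeros((steps, N))       # stored traces, used as the readout basis

        for i in range(steps):
            spiked = v >= v_thresh
            v[spiked] = v_reset
            r += -dt * r / tau_s       # decay the synaptic traces
            r[spiked] += 1.0           # jump on spikes
            v += dt * (-v + W @ r + w_in * input_signal[i] + 0.6) / tau_m
            R[i] = r

        # Fit a linear readout with ridge regression: w_out = (R'R + lam*I)^-1 R'y.
        lam = 1e-2
        w_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ target)
        rms = np.sqrt(np.mean((R @ w_out - target) ** 2))
        print(f"readout training error (RMS): {rms:.3f}")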
  • Tilot, A. K., Bebek, G., Niazi, F., Altemus, J., Romigh, T., Frazier, T., & Eng, C. (2016). Neural transcriptome of constitutional Pten dysfunction in mice and its relevance to human idiopathic autism spectrum disorder. Molecular Psychiatry, 21, 118-125. doi:10.1038/mp.2015.17.

    Abstract

    Autism spectrum disorder (ASD) is a neurodevelopmental condition with a clear, but heterogeneous, genetic component. Germline mutations in the tumor suppressor Pten are a well-established risk factor for ASD with macrocephaly, and conditional Pten mouse models have impaired social behavior and brain development. Some mutations observed in patients disrupt the normally balanced nuclear-cytoplasmic localization of the Pten protein, and we developed the Ptenm3m4 model to study the effects of a cytoplasm-predominant Pten. In this model, germline mislocalization of Pten causes inappropriate social behavior with intact learning and memory, a profile reminiscent of high-functioning ASD. These animals also exhibit histological evidence of neuroinflammation and expansion of glial populations by 6 weeks of age. We hypothesized that the neural transcriptome of this model would be altered in a manner that could inform human idiopathic ASD, a constitutional condition. Using total RNA sequencing, we found progressive disruption of neural gene expression in Ptenm3m4 mice from 2–6 weeks of age, involving both immune and synaptic pathways. These alterations include downregulation of many highly coexpressed human ASD-susceptibility genes. Comparison with a human cortical development coexpression network revealed that genes disrupted in Ptenm3m4 mice were enriched in the same areas as those of human ASD. Although Pten-related ASD is relatively uncommon, our observations suggest that the Ptenm3m4 model recapitulates multiple molecular features of human ASD, and that Pten operates far upstream of common pathways within ASD pathogenesis.
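    The reported enrichment of human ASD-susceptibility genes among the genes disrupted in the mouse model is the kind of overlap commonly assessed with a hypergeometric test. A minimal sketch with hypothetical counts (not the paper's numbers):

        from scipy.stats import hypergeom

        # Hypothetical counts: a background of 15,000 expressed genes, 800 human
        # ASD-susceptibility genes, 1,200 genes downregulated in the mouse model,
        # and 120 genes in the overlap.
        background, asd_genes, deg_genes, overlap = 15000, 800, 1200, 120

        # P(X >= overlap) under random sampling of deg_genes from the background.
        p_enrichment = hypergeom.sf(overlap - 1, background, asd_genes, deg_genes)
        print(f"hypergeometric enrichment p-value: {p_enrichment:.3g}")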
