Publications

  • Rubio-Fernández, P. (2013). Associative and inferential processes in pragmatic enrichment: The case of emergent properties. Language and Cognitive Processes, 28(6), 723-745. doi:10.1080/01690965.2012.659264.

    Abstract

    Experimental research on word processing has generally focused on properties that are associated with a concept in long-term memory (e.g., basketball—round). The present study addresses a related issue: the accessibility of “emergent properties” or conceptual properties that have to be inferred in a given context (e.g., basketball—floats). This investigation sheds light on a current debate in cognitive pragmatics about how many pragmatic systems there are (Carston, 2002a, 2007; Recanati, 2004, 2007). Two experiments using a self-paced reading task suggest that inferential processes are fully integrated in the processing system. Emergent properties are accessed early on in processing, without delaying later discourse integration processes. I conclude that the theoretical distinction between explicit and implicit meaning is not paralleled by that between associative and inferential processes.
  • Rubio-Fernández, P. (2013). Perspective tracking in progress: Do not disturb. Cognition, 129(2), 264-272. doi:10.1016/j.cognition.2013.07.005.

    Abstract

    Two experiments tested the hypothesis that indirect false-belief tests allow participants to track a protagonist’s perspective uninterruptedly, whereas direct false-belief tests disrupt the process of perspective tracking in various ways. For this purpose, adults’ performance was compared on indirect and direct false-belief tests by means of continuous eye-tracking. Experiment 1 confirmed that the false-belief question used in direct tests disrupts perspective tracking relative to what is observed in an indirect test. Experiment 2 confirmed that perspective tracking is a continuous process that can be easily disrupted in adults by a subtle visual manipulation in both indirect and direct tests. These results call for a closer analysis of the demands of the false-belief tasks that have been used in developmental research.
  • Rubio-Fernández, P., & Geurts, B. (2013). How to pass the false-belief task before your fourth birthday. Psychological Science, 24(1), 27-33. doi:10.1177/0956797612447819.

    Abstract

    The experimental record of the last three decades shows that children under 4 years old fail all sorts of variations on the standard false-belief task, whereas more recent studies have revealed that infants are able to pass nonverbal versions of the task. We argue that these paradoxical results are an artifact of the type of false-belief tasks that have been used to test infants and children: Nonverbal designs allow infants to keep track of a protagonist’s perspective over a course of events, whereas verbal designs tend to disrupt the perspective-tracking process in various ways, which makes it too hard for younger children to demonstrate their capacity for perspective tracking. We report three experiments that confirm this hypothesis by showing that 3-year-olds can pass a suitably streamlined version of the verbal false-belief task. We conclude that young children can pass the verbal false-belief task provided that they are allowed to keep track of the protagonist’s perspective without too much disruption.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • Sadakata, M., & McQueen, J. M. (2013). High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates. Journal of the Acoustical Society of America, 134(2), 1324-1335. doi:10.1121/1.4812767.

    Abstract

    This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.
  • Sakkalou, E., Ellis-Davies, K., Fowler, N., Hilbrink, E., & Gattis, M. (2013). Infants show stability of goal-directed imitation. Journal of Experimental Child Psychology, 114, 1-9. doi:10.1016/j.jecp.2012.09.005.

    Abstract

    Previous studies have reported that infants selectively reproduce observed actions and have argued that this selectivity reflects understanding of intentions and goals, or goal-directed imitation. We reasoned that if selective imitation of goal-directed actions reflects understanding of intentions, infants should demonstrate stability across perceptually and causally dissimilar imitation tasks. To this end, we employed a longitudinal within-participants design to compare the performance of 37 infants on two imitation tasks, with one administered at 13 months and one administered at 14 months. Infants who selectively imitated goal-directed actions in an object-cued task at 13 months also selectively imitated goal-directed actions in a vocal-cued task at 14 months. We conclude that goal-directed imitation reflects a general ability to interpret behavior in terms of mental states.
  • Salomo, D., & Liszkowski, U. (2013). Sociocultural settings influence the emergence of prelinguistic deictic gestures. Child development, 84(4), 1296-1307. doi:10.1111/cdev.12026.

    Abstract

    Daily activities of forty-eight 8- to 15-month-olds and their interlocutors were observed to test for the presence and frequency of triadic joint actions and deictic gestures across three different cultures: Yucatec-Mayans (Mexico), Dutch (Netherlands), and Shanghai-Chinese (China). The amount of joint action and deictic gestures to which infants were exposed differed systematically across settings, allowing testing for the role of social–interactional input in the ontogeny of prelinguistic gestures. Infants gestured more and at an earlier age depending on the amount of joint action and gestures infants were exposed to, revealing early prelinguistic sociocultural differences. The study shows that the emergence of basic prelinguistic gestures is socially mediated, suggesting that others' actions structure the ontogeny of human communication from early on.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sampaio, C., & Konopka, A. E. (2013). Memory for non-native language: The role of lexical processing in the retention of surface form. Memory, 21, 537-544. doi:10.1080/09658211.2012.746371.

    Abstract

    Research on memory for native language (L1) has consistently shown that retention of surface form is inferior to that of gist (e.g., Sachs, 1967). This paper investigates whether the same pattern is found in memory for non-native language (L2). We apply a model of bilingual word processing to more complex linguistic structures and predict that memory for L2 sentences ought to contain more surface information than L1 sentences. Native and non-native speakers of English were tested on a set of sentence pairs with different surface forms but the same meaning (e.g., “The bullet hit/struck the bull's eye”). Memory for these sentences was assessed with a cued recall procedure. Responses showed that native and non-native speakers did not differ in the accuracy of gist-based recall but that non-native speakers outperformed native speakers in the retention of surface form. The results suggest that L2 processing involves more intensive encoding of lexical level information than L1 processing.

  • San Roque, L., Kendrick, K. H., Norcliffe, E., & Majid, A. (2018). Universal meaning extensions of perception verbs are grounded in interaction. Cognitive Linguistics, 29, 371-406. doi:10.1515/cog-2017-0034.
  • Sauter, D. A., & Eisner, F. (2013). Commonalities outweigh differences in the communication of emotions across human cultures [Letter]. Proceedings of the National Academy of Sciences of the United States of America, 110, E180. doi:10.1073/pnas.1209522110.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Schaeffer, J., van Witteloostuijn, M., & Creemers, A. (2018). Article choice, theory of mind, and memory in children with high-functioning autism and children with specific language impairment. Applied Psycholinguistics, 39(1), 89-115. doi:10.1017/S0142716417000492.

    Abstract

    Previous studies show that young, typically developing (TD) children (age 5) make errors in the choice between a definite and an indefinite article. Suggested explanations for overgeneration of the definite article include failure to distinguish speaker from hearer assumptions, and for overgeneration of the indefinite article failure to draw scalar implicatures, and weak working memory. However, no direct empirical evidence for these accounts is available. In this study, 27 Dutch-speaking children with high-functioning autism, 27 children with SLI, and 27 TD children aged 5–14 were administered a pragmatic article choice test, a nonverbal theory of mind test, and three types of memory tests (phonological memory, verbal, and nonverbal working memory). The results show that the children with high-functioning autism and SLI (a) make similar errors, that is, they overgenerate the indefinite article; (b) are TD-like at theory of mind, but (c) perform significantly more poorly than the TD children on phonological memory and verbal working memory. We propose that weak memory skills prevent the integration of the definiteness scale with the preceding discourse, resulting in the failure to consistently draw the relevant scalar implicature. This in turn yields the occasional erroneous choice of the indefinite article a in definite contexts.
  • Schapper, A., & Hammarström, H. (2013). Innovative numerals in Malayo-Polynesian languages outside of Oceania. Oceanic Linguistics, 52, 423-455.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
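
    As an illustrative sketch of the second-stage idea described in this abstract, the snippet below ranks a toy fallback lexicon by phone-sequence distance to a detected out-of-vocabulary (OOV) region and keeps the N best candidates for the second recognition pass. This is not SpeM itself; the function names and pronunciations are assumptions made up for illustration.

    # Hypothetical sketch: N-best retrieval of city-name candidates by phone-edit
    # distance to an OOV phone string (illustrative only, not SpeM's graph search).
    def edit_distance(a, b):
        """Levenshtein distance between two phone sequences."""
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1,                          # deletion
                              d[i][j - 1] + 1,                          # insertion
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1])) # substitution
        return d[len(a)][len(b)]

    def nbest_candidates(oov_phones, fallback_lexicon, n=3000):
        """Return the n names whose pronunciations best match the OOV phone string."""
        ranked = sorted(fallback_lexicon.items(),
                        key=lambda item: edit_distance(oov_phones, item[1]))
        return [name for name, _ in ranked[:n]]

    # Made-up fallback lexicon entries (name -> phone sequence).
    fallback = {"nijmegen": ["n", "EI", "m", "e", "G", "@", "n"],
                "nibbixwoud": ["n", "I", "b", "@", "k", "s", "w", "A", "u", "t"]}
    print(nbest_candidates(["n", "EI", "m", "e", "x", "@"], fallback, n=1))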
  • Scharenborg, O., & Janse, E. (2013). Comparing lexically guided perceptual learning in younger and older listeners. Attention, Perception & Psychophysics, 75, 525-536. doi:10.3758/s13414-013-0422-4.

    Abstract

    Numerous studies have shown that younger adults engage in lexically guided perceptual learning in speech perception. Here, we investigated whether older listeners are also able to retune their phonetic category boundaries. More specifically, in this research we tried to answer two questions. First, do older adults show perceptual-learning effects of similar size to those of younger adults? Second, do differences in lexical behavior predict the strength of the perceptual-learning effect? An age group comparison revealed that older listeners do engage in lexically guided perceptual learning, but there were two age-related differences: Younger listeners had a stronger learning effect right after exposure than did older listeners, but the effect was more stable for older than for younger listeners. Moreover, a clear link was shown to exist between individuals’ lexical-decision performance during exposure and the magnitude of their perceptual-learning effects. A subsequent analysis on the results of the older participants revealed that, even within the older participant group, with increasing age the perceptual retuning effect became smaller but also more stable, mirroring the age group comparison results. These results could not be explained by differences in hearing loss. The age effect may be accounted for by decreased flexibility in the adjustment of phoneme categories or by age-related changes in the dynamics of spoken-word recognition, with older adults being more affected by competition from similar-sounding lexical competitors, resulting in less lexical guidance for perceptual retuning. In conclusion, our results clearly show that the speech perception system remains flexible over the life span.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature (AF) classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
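
    The snippet below illustrates the kind of classifier comparison reported in this abstract (support vector machines versus multilayer perceptrons) on a synthetic multi-class problem. It is a generic sketch using scikit-learn defaults, not the articulatory-feature setup or data of the paper.

    # Hypothetical comparison of an SVM and an MLP on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                               n_classes=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    svm = SVC(kernel="rbf").fit(X_train, y_train)                   # support vector machine
    mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000,
                        random_state=0).fit(X_train, y_train)       # multilayer perceptron

    print("SVM accuracy:", round(svm.score(X_test, y_test), 3))
    print("MLP accuracy:", round(mlp.score(X_test, y_test), 3))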
  • Schepens, J., Dijkstra, T., Grootjen, F., & Van Heuven, W. J. (2013). Cross-language distributions of high frequency and phonetically similar cognates. PLoS One, 8(5): e63006. doi:10.1371/journal.pone.0063006.

    Abstract

    The coinciding form and meaning similarity of cognates, e.g. ‘flamme’ (French), ‘Flamme’ (German), ‘vlam’ (Dutch), meaning ‘flame’ in English, facilitates learning of additional languages. The cross-language frequency and similarity distributions of cognates vary according to evolutionary change and language contact. We compare frequency and orthographic (O), phonetic (P), and semantic similarity of cognates, automatically identified in semi-complete lexicons of six widely spoken languages. Comparisons of P and O similarity reveal inconsistent mappings in language pairs with deep orthographies. The frequency distributions show that cognate frequency is reduced in less closely related language pairs as compared to more closely related languages (e.g., French-English vs. German-English). These frequency and similarity patterns may support a better understanding of cognate processing in natural and experimental settings. The automatically identified cognates are available in the supplementary materials, including the frequency and similarity measurements.
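
    As a toy illustration of one way to quantify orthographic similarity between candidate cognates, the sketch below uses a normalized string-matching ratio from Python's standard library. This is only an example of the general idea; it is not the similarity measure or the word list used in the study.

    # Hypothetical orthographic-similarity sketch for cognate pairs.
    from difflib import SequenceMatcher

    def orthographic_similarity(w1, w2):
        """Return a 0-1 similarity score between two word forms."""
        return SequenceMatcher(None, w1.lower(), w2.lower()).ratio()

    for a, b in [("flamme", "Flamme"), ("flamme", "vlam"), ("flamme", "table")]:
        print(f"{a:8s} {b:8s} {orthographic_similarity(a, b):.2f}")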
  • Schijven, D., Kofink, D., Tragante, V., Verkerke, M., Pulit, S. L., Kahn, R. S., Veldink, J. H., Vinkers, C. H., Boks, M. P., & Luykx, J. J. (2018). Comprehensive pathway analyses of schizophrenia risk loci point to dysfunctional postsynaptic signaling. Schizophrenia Research, 199, 195-202. doi:10.1016/j.schres.2018.03.032.

    Abstract

    Large-scale genome-wide association studies (GWAS) have implicated many low-penetrance loci in schizophrenia. However, its pathological mechanisms are poorly understood, which in turn hampers the development of novel pharmacological treatments. Pathway and gene set analyses carry the potential to generate hypotheses about disease mechanisms and have provided biological context to genome-wide data of schizophrenia. We aimed to examine which biological processes are likely candidates to underlie schizophrenia by integrating novel and powerful pathway analysis tools using data from the largest Psychiatric Genomics Consortium schizophrenia GWAS (N=79,845) and the most recent 2018 schizophrenia GWAS (N=105,318). By applying a primary unbiased analysis (Multi-marker Analysis of GenoMic Annotation; MAGMA) to weigh the role of biological processes from the Molecular Signatures Database (MSigDB), we identified enrichment of common variants in synaptic plasticity and neuron differentiation gene sets. We supported these findings using MAGMA, Meta-Analysis Gene-set Enrichment of variaNT Associations (MAGENTA) and Interval Enrichment Analysis (INRICH) on detailed synaptic signaling pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and found enrichment in mainly the dopaminergic and cholinergic synapses. Moreover, shared genes involved in these neurotransmitter systems had a large contribution to the observed enrichment, protein products of top genes in these pathways showed more direct and indirect interactions than expected by chance, and expression profiles of these genes were largely similar among brain tissues. In conclusion, we provide strong and consistent genetics- and protein-interaction-informed evidence for the role of postsynaptic signaling processes in schizophrenia, opening avenues for future translational and psychopharmacological studies.
  • Schilberg, L., Engelen, T., Ten Oever, S., Schuhmann, T., De Gelder, B., De Graaf, T. A., & Sack, A. T. (2018). Phase of beta-frequency tACS over primary motor cortex modulates corticospinal excitability. Cortex, 103, 142-152. doi:10.1016/j.cortex.2018.03.001.

    Abstract

    The assessment of corticospinal excitability by means of transcranial magnetic stimulation (TMS)-induced motor evoked potentials (MEPs) is an established diagnostic tool in neurophysiology and a widely used procedure in fundamental brain research. However, concern about low reliability of these measures has grown recently. One possible cause of high variability of MEPs under identical acquisition conditions could be the influence of oscillatory neuronal activity on corticospinal excitability. Based on research showing that transcranial alternating current stimulation (tACS) can entrain neuronal oscillations, we here test whether alpha- or beta-frequency tACS can influence corticospinal excitability in a phase-dependent manner. We applied tACS at individually calibrated alpha- and beta-band oscillation frequencies, or we applied sham tACS. Simultaneous single TMS pulses time locked to eight equidistant phases of the ongoing tACS signal evoked MEPs. To evaluate offline effects of stimulation frequency, MEP amplitudes were measured before and after tACS. To evaluate whether tACS influences MEP amplitude, we fitted one-cycle sinusoids to the average MEPs elicited at the different phase conditions of each tACS frequency. We found no frequency-specific offline effects of tACS. However, beta-frequency tACS modulation of MEPs was phase-dependent. Post hoc analyses suggested that this effect was specific to participants with low (<19 Hz) intrinsic beta frequency. In conclusion, by showing that beta tACS influences MEP amplitude in a phase-dependent manner, our results support a potential role attributed to neuronal oscillations in regulating corticospinal excitability. Moreover, our findings may be useful for the development of TMS protocols that improve the reliability of MEPs as a meaningful tool for research applications or for clinical monitoring and diagnosis.
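
    The sketch below illustrates the sinusoid-fitting idea mentioned in this abstract: a one-cycle sinusoid is fitted to mean MEP amplitudes obtained at eight equidistant tACS phases, so that the fitted amplitude indexes phase-dependent modulation. The numbers are made up and the code is a generic curve-fitting example, not the authors' analysis pipeline.

    # Hypothetical phase-dependence sketch: fit a one-cycle sinusoid to phase-binned MEPs.
    import numpy as np
    from scipy.optimize import curve_fit

    phases = np.arange(8) * 2 * np.pi / 8                 # eight equidistant tACS phases (rad)
    mep = np.array([1.05, 1.20, 1.32, 1.18, 0.98, 0.85, 0.80, 0.92])  # fake mean MEPs (mV)

    def one_cycle(phase, amplitude, phase_shift, offset):
        return amplitude * np.sin(phase + phase_shift) + offset

    params, _ = curve_fit(one_cycle, phases, mep, p0=[0.2, 0.0, mep.mean()])
    amplitude, phase_shift, offset = params
    print(f"modulation depth {amplitude:.3f} mV, preferred phase {phase_shift:.2f} rad")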
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.

    Abstract

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that preliminarily establishes the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool is still highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
  • Schoenmakers, G.-J., & Piepers, J. (2018). Echter kan het wel. Levende Talen Magazine, 105(4), 10-13.
  • De Schryver, J., Neijt, A., Ghesquière, P., & Ernestus, M. (2013). Zij surfde, maar hij durfte niet: De spellingproblematiek van de zwakke verleden tijd in Nederland en Vlaanderen. Dutch Journal of Applied Linguistics, 2(2), 133-151. doi:10.1075/dujal.2.2.01de.

    Abstract

    Although the spelling of the Dutch past tense of weak verbs is generally considered straightforward (the forms are, after all, spelled as they sound), even university students make a striking number of errors when choosing between the endings -te and -de. Some of these errors are ‘natural’ in the sense that they result from the workings of frequency and analogy. At the same time, we find that speakers from the Netherlands make far more errors than Flemish speakers, at least when the verb stem ends in a coronal fricative (s, z, f, v). Since the Dutch participants appear to have a better command of the ‘rule’ (the mnemonic ’t kofschip) than the Flemish participants, the explanation for the difference must be sought in a sound change that occurs in the Netherlands but not, or hardly, in Flanders: the devoicing of fricatives. The spelling problem calls for didactic and/or political measures: it could probably be solved to a large extent by slightly adjusting the spelling rules.
  • Schweinfurth, M. K., De Troy, S. E., Van Leeuwen, E. J. C., Call, J., & Haun, D. B. M. (2018). Spontaneous social tool use in Chimpanzees (Pan troglodytes). Journal of Comparative Psychology, 132(4), 455-463. doi:10.1037/com0000127.

    Abstract

    Although there is good evidence that social animals show elaborate cognitive skills to deal with others, there are few reports of animals physically using social agents and their respective responses as means to an end—social tool use. In this case study, we investigated spontaneous and repeated social tool use behavior in chimpanzees (Pan troglodytes). We presented a group of chimpanzees with an apparatus, in which pushing two buttons would release juice from a distantly located fountain. Consequently, any one individual could only either push the buttons or drink from the fountain but never push and drink simultaneously. In this scenario, an adult male attempted to retrieve three other individuals and push them toward the buttons that, if pressed, released juice from the fountain. With this strategy, the social tool user increased his juice intake 10-fold. Interestingly, the strategy was stable over time, which was possibly enabled by playing with the social tools. With over 100 instances, we provide the biggest data set on social tool use recorded among nonhuman animals so far. The repeated use of other individuals as social tools may represent a complex social skill linked to Machiavellian intelligence.
  • Seeliger, K., Fritsche, M., Güçlü, U., Schoenmakers, S., Schoffelen, J.-M., Bosch, S. E., & Van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage, 180, 253-266. doi:10.1016/j.neuroimage.2017.07.018.

    Abstract

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.
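
    The sketch below shows the general shape of a CNN-based encoding model of the kind this abstract refers to: activations from one network layer are mapped to the response at one source/time point with ridge regression and evaluated on left-out images. The data are random stand-ins; this is a generic illustration, not the authors' pipeline.

    # Hypothetical encoding-model sketch: CNN layer features -> one brain response.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_images, n_features = 1000, 512
    cnn_features = rng.standard_normal((n_images, n_features))   # one CNN layer per image
    true_weights = rng.standard_normal(n_features)
    meg_response = cnn_features @ true_weights + rng.standard_normal(n_images)  # one source/time point

    X_train, X_test, y_train, y_test = train_test_split(cnn_features, meg_response,
                                                        test_size=0.2, random_state=0)
    model = Ridge(alpha=10.0).fit(X_train, y_train)
    prediction = model.predict(X_test)
    print("encoding accuracy (r):", round(float(np.corrcoef(prediction, y_test)[0, 1]), 3))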
  • Segaert, K., Mazaheri, A., & Hagoort, P. (2018). Binding language: Structuring sentences through precisely timed oscillatory mechanisms. European Journal of Neuroscience, 48(7), 2651-2662. doi:10.1111/ejn.13816.

    Abstract

    Syntactic binding refers to combining words into larger structures. Using EEG, we investigated the neural processes involved in syntactic binding. Participants were auditorily presented two-word sentences (i.e. pronoun and pseudoverb such as ‘I grush’, ‘she grushes’, for which syntactic binding can take place) and wordlists (i.e. two pseudoverbs such as ‘pob grush’, ‘pob grushes’, for which no binding occurs). Comparing these two conditions, we targeted syntactic binding while minimizing contributions of semantic binding and of other cognitive processes such as working memory. We found a converging pattern of results using two distinct analysis approaches: one approach using frequency bands as defined in previous literature, and one data-driven approach in which we looked at the entire range of frequencies between 3 and 30 Hz without the constraints of pre-defined frequency bands. In the syntactic binding (relative to the wordlist) condition, a power increase was observed in the alpha and beta frequency range shortly preceding the presentation of the target word that requires binding, which was maximal over frontal-central electrodes. Our interpretation is that these signatures reflect that language comprehenders expect the need for binding to occur. Following the presentation of the target word in a syntactic binding context (relative to the wordlist condition), an increase in alpha power maximal over a left lateralized cluster of frontal-temporal electrodes was observed. We suggest that this alpha increase relates to syntactic binding taking place. Taken together, our findings suggest that increases in alpha and beta power are reflections of distinct neural processes underlying syntactic binding.
  • Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.

    Abstract

    Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.

    Additional information

    Segaert_Supplementary_data_2013.docx
  • Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.

    Abstract

    Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Seifart, F., Evans, N., Hammarström, H., & Levinson, S. C. (2018). Language documentation twenty-five years on. Language, 94(4), e324-e345. doi:10.1353/lan.2018.0070.

    Abstract

    This discussion note reviews responses of the linguistics profession to the grave issues of language endangerment identified a quarter of a century ago in the journal Language by Krauss, Hale, England, Craig, and others (Hale et al. 1992). Two and a half decades of worldwide research not only have given us a much more accurate picture of the number, phylogeny, and typological variety of the world’s languages, but they have also seen the development of a wide range of new approaches, conceptual and technological, to the problem of documenting them. We review these approaches and the manifold discoveries they have unearthed about the enormous variety of linguistic structures. The reach of our knowledge has increased by about 15% of the world’s languages, especially in terms of digitally archived material, with about 500 languages now reasonably documented thanks to such major programs as DoBeS, ELDP, and DEL. But linguists are still falling behind in the race to document the planet’s rapidly dwindling linguistic diversity, with around 35–42% of the world’s languages still substantially undocumented, and in certain countries (such as the US) the call by Krauss (1992) for a significant professional realignment toward language documentation has only been heeded in a few institutions. Apart from the need for an intensified documentarist push in the face of accelerating language loss, we argue that existing language documentation efforts need to do much more to focus on crosslinguistically comparable data sets, sociolinguistic context, semantics, and interpretation of text material, and on methods for bridging the ‘transcription bottleneck’, which is creating a huge gap between the amount we can record and the amount in our transcribed corpora.
  • Sekine, K., Wood, C., & Kita, S. (2018). Gestural depiction of motion events in narrative increases symbolic distance with age. Language, Interaction and Acquisition, 9(1), 11-21. doi:10.1075/lia.15020.sek.

    Abstract

    We examined gesture representation of motion events in narratives produced by three- and nine-year-olds, and adults. Two aspects of gestural depiction were analysed: how protagonists were depicted, and how gesture space was used. We found that older groups were more likely to express protagonists as an object that a gesturing hand held and manipulated, and less likely to express protagonists with whole-body enactment gestures. Furthermore, for older groups, gesture space increasingly became less similar to narrated space. The older groups were less likely to use large gestures or gestures in the periphery of the gesture space to represent movements that were large relative to a protagonist’s body or that took place next to a protagonist. They were also less likely to produce gestures on a physical surface (e.g. table) to represent movement on a surface in narrated events. The development of gestural depiction indicates that older speakers become less immersed in the story world and start to control and manipulate story representation from an outside perspective in a bounded and stage-like gesture space. We discuss this developmental shift in terms of increasing symbolic distancing (Werner & Kaplan, 1963).
  • Sekine, K., Rose, M. L., Foster, A. M., Attard, M. C., & Lanyon, L. E. (2013). Gesture production patterns in aphasic discourse: In-depth description and preliminary predictions. Aphasiology, 27(9), 1031-1049. doi:10.1080/02687038.2013.803017.

    Abstract

    Background: Gesture frequently accompanies speech in healthy speakers. For many individuals with aphasia, gestures are a target of speech-language pathology intervention, either as an alternative form of communication or as a facilitative device for language restoration. The patterns of gesture production for people with aphasia and the participant variables that predict these patterns remain unclear. Aims: We aimed to examine gesture production during conversational discourse in a large sample of individuals with aphasia. We used a detailed gesture coding system to determine patterns of gesture production associated with specific aphasia types and severities. Methods & Procedures: We analysed conversation samples from AphasiaBank, gathered from 46 people with post-stroke aphasia and 10 healthy matched controls all of whom had gestured at least once during a story re-tell task. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type and severity were examined with a series of analyses of variance (ANOVA) statistics, and multiple regression analysis was used to examine these potential predictors of gesture production patterns. Outcomes & Results: Individuals with aphasia gestured significantly more frequently than healthy controls. Aphasia type and severity impacted significantly on gesture type in specific identified patterns detailed here, especially on the production of meaning-laden gestures. Conclusions: These patterns suggest the opportunity for gestures as targets of aphasia therapy. Aphasia fluency accounted for a greater degree of data variability than aphasia severity or naming skills. More work is required to delineate predictive factors.
  • Sekine, K., & Rose, M. L. (2013). The relationship of aphasia type and gesture production in people with aphasia. American Journal of Speech-Language Pathology, 22, 662-672. doi:10.1044/1058-0360(2013/12-0030).

    Abstract

    Purpose For many individuals with aphasia, gestures form a vital component of message transfer and are the target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia—in a consistent discourse sampling condition and with a detailed gesture coding system—to determine patterns of gesture production associated with specific types of aphasia. Method The authors analyzed story retell samples from AphasiaBank (TalkBank, n.d.), gathered from 98 individuals with aphasia resulting from stroke and 64 typical controls. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type were examined using a series of chi-square, Fisher exact, and logistic regression statistics. Results A significantly higher proportion of individuals with aphasia gestured as compared to typical controls, and for many individuals with aphasia, this gesture was iconic and was capable of communicative load. Aphasia type impacted significantly on gesture type in specific identified patterns, detailed here. Conclusion These type-specific patterns suggest the opportunity for gestures as targets of aphasia therapy.
  • Senft, G. (1998). Body and mind in the Trobriand Islands. Ethos, 26, 73-104. doi:10.1525/eth.1998.26.1.73.

    Abstract

    This article discusses how the Trobriand Islanders speak about body and mind. It addresses the following questions: do the linguistic data fit into theories about lexical universals of body-part terminology? Can we make inferences about the Trobrianders' conceptualization of psychological and physical states on the basis of these data? If a Trobriand Islander sees these idioms as external manifestations of inner states, then can we interpret them as a kind of ethnopsychological theory about the body and its role for emotions, knowledge, thought, memory, and so on? Can these idioms be understood as representations of Trobriand ethnopsychological theory?
  • Senft, G. (1998). [Review of the book Anthropological linguistics: An introduction by William A. Foley]. Linguistics, 36, 995-1001.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (1986). [Review of the book Under the Tumtum tree: From nonsense to sense in nonautomatic comprehension by Marlene Dolitsky]. Journal of Pragmatics, 10, 273-278. doi:10.1016/0378-2166(86)90094-9.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (1991). Network models to describe the Kilivila classifier system. Oceanic Linguistics, 30, 131-155. Retrieved from http://www.jstor.org/stable/3623085.
  • Senft, B., & Senft, G. (1986). Ninikula - Fadenspiele auf den Trobriand-Inseln, Papua-Neuguinea: Untersuchungen zum Spiele-Repertoire unter besonderer Berücksichtigung der Spiel-begleitenden Texte. Baessler-Archiv: Beiträge zur Völkerkunde, 34(1), 93-235.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (1986). Adjectives as adjectives in Sranan. Journal of Pidgin and Creole Languages, 1(1), 123-134.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1998). [Review of the book Adverbial subordination: A typology and history of adverbial subordinators based on European languages by Bernd Kortmann]. Cognitive Linguistics, 9(3), 317-319. doi:10.1515/cogl.1998.9.3.315.
  • Seuren, P. A. M. (1998). [Review of the book The Dutch pendulum: Linguistics in the Netherlands 1740-1900 by Jan Noordegraaf]. Bulletin of the Henry Sweet Society, 31, 46-50.
  • Seuren, P. A. M. (1986). Formal theory and the ecology of language. Theoretical Linguistics, 13(1), 1-18. doi:10.1515/thli.1986.13.1-2.1.
  • Seuren, P. A. M. (1986). La transparence sémantique et la genèse des langues créoles: Le cas du Créole mauricien. Études Créoles, 9, 169-183.
  • Seuren, P. A. M. (1991). Grammatika als algorithme: Rekenen met taal. Koninklijke Nederlandse Akademie van Wetenschappen. Mededelingen van de Afdeling Letterkunde, Nieuwe Reeks, 54(2), 25-63.
  • Seuren, P. A. M. (1986). Helpen en helpen is twee. Glot, 9(1/2), 110-117.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M. (1998). Obituary. Herman Christiaan Wekker 1943–1997. Journal of Pidgin and Creole Languages, 13(1), 159-162.
  • Seuren, P. A. M. (1986). The self-styling of relevance theory [Review of the book Relevance, Communication and Cognition by Dan Sperber and Deirdre Wilson]. Journal of Semantics, 5(2), 123-143. doi:10.1093/jos/5.2.123.
  • Shao, Z., Meyer, A. S., & Roelofs, A. (2013). Selective and nonselective inhibition of competitors in picture naming. Memory & Cognition, 41(8), 1200-1211. doi:10.3758/s13421-013-0332-7.

    Abstract

    The present study examined the relation between nonselective inhibition and selective inhibition in picture naming performance. Nonselective inhibition refers to the ability to suppress any unwanted response, whereas selective inhibition refers to the ability to suppress specific competing responses. The degree of competition in picture naming was manipulated by presenting targets along with distractor words that could be semantically related (e.g., a picture of a dog combined with the word cat) or unrelated (tree) to the picture name. The mean naming response time (RT) was longer in the related than in the unrelated condition, reflecting semantic interference. Delta plot analyses showed that participants with small mean semantic interference effects employed selective inhibition more effectively than did participants with larger semantic interference effects. The participants were also tested on the stop-signal task, which taps nonselective inhibition. Their performance on this task was correlated with their mean naming RT but, importantly, not with the selective inhibition indexed by the delta plot analyses and the magnitude of the semantic interference effect. These results indicate that nonselective inhibition ability and selective inhibition of competitors in picture naming are separable to some extent.
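
    The sketch below illustrates the delta plot logic referred to in this abstract: response times from the semantically related and unrelated conditions are summarized at matching quantiles, and the interference effect (the delta) is examined as a function of response speed. The RT data are simulated; this is not the authors' analysis code.

    # Hypothetical delta-plot sketch for a semantic interference effect.
    import numpy as np

    rng = np.random.default_rng(1)
    rt_unrelated = rng.gamma(shape=8, scale=80, size=200) + 300   # fake naming RTs (ms)
    rt_related = rng.gamma(shape=8, scale=85, size=200) + 320     # slower on average

    quantiles = [0.25, 0.50, 0.75]
    q_unrel = np.quantile(rt_unrelated, quantiles)
    q_rel = np.quantile(rt_related, quantiles)
    delta = q_rel - q_unrel                                       # interference per quantile
    bin_mean = (q_rel + q_unrel) / 2                              # x-axis of the delta plot

    for q, x, d in zip(quantiles, bin_mean, delta):
        print(f"quantile {q:.2f}: mean RT {x:6.1f} ms, interference {d:5.1f} ms")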
  • Sikora, K., & Roelofs, A. (2018). Switching between spoken language-production tasks: the role of attentional inhibition and enhancement. Language, Cognition and Neuroscience, 33(7), 912-922. doi:10.1080/23273798.2018.1433864.

    Abstract

    Since Pillsbury [1908. Attention. London: Swan Sonnenschein & Co], the issue of whether attention operates through inhibition or enhancement has been on the scientific agenda. We examined whether overcoming previous attentional inhibition or enhancement is the source of asymmetrical switch costs in spoken noun-phrase production and colour-word Stroop tasks. In Experiment 1, using bivalent stimuli, we found asymmetrical costs in response times for switching between long and short phrases and between Stroop colour naming and reading. However, in Experiment 2, using bivalent stimuli for the weaker tasks (long phrases, colour naming) and univalent stimuli for the stronger tasks (short phrases, word reading), we obtained an asymmetrical switch cost for phrase production, but a symmetrical cost for Stroop. The switch cost evidence was quantified using Bayesian statistical analyses. Our findings suggest that switching between phrase types involves inhibition, whereas switching between colour naming and reading involves enhancement. Thus, the attentional mechanism depends on the language-production task involved. The results challenge theories of task switching that assume only one attentional mechanism, inhibition or enhancement, rather than both mechanisms.
  • Silva, S., Folia, V., Inácio, F., Castro, S. L., & Petersson, K. M. (2018). Modality effects in implicit artificial grammar learning: An EEG study. Brain Research, 1687, 50-59. doi:10.1016/j.brainres.2018.02.020.

    Abstract

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
  • Sjerps, M. J., & Smiljanic, R. (2013). Compensation for vocal tract characteristics across native and non-native languages. Journal of Phonetics, 41, 145-155. doi:10.1016/j.wocn.2013.01.005.

    Abstract

    Perceptual compensation for speaker vocal tract properties was investigated in four groups of listeners: native speakers of English and native speakers of Dutch, native speakers of Spanish with low proficiency in English, and Spanish-English bilinguals. Listeners categorized targets on a [sofo] to [sufu] continuum. Targets were preceded by sentences that were manipulated to have either a high or a low F1 contour. All listeners performed the categorization task for targets that were preceded by Spanish, English and Dutch precursors. Results show that listeners from each of the four language backgrounds compensate for speaker vocal tract properties regardless of language-specific vowel inventory properties. Listeners also compensate when they listen to stimuli in another language. The results suggest that patterns of compensation are mainly determined by auditory properties of precursor sentences.
  • Sjerps, M. J. (2013). [Contribution to NextGen VOICES survey: Science communication's future]. Science, 340 (no. 6128, online supplement). Retrieved from http://www.sciencemag.org/content/340/6128/28/suppl/DC1.

    Abstract

    One of the important challenges for the development of science communication concerns the current problems with the under-exposure of null results. I suggest that each article published in a top scientific journal can get tagged (online) with attempts to replicate. As such, a future reader of an article will also be able to see whether replications have been attempted and how these turned out. Editors and/or reviewers decide whether a replication is of sound quality. The authors of the main article have the option to review the replication and can provide a supplementary comment with each attempt that is added. After 5 or 10 years, and provided enough attempts to replicate, the authors of the main article get the opportunity to discuss/review their original study in light of the outcomes of the replications. This approach has two important strengths: 1) The approach would provide researchers with the opportunity to show that they deliver scientifically thorough work, but sometimes just fail to replicate the result that others have reported. This can be especially valuable for the career opportunities of promising young researchers; 2) perhaps even more important, the visibility of replications provides an important incentive for researchers to publish findings only if they are sure that their effects are reliable (and thereby reduce the influence of "experimenter degrees of freedom" or even outright fraud). The proposed approach will stimulate researchers to look beyond the point of publication of their studies.
  • Sjerps, M. J., Zhang, C., & Peng, G. (2018). Lexical Tone is Perceived Relative to Locally Surrounding Context, Vowel Quality to Preceding Context. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 914-924. doi:10.1037/xhp0000504.

    Abstract

    Important speech cues such as lexical tone and vowel quality are perceptually contrasted to the distribution of those same cues in surrounding contexts. However, it is unclear whether preceding and following contexts have similar influences, and to what extent those influences are modulated by the auditory history of previous trials. To investigate this, Cantonese participants labeled sounds from (a) a tone continuum (mid- to high-level), presented with a context that had raised or lowered F0 values and (b) a vowel quality continuum (/u/ to /o/), where the context had raised or lowered F1 values. Contexts with high or low F0/F1 were presented in separate blocks or intermixed in 1 block. Contexts were presented following (Experiment 1) or preceding the target continuum (Experiment 2). Contrastive effects were found for both tone and vowel quality (e.g., decreased F0 values in contexts lead to more high tone target judgments and vice versa). Importantly, however, lexical tone was only influenced by F0 in immediately preceding and following contexts. Vowel quality was only influenced by the F1 in preceding contexts, but this extended to contexts from preceding trials. Contextual influences on tone and vowel quality are qualitatively different, which has important implications for understanding the mechanism of context effects in speech perception.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2013). Evidence for precategorical extrinsic vowel normalization. Attention, Perception & Psychophysics, 75, 576-587. doi:10.3758/s13414-012-0408-7.

    Abstract

    Three experiments investigated whether extrinsic vowel normalization takes place largely at a categorical or a precategorical level of processing. Traditional vowel normalization effects in categorization were replicated in Experiment 1: Vowels taken from an [ɪ]-[ε] continuum were more often interpreted as /ɪ/ (which has a low first formant, F1) when the vowels were heard in contexts that had a raised F1 than when the contexts had a lowered F1. This was established with contexts that consisted of only two syllables. These short contexts were necessary for Experiment 2, a discrimination task that encouraged listeners to focus on the perceptual properties of vowels at a precategorical level. Vowel normalization was again found: Ambiguous vowels were more easily discriminated from an endpoint [ε] than from an endpoint [ɪ] in a high-F1 context, whereas the opposite was true in a low-F1 context. Experiment 3 measured discriminability between pairs of steps along the [ɪ]-[ε] continuum. Contextual influences were again found, but without discrimination peaks, contrary to what was predicted from the same participants' categorization behavior. Extrinsic vowel normalization therefore appears to be a process that takes place at least in part at a precategorical processing level.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Slone, L. K., Abney, D. H., Borjon, J. I., Chen, C.-h., Franchak, J. M., Pearcy, D., Suarez-Rivera, C., Xu, T. L., Zhang, Y., Smith, L. B., & Yu, C. (2018). Gaze in action: Head-mounted eye tracking of children's dynamic visual attention during naturalistic behavior. Journal of Visualized Experiments, (141): e58496. doi:10.3791/58496.

    Abstract

    Young children's visual environments are dynamic, changing moment-by-moment as children physically and visually explore spaces and objects and interact with people around them. Head-mounted eye tracking offers a unique opportunity to capture children's dynamic egocentric views and how they allocate visual attention within those views. This protocol provides guiding principles and practical recommendations for researchers using head-mounted eye trackers in both laboratory and more naturalistic settings. Head-mounted eye tracking complements other experimental methods by enhancing opportunities for data collection in more ecologically valid contexts through increased portability and freedom of head and body movements compared to screen-based eye tracking. This protocol can also be integrated with other technologies, such as motion tracking and heart-rate monitoring, to provide a higher-density multimodal dataset for examining natural behavior, learning, and development than was previously possible. This paper illustrates the types of data generated from head-mounted eye tracking in a study designed to investigate visual attention in one natural context for toddlers: free-flowing toy play with a parent. Successful use of this protocol will allow researchers to collect data that can be used to answer questions not only about visual attention, but also about a broad range of other perceptual, cognitive, and social skills and their development.
  • De Smedt, F., Merchie, E., Barendse, M. T., Rosseel, Y., De Naeghel, J., & Van Keer, H. (2018). Cognitive and motivational challenges in writing: Studying the relation with writing performance across students' gender and achievement level. Reading Research Quarterly, 53(2), 249-272. doi:10.1002/rrq.193.

    Abstract

    In the past, several assessment reports on writing repeatedly showed that elementary school students do not develop the essential writing skills to be successful in school. In this respect, prior research has pointed to the fact that cognitive and motivational challenges are at the root of the rather basic level of elementary students' writing performance. Additionally, previous research has revealed gender and achievement-level differences in elementary students' writing. In view of providing effective writing instruction for all students to overcome writing difficulties, the present study provides more in-depth insight into (a) how cognitive and motivational challenges mediate and correlate with students' writing performance and (b) whether and how these relations vary for boys and girls and for writers of different achievement levels. In the present study, 1,577 fifth- and sixth-grade students completed questionnaires regarding their writing self-efficacy, writing motivation, and writing strategies. In addition, half of the students completed two writing tests, respectively focusing on the informational or narrative text genre. Based on multiple group structural equation modeling (MG-SEM), we put forward two models: a MG-SEM model for boys and girls and a MG-SEM model for low, average, and high achievers. The results underline the importance of studying writing models for different groups of students in order to gain more refined insight into the complex interplay between motivational and cognitive challenges related to students' writing performance.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4: 528. doi:10.3389/fpsyg.2013.00528.

    Abstract

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
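    As a purely illustrative aid, and under the assumption of a much-simplified architecture, the toy sketch below shows the general idea of a shared integration layer that combines linguistic and visual input and maps the result onto a fixation distribution over displayed objects. The weights, layer sizes, and readout are arbitrary placeholders; this is not the authors' trained model.

```python
# Toy sketch of a shared ("hub") layer integrating speech and vision; not the authors' model.
import numpy as np

rng = np.random.default_rng(42)
n_phon, n_vis, n_hub = 20, 30, 50
W_phon = rng.normal(scale=0.1, size=(n_hub, n_phon))   # hypothetical speech-to-hub weights
W_vis = rng.normal(scale=0.1, size=(n_hub, n_vis))     # hypothetical vision-to-hub weights

def gaze_distribution(phon_input, visual_objects):
    """phon_input: (n_phon,) current spoken-word representation.
    visual_objects: (n_objects, n_vis) visual codes of the displayed objects.
    Returns a probability of fixating each displayed object."""
    hub = np.tanh(W_phon @ phon_input + W_vis @ visual_objects.mean(axis=0))
    scores = visual_objects @ W_vis.T @ hub            # how well each object matches the hub state
    expd = np.exp(scores - scores.max())
    return expd / expd.sum()                           # softmax over the display

print(gaze_distribution(rng.normal(size=n_phon), rng.normal(size=(4, n_vis))))
```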
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
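    The kind of dependency testing described here can be illustrated with a small likelihood-ratio sketch: fit one logistic boundary for the second classification that ignores the first response, fit another whose location and steepness depend on that response, and compare the fits. This is a hedged toy example with made-up data, not the model proposed in the paper.

```python
# Toy likelihood-ratio test for a processing dependency between two binary classifications.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def nll_independent(params, x, a, b):
    # One logistic boundary for classification B, ignoring the A response.
    b0, b1 = params
    p = 1 / (1 + np.exp(-(b0 + b1 * x)))
    return -np.sum(b * np.log(p + 1e-12) + (1 - b) * np.log(1 - p + 1e-12))

def nll_dependent(params, x, a, b):
    # Separate boundary location/steepness for B depending on the A response.
    nll = 0.0
    for outcome, (b0, b1) in zip((0, 1), (params[:2], params[2:])):
        m = a == outcome
        p = 1 / (1 + np.exp(-(b0 + b1 * x[m])))
        nll -= np.sum(b[m] * np.log(p + 1e-12) + (1 - b[m]) * np.log(1 - p + 1e-12))
    return nll

# x: stimulus values; a, b: simulated binary responses on the two classifications.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)
a = (x + rng.normal(0, 0.5, x.size) > 0).astype(int)
b = (x + 0.3 * a + rng.normal(0, 0.5, x.size) > 0).astype(int)

fit0 = minimize(nll_independent, [0.0, 1.0], args=(x, a, b))
fit1 = minimize(nll_dependent, [0.0, 1.0, 0.0, 1.0], args=(x, a, b))
lr = 2 * (fit0.fun - fit1.fun)      # likelihood-ratio statistic
p_value = chi2.sf(lr, df=2)         # the dependency adds two parameters
print(lr, p_value)
```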
  • Smulders, F. T. Y., Ten Oever, S., Donkers, F. C. L., Quaedflieg, C. W. E. M., & Van de Ven, V. (2018). Single-trial log transformation is optimal in frequency analysis of resting EEG alpha. European Journal of Neuroscience, 48(7), 2585-2598. doi:10.1111/ejn.13854.

    Abstract

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, was applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single-epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude.
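    A minimal sketch of the recommended order of operations, under the assumptions that single-channel EEG epochs are available as NumPy arrays and that Welch's method is an acceptable power estimate: alpha-band power is computed per epoch, log-transformed at the single-epoch level, and only then averaged. Band limits, sampling rate, and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch: single-epoch log transform of alpha power (illustrative, not the authors' code).
import numpy as np
from scipy.signal import welch

def alpha_power_per_epoch(epochs, fs, band=(8.0, 12.0)):
    """epochs: (n_epochs, n_samples) array for one channel."""
    freqs, psd = welch(epochs, fs=fs, nperseg=min(epochs.shape[1], int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)           # mean alpha-band power per epoch

def log_alpha_magnitude(epochs, fs):
    # Log-transform each epoch first, then average (the order the paper argues for).
    return np.log(alpha_power_per_epoch(epochs, fs)).mean()

# Synthetic demo: 40 one-second epochs at 250 Hz containing a 10 Hz component.
fs = 250.0
t = np.arange(int(fs)) / fs
rng = np.random.default_rng(1)
epochs = np.sin(2 * np.pi * 10 * t) + rng.normal(size=(40, t.size))
print(log_alpha_magnitude(epochs, fs))
# The Berger effect could then be tested by comparing these values between
# eyes-closed and eyes-open recordings (e.g., with scipy.stats.ttest_rel).
```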
  • Snijders, T. M., Milivojevic, B., & Kemner, C. (2013). Atypical excitation-inhibition balance in autism captured by the gamma response to contextual modulation. NeuroImage: Clinical, 3, 65-72. doi:10.1016/j.nicl.2013.06.015.

    Abstract

    Atypical visual perception in people with autism spectrum disorders (ASD) is hypothesized to stem from an imbalance in excitatory and inhibitory processes in the brain. We used neuronal oscillations in the gamma frequency range (30-90 Hz), which emerge from a balanced interaction of excitation and inhibition in the brain, to assess contextual modulation processes in early visual perception. Electroencephalography was recorded in 12 high-functioning adults with ASD and 12 age- and IQ-matched control participants. Oscillations in the gamma frequency range were analyzed in response to stimuli consisting of small line-like elements. Orientation-specific contextual modulation was manipulated by parametrically increasing the amount of homogeneously oriented elements in the stimuli. The stimuli elicited a strong steady-state gamma response around the refresh-rate of 60 Hz, which was larger for controls than for participants with ASD. The amount of orientation homogeneity (contextual modulation) influenced the gamma response in control subjects, while for subjects with ASD this was not the case. The atypical steady-state gamma response to contextual modulation in subjects with ASD may capture the link between an imbalance in excitatory and inhibitory neuronal processing and atypical visual processing in ASD.
  • Snijders Blok, L., Rousseau, J., Twist, J., Ehresmann, S., Takaku, M., Venselaar, H., Rodan, L. H., Nowak, C. B., Douglas, J., Swoboda, K. J., Steeves, M. A., Sahai, I., Stumpel, C. T. R. M., Stegmann, A. P. A., Wheeler, P., Willing, M., Fiala, E., Kochhar, A., Gibson, W. T., Cohen, A. S. A., Agbahovbe, R., Innes, A. M., Au, P. Y. B., Rankin, J., Anderson, I. J., Skinner, S. A., Louie, R. J., Warren, H. E., Afenjar, A., Keren, B., Nava, C., Buratti, J., Isapof, A., Rodriguez, D., Lewandowski, R., Propst, J., Van Essen, T., Choi, M., Lee, S., Chae, J. H., Price, S., Schnur, R. E., Douglas, G., Wentzensen, I. M., Zweier, C., Reis, A., Bialer, M. G., Moore, C., Koopmans, M., Brilstra, E. H., Monroe, G. R., Van Gassen, K. L. I., Van Binsbergen, E., Newbury-Ecob, R., Bownass, L., Bader, I., Mayr, J. A., Wortmann, S. B., Jakielski, K. J., Strand, E. A., Kloth, K., Bierhals, T., The DDD study, Roberts, J. D., Petrovich, R. M., Machida, S., Kurumizaka, H., Lelieveld, S., Pfundt, R., Jansen, S., Derizioti, P., Faivre, L., Thevenon, J., Assoum, M., Shriberg, L., Kleefstra, T., Brunner, H. G., Wade, P. A., Fisher, S. E., & Campeau, P. M. (2018). CHD3 helicase domain mutations cause a neurodevelopmental syndrome with macrocephaly and impaired speech and language. Nature Communications, 9: 4619. doi:10.1038/s41467-018-06014-6.

    Abstract

    Chromatin remodeling is of crucial importance during brain development. Pathogenic alterations of several chromatin remodeling ATPases have been implicated in neurodevelopmental disorders. We describe an index case with a de novo missense mutation in CHD3, identified during whole genome sequencing of a cohort of children with rare speech disorders. To gain a comprehensive view of features associated with disruption of this gene, we use a genotype-driven approach, collecting and characterizing 35 individuals with de novo CHD3 mutations and overlapping phenotypes. Most mutations cluster within the ATPase/helicase domain of the encoded protein. Modeling their impact on the three-dimensional structure demonstrates disturbance of critical binding and interaction motifs. Experimental assays with six of the identified mutations show that a subset directly affects ATPase activity, and all but one yield alterations in chromatin remodeling. We implicate de novo CHD3 mutations in a syndrome characterized by intellectual disability, macrocephaly, and impaired speech and language.
  • Snijders Blok, L., Hiatt, S. M., Bowling, K. M., Prokop, J. W., Engel, K. L., Cochran, J. N., Bebin, E. M., Bijlsma, E. K., Ruivenkamp, C. A. L., Terhal, P., Simon, M. E. H., Smith, R., Hurst, J. A., The DDD study, MCLaughlin, H., Person, R., Crunk, A., Wangler, M. F., Streff, H., Symonds, J. D., Zuberi, S. M., Elliott, K. S., Sanders, V. R., Masunga, A., Hopkin, R. J., Dubbs, H. A., Ortiz-Gonzalez, X. R., Pfundt, R., Brunner, H. G., Fisher, S. E., Kleefstra, T., & Cooper, G. M. (2018). De novo mutations in MED13, a component of the Mediator complex, are associated with a novel neurodevelopmental disorder. Human Genetics, 137(5), 375-388. doi:10.1007/s00439-018-1887-y.

    Abstract

    Many genetic causes of developmental delay and/or intellectual disability (DD/ID) are extremely rare, and robust discovery of these requires both large-scale DNA sequencing and data sharing. Here we describe a GeneMatcher collaboration which led to a cohort of 13 affected individuals harboring protein-altering variants, 11 of which are de novo, in MED13; the only inherited variant was transmitted to an affected child from an affected mother. All patients had intellectual disability and/or developmental delays, including speech delays or disorders. Other features that were reported in two or more patients include autism spectrum disorder, attention deficit hyperactivity disorder, optic nerve abnormalities, Duane anomaly, hypotonia, mild congenital heart abnormalities, and dysmorphisms. Six affected individuals had mutations that are predicted to truncate the MED13 protein, six had missense mutations, and one had an in-frame deletion of one amino acid. Out of the seven non-truncating mutations, six clustered in two specific locations of the MED13 protein: an N-terminal and C-terminal region. The four N-terminal clustering mutations affect two adjacent amino acids that are known to be involved in MED13 ubiquitination and degradation, p.Thr326 and p.Pro327. MED13 is a component of the CDK8-kinase module that can reversibly bind Mediator, a multi-protein complex that is required for Polymerase II transcription initiation. Mutations in several other genes encoding subunits of Mediator have been previously shown to associate with DD/ID, including MED13L, a paralog of MED13. Thus, our findings add MED13 to the group of CDK8-kinase module-associated disease genes.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.

    Abstract

    Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species.
  • Speed, L. J., & Majid, A. (2018). An exception to mental simulation: No evidence for embodied odor language. Cognitive Science, 42(4), 1146-1178. doi:10.1111/cogs.12593.

    Abstract

    Do we mentally simulate olfactory information? We investigated mental simulation of odors and sounds in two experiments. Participants retained a word while they smelled an odor or heard a sound, then rated odor/sound intensity and recalled the word. Later odor/sound recognition was also tested, and pleasantness and familiarity judgments were collected. Word recall was slower when the sound and the word mismatched (e.g., a bee sound with the word typhoon). Sound recognition was higher when sounds were paired with a match or near-match word (e.g., a bee sound with bee or buzzer). This indicates that sound words are mentally simulated. However, using the same paradigm, no memory effects were observed for odor. Instead, it appears that odor words only affect lexical-semantic representations, as demonstrated by higher ratings of odor intensity and pleasantness when an odor was paired with a match or near-match word (e.g., peach odor with peach or mango). These results suggest fundamental differences in how odor words and sound words are represented.

  • Speed, L. J., & Majid, A. (2018). Superior olfactory language and cognition in odor-color synaesthesia. Journal of Experimental Psychology: Human Perception and Performance, 44(3), 468-481. doi:10.1037/xhp0000469.

    Abstract

    Olfaction is often considered a vestigial sense in humans, demoted throughout evolution to make way for the dominant sense of vision. This perspective on olfaction is reflected in how we think and talk about smells in the West, with odor imagery and odor language reported to be difficult. In the present study we demonstrate that odor cognition is superior in odor-color synaesthesia, where there are additional sensory connections to odor concepts. Synaesthesia is a neurological phenomenon in which input in 1 modality leads to involuntary perceptual associations. Semantic accounts of synaesthesia posit synaesthetic associations are mediated by activation of inducing concepts. Therefore, synaesthetic associations may strengthen conceptual representations. To test this idea, we ran 6 odor-color synaesthetes and 17 matched controls on a battery of tasks exploring odor and color cognition. We found synaesthetes outperformed controls on tests of both odor and color discrimination, demonstrating for the first time enhanced perception in both the inducer (odor) and concurrent (color) modality. So, not only do synaesthetes have additional perceptual experiences in comparison to controls, their primary perceptual experience is also different. Finally, synaesthetes were more consistent and accurate at naming odors. We propose synaesthetic associations to odors strengthen odor concepts, making them more differentiated (facilitating odor discrimination) and easier to link with lexical representations (facilitating odor naming). In summary, we show for the first time that both odor language and perception are enhanced in people with synaesthetic associations to odors.
  • Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.

    Abstract

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes.
  • Starreveld, P. A., La Heij, W., & Verdonschot, R. G. (2013). Time course analysis of the effects of distractor frequency and categorical relatedness in picture naming: An evaluation of the response exclusion account. Language and Cognitive Processes, 28(5), 633-654. doi:10.1080/01690965.2011.608026.

    Abstract

    The response exclusion account (REA), advanced by Mahon and colleagues, localises the distractor frequency effect and the semantic interference effect in picture naming at the level of the response output buffer. We derive four predictions from the REA: (1) the size of the distractor frequency effect should be identical to the frequency effect obtained when distractor words are read aloud, (2) the distractor frequency effect should not change in size when stimulus-onset asynchrony (SOA) is manipulated, (3) the interference effect induced by a distractor word (as measured from a nonword control distractor) should increase in size with increasing SOA, and (4) the word frequency effect and the semantic interference effect should be additive. The results of the picture-naming task in Experiment 1 and the word-reading task in Experiment 2 refute all four predictions. We discuss a tentative account of the findings obtained within a traditional selection-by-competition model in which both context effects are localised at the level of lexical selection.
  • Stephens, S., Hartz, S., Hoft, N., Saccone, N., Corley, R., Hewitt, J., Hopfer, C., Breslau, N., Coon, H., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Han, Y., Hansel, N., Jiang, C., Korhonen, T., Lind, P., Liu, J., Michel, M., Lyytikäinen, L.-P., Shaffer, J., Short, S., Sun, J., Teumer, A., Thompson, J., Vogelzangs, N., Vink, J., Wenzlaff, A., Wheeler, W., Yang, B.-Z., Aggen, S., Balmforth, A., Baumesiter, S., Beaty, T., Benjamin, D., Bergen, A., Broms, U., Cesarini, D., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D., Foround, T., Furberg, H., Giegling, I., Gillespie, N., Gu, F., Hall, A., Hällfors, J., Han, S., Hartmann, A., Heikkilä, K., Hickie, I., Hottenga, J., Jousilahti, P., Kaakinen, M., Kähönen, M., Koellinger, P., Kittner, S., Konte, B., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S., Mathias, R., McNeil, D., Medlund, S., Montgomery, G., Murray, T., Nauck, M., North, K., Paré, P., Pergadia, M., Ruczinski, I., Salomaa, V., Viikari, J., Willemsen, G., Barnes, K., Boerwinkle, E., Boomsma, D., Caporaso, N., Edenberg, H., Francks, C., Gelernter, J., Grabe, H., Hops, H., Jarvelin, M.-R., Johannesson, M., Kendler, K., Lehtimäki, T., Magnusson, P., Marazita, M., Marchini, J., Mitchell, B., Nöthen, M., Penninx, B., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N., Schwartz, A., Shete, S., Spitz, M., Swan, G., Völzke, H., Veijola, J., Wei, Q., Amos, C., Canon, D., Grucza, R., Hatsukami, D., Heath, A., Johnson, E., Kaprio, J., Madden, P., Martin, N., Stevens, V., Weiss, R., Kraft, P., Bierut, L., & Ehringer, M. (2013). Distinct Loci in the CHRNA5/CHRNA3/CHRNB4 Gene Cluster are Associated with Onset of Regular Smoking. Genetic Epidemiology, 37, 846-859. doi:10.1002/gepi.21760.

    Abstract

    Neuronal nicotinic acetylcholine receptor (nAChR) genes (CHRNA5/CHRNA3/CHRNB4) have been reproducibly associated with nicotine dependence, smoking behaviors, and lung cancer risk. Of the few reports that have focused on early smoking behaviors, association results have been mixed. This meta-analysis examines early smoking phenotypes and SNPs in the gene cluster to determine: (1) whether the most robust association signal in this region (rs16969968) for other smoking behaviors is also associated with early behaviors, and/or (2) if additional statistically independent signals are important in early smoking. We focused on two phenotypes: age of tobacco initiation (AOI) and age of first regular tobacco use (AOS). This study included 56,034 subjects (41 groups) spanning nine countries and evaluated five SNPs including rs1948, rs16969968, rs578776, rs588765, and rs684513. Each dataset was analyzed using a centrally generated script. Meta-analyses were conducted from summary statistics. AOS yielded significant associations with SNPs rs578776 (beta = 0.02, P = 0.004), rs1948 (beta = 0.023, P = 0.018), and rs684513 (beta = 0.032, P = 0.017), indicating protective effects. There were no significant associations for the AOI phenotype. Importantly, rs16969968, the most replicated signal in this region for nicotine dependence, cigarettes per day, and cotinine levels, was not associated with AOI (P = 0.59) or AOS (P = 0.92). These results provide important insight into the complexity of smoking behavior phenotypes, and suggest that association signals in the CHRNA5/A3/B4 gene cluster affecting early smoking behaviors may be different from those affecting the mature nicotine dependence phenotype.
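    The meta-analytic combination described here (per-dataset summary statistics pooled centrally) can be illustrated with a generic fixed-effect, inverse-variance sketch. The numbers below are invented and the function is a simplification for illustration, not the consortium's analysis script.

```python
# Generic fixed-effect, inverse-variance meta-analysis from summary statistics (illustrative).
import numpy as np
from scipy.stats import norm

def inverse_variance_meta(betas, ses):
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                        # weight = inverse sampling variance
    beta_meta = np.sum(w * betas) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    z = beta_meta / se_meta
    p = 2 * norm.sf(abs(z))                 # two-sided p-value
    return beta_meta, se_meta, p

# Example with made-up per-cohort estimates for one SNP:
print(inverse_variance_meta([0.02, 0.035, 0.01], [0.01, 0.02, 0.015]))
```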

  • Stewart, L., Verdonschot, R. G., Nasralla, P., & Lanipekun, J. (2013). Action–perception coupling in pianists: Learned mappings or spatial musical association of response codes (SMARC) effect? Quarterly Journal of Experimental Psychology, 66(1), 37-50. doi:10.1080/17470218.2012.687385.

    Abstract

    The principle of common coding suggests that a joint representation is formed when actions are repeatedly paired with a specific perceptual event. Musicians are occupationally specialized with regard to the coupling between actions and their auditory effects. In the present study, we employed a novel paradigm to demonstrate automatic action–effect associations in pianists. Pianists and nonmusicians pressed keys according to aurally presented number sequences. Numbers were presented at pitches that were neutral, congruent, or incongruent with respect to pitches that would normally be produced by such actions. Response time differences were seen between congruent and incongruent sequences in pianists alone. A second experiment was conducted to determine whether these effects could be attributed to the existence of previously documented spatial/pitch compatibility effects. In a “stretched” version of the task, the pitch distance over which the numbers were presented was enlarged to a range that could not be produced by the hand span used in Experiment 1. The finding of a larger response time difference between congruent and incongruent trials in the original, standard, version compared with the stretched version, in pianists, but not in nonmusicians, indicates that the effects obtained are, at least partially, attributable to learned action effects.
  • Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
  • Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.

    Abstract

    Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same-aged white peers, irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2018). Heritage language exposure impacts voice onset time of Dutch–German simultaneous bilingual preschoolers. Bilingualism: Language and Cognition, 21(3), 598-617. doi:10.1017/S1366728917000116.

    Abstract

    This study assesses the effects of age and language exposure on VOT production in 29 simultaneous bilingual children aged 3;7 to 5;11 who speak German as a heritage language in the Netherlands. Dutch and German have a binary voicing contrast, but the contrast is implemented with different VOT values in the two languages. The results suggest that bilingual children produce ‘voiced’ plosives similarly in their two languages, and these productions are not monolingual-like in either language. Bidirectional cross-linguistic influence between Dutch and German can explain these results. Yet, the bilinguals seemingly have two autonomous categories for Dutch and German ‘voiceless’ plosives. In German, the bilinguals’ aspiration is not monolingual-like, but bilinguals with more heritage language exposure produce more target-like aspiration. Importantly, the amount of exposure to German has no effect on the majority language's ‘voiceless’ category. This implies that more heritage language exposure is associated with more language-specific voicing systems.
  • Stolk, A., Griffin, S., Van der Meij, R., Dewar, C., Saez, I., Lin, J. J., Piantoni, G., Schoffelen, J.-M., Knight, R. T., & Oostenveld, R. (2018). Integrated analysis of anatomical and electrophysiological human intracranial data. Nature Protocols, 13, 1699-1723. doi:10.1038/s41596-018-0009-6.

    Abstract

    Human intracranial electroencephalography (iEEG) recordings provide data with much greater spatiotemporal precision than is possible from data obtained using scalp EEG, magnetoencephalography (MEG), or functional MRI. Until recently, the fusion of anatomical data (MRI and computed tomography (CT) images) with electrophysiological data and their subsequent analysis have required the use of technologically and conceptually challenging combinations of software. Here, we describe a comprehensive protocol that enables complex raw human iEEG data to be converted into more readily comprehensible illustrative representations. The protocol uses an open-source toolbox for electrophysiological data analysis (FieldTrip). This allows iEEG researchers to build on a continuously growing body of scriptable and reproducible analysis methods that, over the past decade, have been developed and used by a large research community. In this protocol, we describe how to analyze complex iEEG datasets by providing an intuitive and rapid approach that can handle both neuroanatomical information and large electrophysiological datasets. We provide a worked example using an example dataset. We also explain how to automate the protocol and adjust the settings to enable analysis of iEEG datasets with other characteristics. The protocol can be implemented by a graduate student or postdoctoral fellow with minimal MATLAB experience and takes approximately an hour to execute, excluding the automated cortical surface extraction.
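    The protocol itself is implemented in MATLAB with FieldTrip; purely as a language-neutral illustration of one core step in such anatomical-electrophysiological fusion (expressing electrode positions marked in image voxel space in scanner/world coordinates via the image's affine), a Python sketch using nibabel follows. The affine matrix and voxel coordinates are made-up placeholders; in practice the affine would come from the co-registered CT or MRI (e.g., nibabel.load(...).affine).

```python
# Illustrative only: mapping electrode voxel indices to world (mm) coordinates.
import numpy as np
import nibabel as nib

# Hypothetical voxel-to-world affine (in practice taken from the co-registered image).
affine = np.array([[1.0, 0.0, 0.0, -90.0],
                   [0.0, 1.0, 0.0, -126.0],
                   [0.0, 0.0, 1.0, -72.0],
                   [0.0, 0.0, 0.0, 1.0]])

# Hypothetical electrode positions marked in voxel space.
electrode_voxels = np.array([[120.0, 95.0, 60.0],
                             [122.0, 97.0, 61.0]])

electrode_mm = nib.affines.apply_affine(affine, electrode_voxels)
print(electrode_mm)   # positions in scanner/world coordinates (mm)
```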
  • Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.

    Abstract

    Human referential communication is often thought of as the coding and decoding of a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged already before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent from transient sensorimotor behavior.
  • Stolk, A., Todorovic, A., Schoffelen, J.-M., & Oostenveld, R. (2013). Online and offline tools for head movement compensation in MEG. NeuroImage, 68, 39-48. doi:10.1016/j.neuroimage.2012.11.047.

    Abstract

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments.
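    As a schematic illustration of the offline idea (incorporating the head-position time-series as nuisance regressors in a general linear model), the sketch below regresses hypothetical head-position parameters out of a sensor-level signal. The actual method is available in FieldTrip and operates within the statistical analysis rather than necessarily on raw data in exactly this way; array shapes and names here are assumptions.

```python
# Schematic illustration (not the FieldTrip implementation): treat head-position
# time-series as nuisance regressors in a general linear model.
import numpy as np

def regress_out_headpos(meg, headpos):
    """meg: (n_samples, n_channels) sensor data; headpos: (n_samples, n_params)
    head-position parameters, e.g. translations/rotations relative to the sensors."""
    X = np.column_stack([np.ones(len(headpos)), headpos])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, meg, rcond=None)          # GLM fit per channel
    return meg - X @ beta                                    # residualized signal

# Hypothetical usage with random placeholder data:
rng = np.random.default_rng(1)
cleaned = regress_out_headpos(rng.normal(size=(1000, 10)), rng.normal(size=(1000, 6)))
print(cleaned.shape)
```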
  • Sulik, J. (2018). Cognitive mechanisms for inferring the meaning of novel signals during symbolisation. PLoS One, 13(1): e0189540. doi:10.1371/journal.pone.0189540.

    Abstract

    As participants repeatedly interact using graphical signals (as in a game of Pictionary), the signals gradually shift from being iconic (or motivated) to being symbolic (or arbitrary). The aim here is to test experimentally whether this change in the form of the signal implies a concomitant shift in the inferential mechanisms needed to understand it. The results show that, during early, iconic stages, there is more reliance on creative inferential processes associated with insight problem solving, and that the recruitment of these cognitive mechanisms decreases over time. The variation in inferential mechanism is not predicted by the sign’s visual complexity or iconicity, but by its familiarity, and by the complexity of the relevant mental representations. The discussion explores implications for pragmatics, language evolution, and iconicity research.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.

    Abstract

    In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
  • Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.

    Abstract

    In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development.
  • Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.

    Abstract

    Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized associations was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and left temporo-parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations.
  • Tamariz, M., Roberts, S. G., Martínez, J. I., & Santiago, J. (2018). The Interactive Origin of Iconicity. Cognitive Science, 42, 334-349. doi:10.1111/cogs.12497.

    Abstract

    We investigate the emergence of iconicity, specifically a bouba-kiki effect in miniature artificial languages under different functional constraints: when the languages are reproduced and when they are used communicatively. We ran transmission chains of (a) participant dyads who played an interactive communicative game and (b) individual participants who played a matched learning game. An analysis of the languages over six generations in an iterated learning experiment revealed that in the Communication condition, but not in the Reproduction condition, words for spiky shapes tend to be rated by naive judges as more spiky than the words for round shapes. This suggests that iconicity may not only be the outcome of innovations introduced by individuals, but, crucially, the result of interlocutor negotiation of new communicative conventions. We interpret our results as an illustration of cultural evolution by random mutation and selection (as opposed to by guided variation).
  • Tan, Y., & Martin, R. C. (2018). Verbal short-term memory capacities and executive function in semantic and syntactic interference resolution during sentence comprehension: Evidence from aphasia. Neuropsychologia, 113, 111-125. doi:10.1016/j.neuropsychologia.2018.03.001.

    Abstract

    This study examined the role of verbal short-term memory (STM) and executive function (EF) underlying semantic and syntactic interference resolution during sentence comprehension for persons with aphasia (PWA) with varying degrees of STM and EF deficits. Semantic interference was manipulated by varying the semantic plausibility of the intervening NP as subject of the verb, and syntactic interference was manipulated by varying whether the NP was another subject or an object. Nine PWA were assessed on sentence reading times and on comprehension question performance. PWA showed exaggerated semantic and syntactic interference effects relative to healthy age-matched control subjects. Importantly, correlational analyses showed that while answering comprehension questions, PWA's semantic STM capacity related to their ability to resolve semantic but not syntactic interference. In contrast, PWA's EF abilities related to their ability to resolve syntactic but not semantic interference. Phonological STM deficits were not related to the ability to resolve either type of interference. The results for semantic STM are consistent with prior findings indicating a role for semantic but not phonological STM in sentence comprehension, specifically with regard to maintaining semantic information prior to integration. The results for syntactic interference are consistent with the recent findings suggesting that EF is critical for syntactic processing.
  • Tan, Y., Martin, R. C., & Van Dyke, J. (2013). Verbal WM capacities in sentence comprehension: Evidence from aphasia. Procedia - Social and Behavioral Sciences, 94, 108-109. doi:10.1016/j.sbspro.2013.09.052.
  • Teeling, E., Vernes, S. C., Davalos, L. M., Ray, D. A., Gilbert, M. T. P., Myers, E., & Bat1K Consortium (2018). Bat biology, genomes, and the Bat1K project: To generate chromosome-level genomes for all living bat species. Annual Review of Animal Biosciences, 6, 23-46. doi:10.1146/annurev-animal-022516-022811.

    Abstract

    Bats are unique among mammals, possessing some of the rarest mammalian adaptations, including true self-powered flight, laryngeal echolocation, exceptional longevity, unique immunity, contracted genomes, and vocal learning. They provide key ecosystem services, pollinating tropical plants, dispersing seeds, and controlling insect pest populations, thus driving healthy ecosystems. They account for more than 20% of all living mammalian diversity, and their crown-group evolutionary history dates back to the Eocene. Despite their great numbers and diversity, many species are threatened and endangered. Here we announce Bat1K, an initiative to sequence the genomes of all living bat species (n∼1,300) to chromosome-level assembly. The Bat1K genome consortium unites bat biologists (>132 members as of writing), computational scientists, conservation organizations, genome technologists, and any interested individuals committed to a better understanding of the genetic and evolutionary mechanisms that underlie the unique adaptations of bats. Our aim is to catalog the unique genetic diversity present in all living bats to better understand the molecular basis of their unique adaptations; uncover their evolutionary history; link genotype with phenotype; and ultimately better understand, promote, and conserve bats. Here we review the unique adaptations of bats and highlight how chromosome-level genome assemblies can uncover the molecular basis of these traits. We present a novel sequencing and assembly strategy and review the striking societal and scientific benefits that will result from the Bat1K initiative.
