Publications

  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Schijven, D., Kofink, D., Tragante, V., Verkerke, M., Pulit, S. L., Kahn, R. S., Veldink, J. H., Vinkers, C. H., Boks, M. P., & Luykx, J. J. (2018). Comprehensive pathway analyses of schizophrenia risk loci point to dysfunctional postsynaptic signaling. Schizophrenia Research, 199, 195-202. doi:10.1016/j.schres.2018.03.032.

    Abstract

    Large-scale genome-wide association studies (GWAS) have implicated many low-penetrance loci in schizophrenia. However, its pathological mechanisms are poorly understood, which in turn hampers the development of novel pharmacological treatments. Pathway and gene set analyses carry the potential to generate hypotheses about disease mechanisms and have provided biological context to genome-wide data of schizophrenia. We aimed to examine which biological processes are likely candidates to underlie schizophrenia by integrating novel and powerful pathway analysis tools using data from the largest Psychiatric Genomics Consortium schizophrenia GWAS (N=79,845) and the most recent 2018 schizophrenia GWAS (N=105,318). By applying a primary unbiased analysis (Multi-marker Analysis of GenoMic Annotation; MAGMA) to weigh the role of biological processes from the Molecular Signatures Database (MSigDB), we identified enrichment of common variants in synaptic plasticity and neuron differentiation gene sets. We supported these findings using MAGMA, Meta-Analysis Gene-set Enrichment of variaNT Associations (MAGENTA) and Interval Enrichment Analysis (INRICH) on detailed synaptic signaling pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and found enrichment mainly in the dopaminergic and cholinergic synapses. Moreover, shared genes involved in these neurotransmitter systems made a large contribution to the observed enrichment, protein products of top genes in these pathways showed more direct and indirect interactions than expected by chance, and expression profiles of these genes were largely similar among brain tissues. In conclusion, we provide strong and consistent genetic and protein-interaction-informed evidence for the role of postsynaptic signaling processes in schizophrenia, opening avenues for future translational and psychopharmacological studies.
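    To make the pathway-analysis logic above concrete, the sketch below runs a toy over-representation test with a hypergeometric distribution on made-up gene identifiers and set sizes. It is only an illustration of the general gene-set enrichment idea, not the MAGMA, MAGENTA, or INRICH models used in the study.

```python
# Toy gene-set over-representation test (hypergeometric), illustrating the
# general logic of pathway enrichment; NOT MAGMA/MAGENTA/INRICH. All gene
# names and set sizes are hypothetical.
from scipy.stats import hypergeom

background = {f"GENE{i}" for i in range(1, 20001)}   # 20,000 genes tested
associated = {f"GENE{i}" for i in range(1, 501)}     # 500 GWAS "hit" genes
pathway = {f"GENE{i}" for i in range(480, 560)}      # an 80-gene pathway

overlap = len(associated & pathway)
M, n, N = len(background), len(pathway), len(associated)

# P(overlap >= observed) under random draws without replacement.
p_value = hypergeom.sf(overlap - 1, M, n, N)
print(f"overlap = {overlap}, enrichment p = {p_value:.3g}")
```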
  • Schilberg, L., Engelen, T., Ten Oever, S., Schuhmann, T., De Gelder, B., De Graaf, T. A., & Sack, A. T. (2018). Phase of beta-frequency tACS over primary motor cortex modulates corticospinal excitability. Cortex, 103, 142-152. doi:10.1016/j.cortex.2018.03.001.

    Abstract

    The assessment of corticospinal excitability by means of transcranial magnetic stimulation-induced motor evoked potentials (MEPs) is an established diagnostic tool in neurophysiology and a widely used procedure in fundamental brain research. However, concern about the low reliability of these measures has grown recently. One possible cause of the high variability of MEPs under identical acquisition conditions could be the influence of oscillatory neuronal activity on corticospinal excitability. Based on research showing that transcranial alternating current stimulation (tACS) can entrain neuronal oscillations, we here test whether alpha- or beta-frequency tACS can influence corticospinal excitability in a phase-dependent manner. We applied tACS at individually calibrated alpha- and beta-band oscillation frequencies, or we applied sham tACS. Simultaneous single TMS pulses time-locked to eight equidistant phases of the ongoing tACS signal evoked MEPs. To evaluate offline effects of stimulation frequency, MEP amplitudes were measured before and after tACS. To evaluate whether tACS influences MEP amplitude, we fitted one-cycle sinusoids to the average MEPs elicited at the different phase conditions of each tACS frequency. We found no frequency-specific offline effects of tACS. However, beta-frequency tACS modulation of MEPs was phase-dependent. Post hoc analyses suggested that this effect was specific to participants with a low (<19 Hz) intrinsic beta frequency. In conclusion, by showing that beta tACS influences MEP amplitude in a phase-dependent manner, our results support a potential role of neuronal oscillations in regulating corticospinal excitability. Moreover, our findings may be useful for the development of TMS protocols that improve the reliability of MEPs as a meaningful tool for research applications or for clinical monitoring and diagnosis.
  • Schiller, N. O., & Verdonschot, R. G. (2018). Morphological theory and neurolinguistics. In J. Audring, & F. Masini (Eds.), The Oxford Handbook of Morphological Theory (pp. 554-572). Oxford: Oxford University Press.

    Abstract

    This chapter describes neurolinguistic aspects of morphology, morphological theory, and especially morphological processing. It briefly mentions the main processing models in the literature and how they deal with morphological issues, i.e. full-listing models (all morphologically related words are listed separately in the lexicon and are processed individually), full-parsing or decompositional models (morphologically related words are not listed in the lexicon but are decomposed into their constituent morphemes, each of which is listed in the lexicon), and hybrid, so-called dual route, models (regular morphologically related words are decomposed, irregular words are listed). The chapter also summarizes some important findings from the literature that bear on neurolinguistic aspects of morphological processing, from both language comprehension and language production, taking into consideration neuropsychological patient studies as well as studies employing neuroimaging methods.
  • Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.

    Abstract

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results, but exact measurement through visual inspection of the recordings is extremely time-consuming. We present AlignTool, an open-source alignment tool that provisionally establishes the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool is still highly functional, but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automating the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
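    As a rough illustration of the naming-latency measurement that AlignTool automates far more robustly via Praat and MAUS, the sketch below estimates speech onset in a single recording with a naive short-term energy threshold. The file name, frame length, and threshold are placeholder assumptions, not part of AlignTool.

```python
# Naive speech-onset (naming latency) estimate from a mono WAV file.
# Only a baseline illustration of what forced alignment does more robustly;
# the file path and threshold values are made up.
import numpy as np
from scipy.io import wavfile

def naming_latency_ms(wav_path, frame_ms=10, rel_threshold=0.1):
    rate, signal = wavfile.read(wav_path)
    signal = signal.astype(float)
    if signal.ndim > 1:                       # average channels if stereo
        signal = signal.mean(axis=1)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))           # short-term energy
    above = np.nonzero(rms > rel_threshold * rms.max())[0]
    if above.size == 0:
        return None                                     # no speech detected
    return above[0] * frame_ms                          # latency in ms

# Example call (hypothetical picture-naming trial recording):
# print(naming_latency_ms("trial_042.wav"))
```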
  • Schoenmakers, G.-J., & Piepers, J. (2018). Echter kan het wel. Levende Talen Magazine, 105(4), 10-13.
  • De Schryver, J., Neijt, A., Ghesquière, P., & Ernestus, M. (2013). Zij surfde, maar hij durfte niet: De spellingproblematiek van de zwakke verleden tijd in Nederland en Vlaanderen. Dutch Journal of Applied Linguistics, 2(2), 133-151. doi:10.1075/dujal.2.2.01de.

    Abstract

    Although the spelling of Dutch past-tense forms of weak verbs is generally considered easy (they are, after all, spelled as they sound), even university students make strikingly many errors when choosing between the endings -te and -de. Some of these errors are 'natural' in the sense that they result from the workings of frequency and analogy. At the same time, we find that speakers from the Netherlands make far more errors than Flemish speakers, at least when the verb stem ends in a coronal fricative (s, z, f, v). Since the Dutch participants appear to have a better command of the 'rule' (the mnemonic 't kofschip) than the Flemish participants, the explanation for the difference must be sought in a sound change that has occurred in the Netherlands but not, or hardly, in Flanders: the devoicing of fricatives. The spelling problem calls for didactic and/or policy measures: it could probably be solved to a large extent by slightly adjusting the spelling rules.
  • Schweinfurth, M. K., De Troy, S. E., Van Leeuwen, E. J. C., Call, J., & Haun, D. B. M. (2018). Spontaneous social tool use in Chimpanzees (Pan troglodytes). Journal of Comparative Psychology, 132(4), 455-463. doi:10.1037/com0000127.

    Abstract

    Although there is good evidence that social animals show elaborate cognitive skills to deal with others, there are few reports of animals physically using social agents and their respective responses as means to an end—social tool use. In this case study, we investigated spontaneous and repeated social tool use behavior in chimpanzees (Pan troglodytes). We presented a group of chimpanzees with an apparatus, in which pushing two buttons would release juice from a distantly located fountain. Consequently, any one individual could only either push the buttons or drink from the fountain but never push and drink simultaneously. In this scenario, an adult male attempted to retrieve three other individuals and push them toward the buttons that, if pressed, released juice from the fountain. With this strategy, the social tool user increased his juice intake 10-fold. Interestingly, the strategy was stable over time, which was possibly enabled by playing with the social tools. With over 100 instances, we provide the biggest data set on social tool use recorded among nonhuman animals so far. The repeated use of other individuals as social tools may represent a complex social skill linked to Machiavellian intelligence.
  • Scott, K., Sakkalou, E., Ellis-Davies, K., Hilbrink, E., Hahn, U., & Gattis, M. (2013). Infant contributions to joint attention predict vocabulary development. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 3384-3389). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0602/index.html.

    Abstract

    Joint attention has long been accepted as constituting a privileged circumstance in which word learning prospers. Consequently, research has investigated the role that maternal responsiveness to infant attention plays in predicting language outcomes. However, there has been a recent expansion in research implicating similar predictive effects from individual differences in infant behaviours. Emerging from the foundations of such work comes an interesting question: do the relative contributions of the mother and infant to joint attention episodes impact upon language learning? In an attempt to address this, two joint attention behaviours were assessed as predictors of vocabulary attainment (as measured by OCDI Production Scores). These predictors were: mothers encouraging attention to an object given that their infant was already attending to an object (maternal follow-in); and infants looking to an object given their mothers’ encouragement of attention to an object (infant follow-in). In a sample of 14-month-old children (N=36) we compared the predictive power of these maternal and infant follow-in variables on concurrent and later language performance. Results using Growth Curve Analysis provided evidence that while both maternal follow-in and infant follow-in variables contributed to production scores, infant follow-in was a stronger predictor. Consequently, it does appear to matter whose final contribution establishes joint attention episodes. Infants who more often follow in on their mothers’ encouragement of attention have larger and faster growing vocabularies between 14 and 18 months of age.
  • Scott, S. K., McGettigan, C., & Eisner, F. (2013). The neural basis of links and dissociations between speech perception and production. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 277-294). Cambridge, Mass: MIT Press.
  • Seeliger, K., Fritsche, M., Güçlü, U., Schoenmakers, S., Schoffelen, J.-M., Bosch, S. E., & Van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage, 180, 253-266. doi:10.1016/j.neuroimage.2017.07.018.

    Abstract

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.
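    The encoding and decoding steps described above can be sketched in linearized form: ridge-regress each sensor's (or source's) response onto CNN layer activations, then identify a held-out stimulus by matching predicted to observed response patterns. The array sizes and simulated data below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal linearized encoding/decoding sketch (illustration only): ridge-regress
# simulated MEG responses onto simulated CNN features, then decode held-out
# stimuli by correlating predicted and observed response patterns.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_sensors = 900, 100, 512, 200   # illustrative sizes

X_train = rng.standard_normal((n_train, n_feat))   # CNN layer activations
X_test = rng.standard_normal((n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_sensors))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_sensors))
Y_test = X_test @ W_true + rng.standard_normal((n_test, n_sensors))

encoder = Ridge(alpha=100.0).fit(X_train, Y_train)  # one linear map to all sensors
Y_pred = encoder.predict(X_test)

def zscore(a):
    return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)

# Each observed test response is assigned to the stimulus whose predicted
# response pattern correlates with it most strongly.
corr = zscore(Y_test) @ zscore(Y_pred).T / n_sensors
accuracy = (corr.argmax(axis=1) == np.arange(n_test)).mean()
print(f"identification accuracy: {accuracy:.2f}")
```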
  • Segaert, K., Mazaheri, A., & Hagoort, P. (2018). Binding language: Structuring sentences through precisely timed oscillatory mechanisms. European Journal of Neuroscience, 48(7), 2651-2662. doi:10.1111/ejn.13816.

    Abstract

    Syntactic binding refers to combining words into larger structures. Using EEG, we investigated the neural processes involved in syntactic binding. Participants were auditorily presented with two-word sentences (i.e. a pronoun and a pseudoverb such as ‘I grush’, ‘she grushes’, for which syntactic binding can take place) and wordlists (i.e. two pseudoverbs such as ‘pob grush’, ‘pob grushes’, for which no binding occurs). Comparing these two conditions, we targeted syntactic binding while minimizing contributions of semantic binding and of other cognitive processes such as working memory. We found a converging pattern of results using two distinct analysis approaches: one approach using frequency bands as defined in previous literature, and one data-driven approach in which we looked at the entire range of frequencies between 3 and 30 Hz without the constraints of pre-defined frequency bands. In the syntactic binding (relative to the wordlist) condition, a power increase was observed in the alpha and beta frequency range shortly preceding the presentation of the target word that requires binding, which was maximal over frontal-central electrodes. Our interpretation is that these signatures reflect that language comprehenders expect the need for binding to occur. Following the presentation of the target word in a syntactic binding context (relative to the wordlist condition), an increase in alpha power maximal over a left-lateralized cluster of frontal-temporal electrodes was observed. We suggest that this alpha increase relates to syntactic binding taking place. Taken together, our findings suggest that increases in alpha and beta power reflect distinct neural processes underlying syntactic binding.
  • Segaert, K., Kempen, G., Petersson, K. M., & Hagoort, P. (2013). Syntactic priming and the lexical boost effect during sentence production and sentence comprehension: An fMRI study. Brain and Language, 124, 174-183. doi:10.1016/j.bandl.2012.12.003.

    Abstract

    Behavioral syntactic priming effects during sentence comprehension are typically observed only if both the syntactic structure and lexical head are repeated. In contrast, during production syntactic priming occurs with structure repetition alone, but the effect is boosted by repetition of the lexical head. We used fMRI to investigate the neuronal correlates of syntactic priming and lexical boost effects during sentence production and comprehension. The critical measure was the magnitude of fMRI adaptation to repetition of sentences in active or passive voice, with or without verb repetition. In conditions with repeated verbs, we observed adaptation to structure repetition in the left IFG and MTG, for active and passive voice. However, in the absence of repeated verbs, adaptation occurred only for passive sentences. None of the fMRI adaptation effects yielded differential effects for production versus comprehension, suggesting that sentence comprehension and production are subserved by the same neuronal infrastructure for syntactic processing.

    Additional information

    Segaert_Supplementary_data_2013.docx
  • Segaert, K., Weber, K., De Lange, F., Petersson, K. M., & Hagoort, P. (2013). The suppression of repetition enhancement: A review of fMRI studies. Neuropsychologia, 51, 59-66. doi:10.1016/j.neuropsychologia.2012.11.006.

    Abstract

    Repetition suppression in fMRI studies is generally thought to underlie behavioural facilitation effects (i.e., priming) and it is often used to identify the neuronal representations associated with a stimulus. However, this pays little heed to the large number of repetition enhancement effects observed under similar conditions. In this review, we identify several cognitive variables biasing repetition effects in the BOLD response towards enhancement instead of suppression. These variables are stimulus recognition, learning, attention, expectation and explicit memory. We also evaluate which models can account for these repetition effects and come to the conclusion that there is no one single model that is able to embrace all repetition enhancement effects. Accumulation, novel network formation as well as predictive coding models can all explain subsets of repetition enhancement effects.
  • Seifart, F., Evans, N., Hammarström, H., & Levinson, S. C. (2018). Language documentation twenty-five years on. Language, 94(4), e324-e345. doi:10.1353/lan.2018.0070.

    Abstract

    This discussion note reviews responses of the linguistics profession to the grave issues of language endangerment identified a quarter of a century ago in the journal Language by Krauss, Hale, England, Craig, and others (Hale et al. 1992). Two and a half decades of worldwide research not only have given us a much more accurate picture of the number, phylogeny, and typological variety of the world’s languages, but they have also seen the development of a wide range of new approaches, conceptual and technological, to the problem of documenting them. We review these approaches and the manifold discoveries they have unearthed about the enormous variety of linguistic structures. The reach of our knowledge has increased by about 15% of the world’s languages, especially in terms of digitally archived material, with about 500 languages now reasonably documented thanks to such major programs as DoBeS, ELDP, and DEL. But linguists are still falling behind in the race to document the planet’s rapidly dwindling linguistic diversity, with around 35–42% of the world’s languages still substantially undocumented, and in certain countries (such as the US) the call by Krauss (1992) for a significant professional realignment toward language documentation has only been heeded in a few institutions. Apart from the need for an intensified documentarist push in the face of accelerating language loss, we argue that existing language documentation efforts need to do much more to focus on crosslinguistically comparable data sets, sociolinguistic context, semantics, and interpretation of text material, and on methods for bridging the ‘transcription bottleneck’, which is creating a huge gap between the amount we can record and the amount in our transcribed corpora.
  • Seifart, F., & Hammarström, H. (2018). Language Isolates in South America. In L. Campbell, A. Smith, & T. Dougherty (Eds.), Language Isolates (pp. 260-286). London: Routledge.
  • Sekine, K., Wood, C., & Kita, S. (2018). Gestural depiction of motion events in narrative increases symbolic distance with age. Language, Interaction and Acquisition, 9(1), 11-21. doi:10.1075/lia.15020.sek.

    Abstract

    We examined gesture representation of motion events in narratives produced by three- and nine-year-olds, and adults. Two aspects of gestural depiction were analysed: how protagonists were depicted, and how gesture space was used. We found that older groups were more likely to express protagonists as an object that a gesturing hand held and manipulated, and less likely to express protagonists with whole-body enactment gestures. Furthermore, for older groups, gesture space increasingly became less similar to narrated space. The older groups were less likely to use large gestures or gestures in the periphery of the gesture space to represent movements that were large relative to a protagonist’s body or that took place next to a protagonist. They were also less likely to produce gestures on a physical surface (e.g. table) to represent movement on a surface in narrated events. The development of gestural depiction indicates that older speakers become less immersed in the story world and start to control and manipulate story representation from an outside perspective in a bounded and stage-like gesture space. We discuss this developmental shift in terms of increasing symbolic distancing (Werner & Kaplan, 1963).
  • Sekine, K., Rose, M. L., Foster, A. M., Attard, M. C., & Lanyon, L. E. (2013). Gesture production patterns in aphasic discourse: In-depth description and preliminary predictions. Aphasiology, 27(9), 1031-1049. doi:10.1080/02687038.2013.803017.

    Abstract

    Background: Gesture frequently accompanies speech in healthy speakers. For many individuals with aphasia, gestures are a target of speech-language pathology intervention, either as an alternative form of communication or as a facilitative device for language restoration. The patterns of gesture production for people with aphasia and the participant variables that predict these patterns remain unclear. Aims: We aimed to examine gesture production during conversational discourse in a large sample of individuals with aphasia. We used a detailed gesture coding system to determine patterns of gesture production associated with specific aphasia types and severities. Methods & Procedures: We analysed conversation samples from AphasiaBank, gathered from 46 people with post-stroke aphasia and 10 healthy matched controls all of whom had gestured at least once during a story re-tell task. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type and severity were examined with a series of analyses of variance (ANOVA) statistics, and multiple regression analysis was used to examine these potential predictors of gesture production patterns. Outcomes & Results: Individuals with aphasia gestured significantly more frequently than healthy controls. Aphasia type and severity impacted significantly on gesture type in specific identified patterns detailed here, especially on the production of meaning-laden gestures. Conclusions: These patterns suggest the opportunity for gestures as targets of aphasia therapy. Aphasia fluency accounted for a greater degree of data variability than aphasia severity or naming skills. More work is required to delineate predictive factors.
  • Sekine, K., & Rose, M. L. (2013). The relationship of aphasia type and gesture production in people with aphasia. American Journal of Speech-Language Pathology, 22, 662-672. doi:10.1044/1058-0360(2013/12-0030).

    Abstract

    Purpose: For many individuals with aphasia, gestures form a vital component of message transfer and are the target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia—in a consistent discourse sampling condition and with a detailed gesture coding system—to determine patterns of gesture production associated with specific types of aphasia. Method: The authors analyzed story retell samples from AphasiaBank (TalkBank, n.d.), gathered from 98 individuals with aphasia resulting from stroke and 64 typical controls. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type were examined using a series of chi-square, Fisher exact, and logistic regression statistics. Results: A significantly higher proportion of individuals with aphasia gestured as compared to typical controls, and for many individuals with aphasia, this gesture was iconic and was capable of communicative load. Aphasia type impacted significantly on gesture type in specific identified patterns, detailed here. Conclusion: These type-specific patterns suggest the opportunity for gestures as targets of aphasia therapy.
  • Senft, G., & Heeschen, V. (1989). Humanethologisches Tonarchiv. In Generalverwaltung der MPG (Ed.), Max-Planck-Gesellschaft Jahrbuch 1989 (p. 246). Göttingen: Vandenhoeck and Ruprecht.
  • Senft, B., & Senft, G. (2018). Growing up on the Trobriand Islands in Papua New Guinea - Childhood and educational ideologies in Tauwema. Amsterdam: Benjamins. doi:10.1075/clu.21.

    Abstract

    This volume deals with the children’s socialization on the Trobriands. After a survey of ethnographic studies on childhood, the book zooms in on indigenous ideas of conception and birth-giving, the children’s early development, their integration into playgroups, their games and their education within their ‘own little community’ until they reach the age of seven years. During this time children enjoy much autonomy and independence. Attempts of parental education are confined to a minimum. However, parents use subtle means to raise their children. Educational ideologies are manifest in narratives and in speeches addressed to children. They provide guidelines for their integration into the Trobrianders’ “balanced society” which is characterized by cooperation and competition. It does not allow individual accumulation of wealth – surplus property gained has to be redistributed – but it values the fame acquired by individuals in competitive rituals. Fame is not regarded as threatening the balance of their society.
  • Senft, G. (2013). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie - Einführung und Überblick. (8. Auflage, pp. 271-286). Berlin: Reimer.
  • Senft, G. (2018). Pragmatics and anthropology - The Trobriand Islanders' Ways of Speaking. In C. Ilie, & N. Norrick (Eds.), Pragmatics and its Interfaces (pp. 185-211). Amsterdam: John Benjamins.

    Abstract

    Bronislaw Malinowski – based on his experience during his field research on the Trobriand Islands – pointed out that language is first and foremost a tool for creating social bonds. It is a mode of behavior and the meaning of an utterance is constituted by its pragmatic function. Malinowski’s ideas finally led to the formation of the subdiscipline “anthropological linguistics”. This paper presents three observations of the Trobrianders’ attitude to their language Kilivila and their language use in social interactions. They illustrate that whoever wants to successfully research the role of language, culture and cognition in social interaction must be on ‘common ground’ with the researched community.
  • Senft, G. (2018). Theory meets Practice - H. Paul Grice's Maxims of Quality and Manner and the Trobriand Islanders' Language Use. In A. Capone, M. Carapezza, & F. Lo Piparo (Eds.), Further Advances in Pragmatics and Philosophy Part 1: From Theory to Practice (pp. 203-220). Cham: Springer.

    Abstract

    As I have already pointed out elsewhere (Senft 2008; 2010; 2014), the Gricean conversational maxims of Quality – “Try to make your contribution one that is true” – and Manner “Be perspicuous”, specifically “Avoid obscurity of expression” and “Avoid ambiguity” (Grice 1967; 1975; 1978) – are not observed by the Trobriand Islanders of Papua New Guinea, neither in forms of their ritualized communication nor in forms and ways of everyday conversation and other ordinary verbal interactions. The speakers of the Austronesian language Kilivila metalinguistically differentiate eight specific non-diatopical registers which I have called “situational-intentional” varieties. One of these varieties is called “biga sopa”. This label can be glossed as “joking or lying speech, indirect speech, speech which is not vouched for”. The biga sopa constitutes the default register of Trobriand discourse and conversation. This contribution to the workshop on philosophy and pragmatics presents the Trobriand Islanders’ indigenous typology of non-diatopical registers, especially elaborating on the concept of sopa, describing its features, discussing its functions and illustrating its use within Trobriand society. It will be shown that the Gricean maxims of quality and manner are irrelevant for and thus not observed by the speakers of Kilivila. On the basis of the presented findings the Gricean maxims and especially Grice’s claim that his theory of conversational implicature is “universal in application” is critically discussed from a general anthropological-linguistic point of view.
  • Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2013). Homesign as a way-station between co-speech gesture and sign language: The evolution of segmenting and sequencing. In R. Botha, & M. Everaert (Eds.), The evolutionary emergence of language: Evidence and inference (pp. 62-77). Oxford: Oxford University Press.
  • Seuren, P. A. M. (1989). A problem in English subject complementation. In D. Jaspers, W. Klooster, Y. Putseys, & P. A. M. Seuren (Eds.), Sentential complementation and the lexicon: Studies in honour of Wim de Geest (pp. 355-375). Dordrecht: Foris.
  • Seuren, P. A. M. (2013). From Whorf to Montague: Explorations in the theory of language. Oxford: Oxford University Press.
  • Seuren, P. A. M. (1989). Neue Entwicklungen im Wahrheitsbegriff. Studia Leibnitiana, 21(2), 155-173.
  • Seuren, P. A. M. (1989). Notes on reflexivity. In F. J. Heyvaert, & F. Steurs (Eds.), Worlds behind words: Essays in honour of Prof. Dr. F.G. Droste on the occasion of his sixtieth birthday (pp. 85-95). Leuven: Leuven University Press.
  • Seuren, P. A. M. (2018). Semantic syntax (2nd rev. ed.). Leiden: Brill.

    Abstract

    This book presents a detailed formal machinery for the conversion of the Semantic Analyses (SAs) of sentences into surface structures of English, French, German, Dutch, and to some extent Turkish. The SAs are propositional structures consisting of a predicate and one, two or three argument terms, some of which can themselves be propositional structures. The surface structures are specified up to, but not including, the morphology. The book is thus an implementation of the programme formulated first by Albert Sechehaye (1870-1946) and then, independently, by James McCawley (1938-1999) in the school of Generative Semantics. It is the first, and so far the only formally precise and empirically motivated machinery in existence converting meaning representations into sentences of natural languages.
  • Seuren, P. A. M. (2018). Saussure and Sechehaye: A study in the history of linguistics and the foundations of language. Leiden: Brill.
  • Seuren, P. A. M. (2013). The logico-philosophical tradition. In K. Allan (Ed.), The Oxford handbook of the history of linguistics (pp. 537-554). Oxford: Oxford University Press.
  • Shao, Z., & Meyer, A. S. (2018). Word priming and interference paradigms. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 111-129). Hoboken: Wiley.
  • Shao, Z. (2013). Contributions of executive control to individual differences in word production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Shao, Z., Meyer, A. S., & Roelofs, A. (2013). Selective and nonselective inhibition of competitors in picture naming. Memory & Cognition, 41(8), 1200-1211. doi:10.3758/s13421-013-0332-7.

    Abstract

    The present study examined the relation between nonselective inhibition and selective inhibition in picture naming performance. Nonselective inhibition refers to the ability to suppress any unwanted response, whereas selective inhibition refers to the ability to suppress specific competing responses. The degree of competition in picture naming was manipulated by presenting targets along with distractor words that could be semantically related (e.g., a picture of a dog combined with the word cat) or unrelated (tree) to the picture name. The mean naming response time (RT) was longer in the related than in the unrelated condition, reflecting semantic interference. Delta plot analyses showed that participants with small mean semantic interference effects employed selective inhibition more effectively than did participants with larger semantic interference effects. The participants were also tested on the stop-signal task, which taps nonselective inhibition. Their performance on this task was correlated with their mean naming RT but, importantly, not with the selective inhibition indexed by the delta plot analyses and the magnitude of the semantic interference effect. These results indicate that nonselective inhibition ability and selective inhibition of competitors in picture naming are separable to some extent.
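    A minimal sketch of the delta-plot analysis mentioned above, using simulated data: RTs from the related and unrelated conditions are vincentized into quantile bins, the interference effect is computed per bin, and the slope of the slowest segment serves as an index of selective inhibition. Bin count and RT distributions are placeholder assumptions.

```python
# Delta-plot sketch for a semantic-interference effect on simulated RTs
# (not the study's data): interference per RT quantile bin vs. bin mean RT.
import numpy as np

rng = np.random.default_rng(1)
rt_unrelated = rng.lognormal(mean=6.50, sigma=0.20, size=400)   # RTs in ms
rt_related = rng.lognormal(mean=6.55, sigma=0.22, size=400)     # slower condition

def bin_means(rts, n_bins=5):
    """Mean RT within each quantile bin (vincentized means)."""
    qs = np.quantile(rts, np.linspace(0, 1, n_bins + 1))
    return np.array([rts[(rts >= lo) & (rts <= hi)].mean()
                     for lo, hi in zip(qs[:-1], qs[1:])])

m_rel, m_unrel = bin_means(rt_related), bin_means(rt_unrelated)
effect = m_rel - m_unrel                 # interference effect per bin
mean_rt = (m_rel + m_unrel) / 2          # x-axis of the delta plot

# Slope of the final (slowest) segment, a common selective-inhibition index.
final_slope = (effect[-1] - effect[-2]) / (mean_rt[-1] - mean_rt[-2])
for x, y in zip(mean_rt, effect):
    print(f"bin mean RT = {x:7.1f} ms, interference = {y:6.1f} ms")
print(f"final-segment slope: {final_slope:.3f}")
```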
  • Shayan, S., Moreira, A., Windhouwer, M., Koenig, A., & Drude, S. (2013). LEXUS 3 - a collaborative environment for multimedia lexica. In Proceedings of the Digital Humanities Conference 2013 (pp. 392-395).
  • Shitova, N. (2018). Electrophysiology of competition and adjustment in word and phrase production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sikora, K. (2018). Executive control in language production by adults and children with and without language impairment. PhD Thesis, Radboud University, Nijmegen, The Netherlands.

    Abstract

    The present study examined how the updating, inhibiting, and shifting abilities underlying executive control influence spoken noun-phrase production. Previous studies provided evidence that updating and inhibiting, but not shifting, influence picture naming response time (RT). However, little is known about the role of executive control in more complex forms of language production like generating phrases. We assessed noun-phrase production using picture description and a picture-word interference procedure. We measured picture description RT to assess length, distractor, and switch effects, which were assumed to reflect, respectively, the updating, inhibiting, and shifting abilities of adult participants. Moreover, for each participant we obtained scores on executive control tasks that measured verbal and nonverbal updating, nonverbal inhibiting, and nonverbal shifting. We found that both verbal and nonverbal updating scores correlated with the overall mean picture description RTs. Furthermore, the length effect in the RTs correlated with verbal but not nonverbal updating scores, while the distractor effect correlated with inhibiting scores. We did not find a correlation between the switch effect in the mean RTs and the shifting scores. However, the shifting scores correlated with the switch effect in the normal part of the underlying RT distribution. These results suggest that updating, inhibiting, and shifting each influence the speed of phrase production, thereby demonstrating a contribution of all three executive control abilities to language production.

    Additional information

    full text via Radboud Repository
  • Sikora, K., & Roelofs, A. (2018). Switching between spoken language-production tasks: The role of attentional inhibition and enhancement. Language, Cognition and Neuroscience, 33(7), 912-922. doi:10.1080/23273798.2018.1433864.

    Abstract

    Since Pillsbury [1908. Attention. London: Swan Sonnenschein & Co], the issue of whether attention operates through inhibition or enhancement has been on the scientific agenda. We examined whether overcoming previous attentional inhibition or enhancement is the source of asymmetrical switch costs in spoken noun-phrase production and colour-word Stroop tasks. In Experiment 1, using bivalent stimuli, we found asymmetrical costs in response times for switching between long and short phrases and between Stroop colour naming and reading. However, in Experiment 2, using bivalent stimuli for the weaker tasks (long phrases, colour naming) and univalent stimuli for the stronger tasks (short phrases, word reading), we obtained an asymmetrical switch cost for phrase production, but a symmetrical cost for Stroop. The switch cost evidence was quantified using Bayesian statistical analyses. Our findings suggest that switching between phrase types involves inhibition, whereas switching between colour naming and reading involves enhancement. Thus, the attentional mechanism depends on the language-production task involved. The results challenge theories of task switching that assume only one attentional mechanism, inhibition or enhancement, rather than both mechanisms.
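    For readers unfamiliar with the switch-cost measure discussed above, the sketch below computes per-task switch costs (switch minus repeat RTs) and their asymmetry from simulated trial data; it is not the study's data or its Bayesian analysis, and the task labels and RT values are placeholders.

```python
# Switch-cost and asymmetry computation on simulated trial-level RTs
# (illustration only).
import numpy as np

rng = np.random.default_rng(2)

def simulate(mean_ms, n=200):
    return rng.normal(mean_ms, 60, n)

# Hypothetical pattern: the stronger task (word reading) suffers more from
# switching than the weaker task (colour naming) -> asymmetrical switch cost.
rts = {
    ("reading", "repeat"): simulate(480), ("reading", "switch"): simulate(560),
    ("naming", "repeat"): simulate(620), ("naming", "switch"): simulate(650),
}

costs = {task: rts[(task, "switch")].mean() - rts[(task, "repeat")].mean()
         for task in ("reading", "naming")}
asymmetry = costs["reading"] - costs["naming"]

print({task: round(cost, 1) for task, cost in costs.items()})
print(f"switch-cost asymmetry (stronger minus weaker task): {asymmetry:.1f} ms")
```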
  • Silva, S., Folia, V., Inácio, F., Castro, S. L., & Petersson, K. M. (2018). Modality effects in implicit artificial grammar learning: An EEG study. Brain Research, 1687, 50-59. doi:10.1016/j.brainres.2018.02.020.

    Abstract

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
  • Sjerps, M. J., & Smiljanic, R. (2013). Compensation for vocal tract characteristics across native and non-native languages. Journal of Phonetics, 41, 145-155. doi:10.1016/j.wocn.2013.01.005.

    Abstract

    Perceptual compensation for speaker vocal tract properties was investigated in four groups of listeners: native speakers of English and native speakers of Dutch, native speakers of Spanish with low proficiency in English, and Spanish-English bilinguals. Listeners categorized targets on a [sofo] to [sufu] continuum. Targets were preceded by sentences that were manipulated to have either a high or a low F1 contour. All listeners performed the categorization task for targets that were preceded by Spanish, English and Dutch precursors. Results show that listeners from each of the four language backgrounds compensate for speaker vocal tract properties regardless of language-specific vowel inventory properties. Listeners also compensate when they listen to stimuli in another language. The results suggest that patterns of compensation are mainly determined by auditory properties of precursor sentences.
  • Sjerps, M. J. (2013). [Contribution to NextGen VOICES survey: Science communication's future]. Science, 340 (no. 6128, online supplement). Retrieved from http://www.sciencemag.org/content/340/6128/28/suppl/DC1.

    Abstract

    One of the important challenges for the development of science communication concerns the current problems with the under-exposure of null results. I suggest that each article published in a top scientific journal can get tagged (online) with attempts to replicate. As such, a future reader of an article will also be able to see whether replications have been attempted and how these turned out. Editors and/or reviewers decide whether a replication is of sound quality. The authors of the main article have the option to review the replication and can provide a supplementary comment with each attempt that is added. After 5 or 10 years, and provided enough attempts to replicate, the authors of the main article get the opportunity to discuss/review their original study in light of the outcomes of the replications. This approach has two important strengths: 1) The approach would provide researchers with the opportunity to show that they deliver scientifically thorough work, but sometimes just fail to replicate the result that others have reported. This can be especially valuable for the career opportunities of promising young researchers; 2) perhaps even more important, the visibility of replications provides an important incentive for researchers to publish findings only if they are sure that their effects are reliable (and thereby reduce the influence of "experimenter degrees of freedom" or even outright fraud). The proposed approach will stimulate researchers to look beyond the point of publication of their studies.
  • Sjerps, M. J., Zhang, C., & Peng, G. (2018). Lexical tone is perceived relative to locally surrounding context, vowel quality to preceding context. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 914-924. doi:10.1037/xhp0000504.

    Abstract

    Important speech cues such as lexical tone and vowel quality are perceptually contrasted to the distribution of those same cues in surrounding contexts. However, it is unclear whether preceding and following contexts have similar influences, and to what extent those influences are modulated by the auditory history of previous trials. To investigate this, Cantonese participants labeled sounds from (a) a tone continuum (mid- to high-level), presented with a context that had raised or lowered F0 values and (b) a vowel quality continuum (/u/ to /o/), where the context had raised or lowered F1 values. Contexts with high or low F0/F1 were presented in separate blocks or intermixed in 1 block. Contexts were presented following (Experiment 1) or preceding the target continuum (Experiment 2). Contrastive effects were found for both tone and vowel quality (e.g., decreased F0 values in contexts lead to more high tone target judgments and vice versa). Importantly, however, lexical tone was only influenced by F0 in immediately preceding and following contexts. Vowel quality was only influenced by the F1 in preceding contexts, but this extended to contexts from preceding trials. Contextual influences on tone and vowel quality are qualitatively different, which has important implications for understanding the mechanism of context effects in speech perception.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2013). Evidence for precategorical extrinsic vowel normalization. Attention, Perception & Psychophysics, 75, 576-587. doi:10.3758/s13414-012-0408-7.

    Abstract

    Three experiments investigated whether extrinsic vowel normalization takes place largely at a categorical or a precategorical level of processing. Traditional vowel normalization effects in categorization were replicated in Experiment 1: Vowels taken from an [ɪ]-[ε] continuum were more often interpreted as /ɪ/ (which has a low first formant, F1) when the vowels were heard in contexts that had a raised F1 than when the contexts had a lowered F1. This was established with contexts that consisted of only two syllables. These short contexts were necessary for Experiment 2, a discrimination task that encouraged listeners to focus on the perceptual properties of vowels at a precategorical level. Vowel normalization was again found: Ambiguous vowels were more easily discriminated from an endpoint [ε] than from an endpoint [ɪ] in a high-F1 context, whereas the opposite was true in a low-F1 context. Experiment 3 measured discriminability between pairs of steps along the [ɪ]-[ε] continuum. Contextual influences were again found, but without discrimination peaks, contrary to what was predicted from the same participants' categorization behavior. Extrinsic vowel normalization therefore appears to be a process that takes place at least in part at a precategorical processing level.
  • Skiba, R. (1989). Funktionale Beschreibung von Lernervarietäten: Das Berliner Projekt P-MoLL. In N. Reiter (Ed.), Sprechen und Hören: Akte des 23. Linguistischen Kolloquiums, Berlin (pp. 181-191). Tübingen: Niemeyer.
  • Sloetjes, H. (2013). The ELAN annotation tool. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 193-198). Frankfurt a/M: Lang.
  • Sloetjes, H. (2013). Step by step introduction in NEUROGES coding with ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 201-212). Frankfurt a/M: Lang.
  • Slone, L. K., Abney, D. H., Borjon, J. I., Chen, C.-h., Franchak, J. M., Pearcy, D., Suarez-Rivera, C., Xu, T. L., Zhang, Y., Smith, L. B., & Yu, C. (2018). Gaze in action: Head-mounted eye tracking of children's dynamic visual attention during naturalistic behavior. Journal of Visualized Experiments, (141): e58496. doi:10.3791/58496.

    Abstract

    Young children's visual environments are dynamic, changing moment-by-moment as children physically and visually explore spaces and objects and interact with people around them. Head-mounted eye tracking offers a unique opportunity to capture children's dynamic egocentric views and how they allocate visual attention within those views. This protocol provides guiding principles and practical recommendations for researchers using head-mounted eye trackers in both laboratory and more naturalistic settings. Head-mounted eye tracking complements other experimental methods by enhancing opportunities for data collection in more ecologically valid contexts through increased portability and freedom of head and body movements compared to screen-based eye tracking. This protocol can also be integrated with other technologies, such as motion tracking and heart-rate monitoring, to provide a richer, high-density multimodal dataset for examining natural behavior, learning, and development than was previously possible. This paper illustrates the types of data generated from head-mounted eye tracking in a study designed to investigate visual attention in one natural context for toddlers: free-flowing toy play with a parent. Successful use of this protocol will allow researchers to collect data that can be used to answer questions not only about visual attention, but also about a broad range of other perceptual, cognitive, and social skills and their development.
  • De Smedt, F., Merchie, E., Barendse, M. T., Rosseel, Y., De Naeghel, J., & Van Keer, H. (2018). Cognitive and motivational challenges in writing: Studying the relation with writing performance across students' gender and achievement level. Reading Research Quarterly, 53(2), 249-272. doi:10.1002/rrq.193.

    Abstract

    In the past, several assessment reports on writing repeatedly showed that elementary school students do not develop the essential writing skills to be successful in school. In this respect, prior research has pointed to the fact that cognitive and motivational challenges are at the root of the rather basic level of elementary students' writing performance. Additionally, previous research has revealed gender and achievement-level differences in elementary students' writing. In view of providing effective writing instruction for all students to overcome writing difficulties, the present study provides more in-depth insight into (a) how cognitive and motivational challenges mediate and correlate with students' writing performance and (b) whether and how these relations vary for boys and girls and for writers of different achievement levels. In the present study, 1,577 fifth- and sixth-grade students completed questionnaires regarding their writing self-efficacy, writing motivation, and writing strategies. In addition, half of the students completed two writing tests, respectively focusing on the informational or narrative text genre. Based on multiple group structural equation modeling (MG-SEM), we put forward two models: a MG-SEM model for boys and girls and a MG-SEM model for low, average, and high achievers. The results underline the importance of studying writing models for different groups of students in order to gain more refined insight into the complex interplay between motivational and cognitive challenges related to students' writing performance.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4: 528. doi:10.3389/fpsyg.2013.00528.

    Abstract

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effects of formal literacy training on language mediated visual attention. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3420-3425). Austin, TX: Cognitive Science Society.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze is partly determined by level of formal literacy training. Huettig, Singh and Mishra (2011) showed that high-literate individuals' eye gaze was closely time locked to phonological overlap between a spoken target word and items presented in a visual display. In contrast, low-literate individuals' eye gaze was not related to phonological overlap, but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behavior is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on coarser-grained structure. This hypothesis was tested using a neural network model that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behavior similar to those observed between high and low literates emerge when models are trained on speech signals of contrasting granularity.
  • Smith, M. R., Cutler, A., Butterfield, S., & Nimmo-Smith, I. (1989). The perception of rhythm and word boundaries in noise-masked speech. Journal of Speech and Hearing Research, 32, 912-920.

    Abstract

    The present experiment tested the suggestion that human listeners may exploit durational information in speech to parse continuous utterances into words. Listeners were presented with six-syllable unpredictable utterances under noise-masking, and were required to judge between alternative word strings as to which best matched the rhythm of the masked utterances. For each utterance there were four alternative strings: (a) an exact rhythmic and word boundary match, (b) a rhythmic mismatch, and (c) two utterances with the same rhythm as the masked utterance, but different word boundary locations. Listeners were clearly able to perceive the rhythm of the masked utterances: The rhythmic mismatch was chosen significantly less often than any other alternative. Within the three rhythmically matched alternatives, the exact match was chosen significantly more often than either word boundary mismatch. Thus, listeners both perceived speech rhythm and used durational cues effectively to locate the position of word boundaries.
  • Smulders, F. T. Y., Ten Oever, S., Donkers, F. C. L., Quaedflieg, C. W. E. M., & Van de Ven, V. (2018). Single-trial log transformation is optimal in frequency analysis of resting EEG alpha. European Journal of Neuroscience, 48(7), 2585-2598. doi:10.1111/ejn.13854.

    Abstract

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8–12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude.
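    A minimal sketch of the single-epoch approach described above, assuming simulated eyes-open and eyes-closed data: alpha power is estimated per 2-s epoch with Welch's method, log-transformed (the λ → 0 limit of the Box-Cox family), and the Berger effect is tested with a paired t-test. The sampling rate, band limits, and epoch counts are placeholders.

```python
# Single-epoch log-transformed alpha power and a paired test of the Berger
# effect (eyes closed > eyes open). Simulated data; parameters are placeholders.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
fs, n_epochs, epoch_len = 250, 60, 2 * 250           # 250 Hz, 2-s epochs
t = np.arange(epoch_len) / fs

def epochs(alpha_amp):
    """Simulate epochs: 10 Hz alpha of a given amplitude plus noise."""
    amps = alpha_amp * rng.lognormal(0, 0.5, n_epochs)   # multiplicative variation
    return np.array([a * np.sin(2 * np.pi * 10 * t) +
                     rng.standard_normal(epoch_len) for a in amps])

def log_alpha_power(data, band=(8, 12)):
    f, psd = welch(data, fs=fs, nperseg=epoch_len, axis=-1)
    sel = (f >= band[0]) & (f <= band[1])
    return np.log(psd[:, sel].mean(axis=1))              # log-transform per epoch

closed = log_alpha_power(epochs(alpha_amp=2.0))          # eyes closed: more alpha
open_ = log_alpha_power(epochs(alpha_amp=0.7))           # eyes open: less alpha

t_stat, p = ttest_rel(closed, open_)
print(f"Berger effect on log alpha power: t = {t_stat:.2f}, p = {p:.2g}")
```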
  • Snijders, T. M., Milivojevic, B., & Kemner, C. (2013). Atypical excitation-inhibition balance in autism captured by the gamma response to contextual modulation. NeuroImage: Clinical, 3, 65-72. doi:10.1016/j.nicl.2013.06.015.

    Abstract

    Atypical visual perception in people with autism spectrum disorders (ASD) is hypothesized to stem from an imbalance in excitatory and inhibitory processes in the brain. We used neuronal oscillations in the gamma frequency range (30–90 Hz), which emerge from a balanced interaction of excitation and inhibition in the brain, to assess contextual modulation processes in early visual perception. Electroencephalography was recorded in 12 high-functioning adults with ASD and 12 age- and IQ-matched control participants. Oscillations in the gamma frequency range were analyzed in response to stimuli consisting of small line-like elements. Orientation-specific contextual modulation was manipulated by parametrically increasing the amount of homogeneously oriented elements in the stimuli. The stimuli elicited a strong steady-state gamma response around the refresh-rate of 60 Hz, which was larger for controls than for participants with ASD. The amount of orientation homogeneity (contextual modulation) influenced the gamma response in control subjects, while for subjects with ASD this was not the case. The atypical steady-state gamma response to contextual modulation in subjects with ASD may capture the link between an imbalance in excitatory and inhibitory neuronal processing and atypical visual processing in ASD.
  • Snijders Blok, L., Rousseau, J., Twist, J., Ehresmann, S., Takaku, M., Venselaar, H., Rodan, L. H., Nowak, C. B., Douglas, J., Swoboda, K. J., Steeves, M. A., Sahai, I., Stumpel, C. T. R. M., Stegmann, A. P. A., Wheeler, P., Willing, M., Fiala, E., Kochhar, A., Gibson, W. T., Cohen, A. S. A., Agbahovbe, R., Innes, A. M., Au, P. Y. B., Rankin, J., Anderson, I. J., Skinner, S. A., Louie, R. J., Warren, H. E., Afenjar, A., Keren, B., Nava, C., Buratti, J., Isapof, A., Rodriguez, D., Lewandowski, R., Propst, J., Van Essen, T., Choi, M., Lee, S., Chae, J. H., Price, S., Schnur, R. E., Douglas, G., Wentzensen, I. M., Zweier, C., Reis, A., Bialer, M. G., Moore, C., Koopmans, M., Brilstra, E. H., Monroe, G. R., Van Gassen, K. L. I., Van Binsbergen, E., Newbury-Ecob, R., Bownass, L., Bader, I., Mayr, J. A., Wortmann, S. B., Jakielski, K. J., Strand, E. A., Kloth, K., Bierhals, T., The DDD study, Roberts, J. D., Petrovich, R. M., Machida, S., Kurumizaka, H., Lelieveld, S., Pfundt, R., Jansen, S., Derizioti, P., Faivre, L., Thevenon, J., Assoum, M., Shriberg, L., Kleefstra, T., Brunner, H. G., Wade, P. A., Fisher, S. E., & Campeau, P. M. (2018). CHD3 helicase domain mutations cause a neurodevelopmental syndrome with macrocephaly and impaired speech and language. Nature Communications, 9: 4619. doi:10.1038/s41467-018-06014-6.

    Abstract

    Chromatin remodeling is of crucial importance during brain development. Pathogenic alterations of several chromatin remodeling ATPases have been implicated in neurodevelopmental disorders. We describe an index case with a de novo missense mutation in CHD3, identified during whole genome sequencing of a cohort of children with rare speech disorders. To gain a comprehensive view of features associated with disruption of this gene, we use a genotype-driven approach, collecting and characterizing 35 individuals with de novo CHD3 mutations and overlapping phenotypes. Most mutations cluster within the ATPase/helicase domain of the encoded protein. Modeling their impact on the three-dimensional structure demonstrates disturbance of critical binding and interaction motifs. Experimental assays with six of the identified mutations show that a subset directly affects ATPase activity, and all but one yield alterations in chromatin remodeling. We implicate de novo CHD3 mutations in a syndrome characterized by intellectual disability, macrocephaly, and impaired speech and language.
  • Snijders Blok, L., Hiatt, S. M., Bowling, K. M., Prokop, J. W., Engel, K. L., Cochran, J. N., Bebin, E. M., Bijlsma, E. K., Ruivenkamp, C. A. L., Terhal, P., Simon, M. E. H., Smith, R., Hurst, J. A., The DDD study, MCLaughlin, H., Person, R., Crunk, A., Wangler, M. F., Streff, H., Symonds, J. D., Zuberi, S. M., Elliott, K. S., Sanders, V. R., Masunga, A., Hopkin, R. J., Dubbs, H. A., Ortiz-Gonzalez, X. R., Pfundt, R., Brunner, H. G., Fisher, S. E., Kleefstra, T., & Cooper, G. M. (2018). De novo mutations in MED13, a component of the Mediator complex, are associated with a novel neurodevelopmental disorder. Human Genetics, 137(5), 375-388. doi:10.1007/s00439-018-1887-y.

    Abstract

    Many genetic causes of developmental delay and/or intellectual disability (DD/ID) are extremely rare, and robust discovery of these requires both large-scale DNA sequencing and data sharing. Here we describe a GeneMatcher collaboration which led to a cohort of 13 affected individuals harboring protein-altering variants, 11 of which are de novo, in MED13; the only inherited variant was transmitted to an affected child from an affected mother. All patients had intellectual disability and/or developmental delays, including speech delays or disorders. Other features that were reported in two or more patients include autism spectrum disorder, attention deficit hyperactivity disorder, optic nerve abnormalities, Duane anomaly, hypotonia, mild congenital heart abnormalities, and dysmorphisms. Six affected individuals had mutations that are predicted to truncate the MED13 protein, six had missense mutations, and one had an in-frame deletion of one amino acid. Out of the seven non-truncating mutations, six clustered in two specific locations of the MED13 protein: an N-terminal and C-terminal region. The four N-terminal clustering mutations affect two adjacent amino acids that are known to be involved in MED13 ubiquitination and degradation, p.Thr326 and p.Pro327. MED13 is a component of the CDK8-kinase module that can reversibly bind Mediator, a multi-protein complex that is required for Polymerase II transcription initiation. Mutations in several other genes encoding subunits of Mediator have been previously shown to associate with DD/ID, including MED13L, a paralog of MED13. Thus, our findings add MED13 to the group of CDK8-kinase module-associated disease genes.
  • Speed, L. J., Wnuk, E., & Majid, A. (2018). Studying psycholinguistics out of the lab. In A. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 190-207). Hoboken: Wiley.

    Abstract

    Traditional psycholinguistic studies take place in controlled experimental labs and typically involve testing undergraduate psychology or linguistics students. Investigating psycholinguistics in this manner calls into question the external validity of findings, that is, the extent to which research findings generalize across languages and cultures, as well as ecologically valid settings. Here we consider three ways in which psycholinguistics can be taken out of the lab. First, researchers can conduct cross-cultural fieldwork in diverse languages and cultures. Second, they can conduct online experiments or experiments in institutionalized public spaces (e.g., museums) to obtain large, diverse participant samples. And, third, researchers can perform studies in more ecologically valid settings, to increase the real-world generalizability of findings. By moving away from the traditional lab setting, psycholinguists can enrich their understanding of language use in all its rich and diverse contexts.
  • Speed, L. J., & Majid, A. (2018). An exception to mental simulation: No evidence for embodied odor language. Cognitive Science, 42(4), 1146-1178. doi:10.1111/cogs.12593.

    Abstract

    Do we mentally simulate olfactory information? We investigated mental simulation of odors and sounds in two experiments. Participants retained a word while they smelled an odor or heard a sound, then rated odor/sound intensity and recalled the word. Later odor/sound recognition was also tested, and pleasantness and familiarity judgments were collected. Word recall was slower when the sound and sound-word mismatched (e.g., bee sound with the word typhoon). Sound recognition was higher when sounds were paired with a match or near-match word (e.g., bee sound with bee or buzzer). This indicates sound-words are mentally simulated. However, using the same paradigm no memory effects were observed for odor. Instead it appears odor-words only affect lexical-semantic representations, demonstrated by higher ratings of odor intensity and pleasantness when an odor was paired with a match or near-match word (e.g., peach odor with peach or mango). These results suggest fundamental differences in how odor and sound-words are represented.

    Additional information

    cogs12593-sup-0001-SupInfo.docx
  • Speed, L., & Majid, A. (2018). Music and odor in harmony: A case of music-odor synaesthesia. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 2527-2532). Austin, TX: Cognitive Science Society.

    Abstract

    We report an individual with music-odor synaesthesia who experiences automatic and vivid odor sensations when she hears music. S’s odor associations were recorded on two days, and compared with those of two control participants. Overall, S produced longer descriptions, and her associations were of multiple odors at once, in comparison to controls who typically reported a single odor. Although odor associations were qualitatively different between S and controls, ratings of the consistency of their descriptions did not differ. This demonstrates that crossmodal associations between music and odor exist in non-synaesthetes too. We also found that S is better at discriminating between odors than control participants, and is more likely to experience emotion, memories and evaluations triggered by odors, demonstrating the broader impact of her synaesthesia.

    Additional information

    link to conference website
  • Speed, L. J., & Majid, A. (2018). Superior olfactory language and cognition in odor-color synaesthesia. Journal of Experimental Psychology: Human Perception and Performance, 44(3), 468-481. doi:10.1037/xhp0000469.

    Abstract

    Olfaction is often considered a vestigial sense in humans, demoted throughout evolution to make way for the dominant sense of vision. This perspective on olfaction is reflected in how we think and talk about smells in the West, with odor imagery and odor language reported to be difficult. In the present study we demonstrate odor cognition is superior in odor-color synaesthesia, where there are additional sensory connections to odor concepts. Synaesthesia is a neurological phenomenon in which input in 1 modality leads to involuntary perceptual associations. Semantic accounts of synaesthesia posit synaesthetic associations are mediated by activation of inducing concepts. Therefore, synaesthetic associations may strengthen conceptual representations. To test this idea, we ran 6 odor-color synaesthetes and 17 matched controls on a battery of tasks exploring odor and color cognition. We found synaesthetes outperformed controls on tests of both odor and color discrimination, demonstrating for the first time enhanced perception in both the inducer (odor) and concurrent (color) modality. So, not only do synaesthetes have additional perceptual experiences in comparison to controls, their primary perceptual experience is also different. Finally, synaesthetes were more consistent and accurate at naming odors. We propose synaesthetic associations to odors strengthen odor concepts, making them more differentiated (facilitating odor discrimination) and easier to link with lexical representations (facilitating odor naming). In summary, we show for the first time that both odor language and perception are enhanced in people with synaesthetic associations to odors.
  • Starreveld, P. A., La Heij, W., & Verdonschot, R. G. (2013). Time course analysis of the effects of distractor frequency and categorical relatedness in picture naming: An evaluation of the response exclusion account. Language and Cognitive Processes, 28(5), 633-654. doi:10.1080/01690965.2011.608026.

    Abstract

    The response exclusion account (REA), advanced by Mahon and colleagues, localises the distractor frequency effect and the semantic interference effect in picture naming at the level of the response output buffer. We derive four predictions from the REA: (1) the size of the distractor frequency effect should be identical to the frequency effect obtained when distractor words are read aloud, (2) the distractor frequency effect should not change in size when stimulus-onset asynchrony (SOA) is manipulated, (3) the interference effect induced by a distractor word (as measured from a nonword control distractor) should increase in size with increasing SOA, and (4) the word frequency effect and the semantic interference effect should be additive. The results of the picture-naming task in Experiment 1 and the word-reading task in Experiment 2 refute all four predictions. We discuss a tentative account of the findings obtained within a traditional selection-by-competition model in which both context effects are localised at the level of lexical selection.
  • Stephens, S., Hartz, S., Hoft, N., Saccone, N., Corley, R., Hewitt, J., Hopfer, C., Breslau, N., Coon, H., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Han, Y., Hansel, N., Jiang, C., Korhonen, T., Lind, P., Liu, J., Michel, M., Lyytikäinen, L.-P., Shaffer, J., Short, S., Sun, J., Teumer, A., Thompson, J., Vogelzangs, N., Vink, J., Wenzlaff, A., Wheeler, W., Yang, B.-Z., Aggen, S., Balmforth, A., Baumesiter, S., Beaty, T., Benjamin, D., Bergen, A., Broms, U., Cesarini, D., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D., Foround, T., Furberg, H., Giegling, I., Gillespie, N., Gu, F., Hall, A., Hällfors, J., Han, S., Hartmann, A., Heikkilä, K., Hickie, I., Hottenga, J., Jousilahti, P., Kaakinen, M., Kähönen, M., Koellinger, P., Kittner, S., Konte, B., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S., Mathias, R., McNeil, D., Medlund, S., Montgomery, G., Murray, T., Nauck, M., North, K., Paré, P., Pergadia, M., Ruczinski, I., Salomaa, V., Viikari, J., Willemsen, G., Barnes, K., Boerwinkle, E., Boomsma, D., Caporaso, N., Edenberg, H., Francks, C., Gelernter, J., Grabe, H., Hops, H., Jarvelin, M.-R., Johannesson, M., Kendler, K., Lehtimäki, T., Magnusson, P., Marazita, M., Marchini, J., Mitchell, B., Nöthen, M., Penninx, B., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N., Schwartz, A., Shete, S., Spitz, M., Swan, G., Völzke, H., Veijola, J., Wei, Q., Amos, C., Canon, D., Grucza, R., Hatsukami, D., Heath, A., Johnson, E., Kaprio, J., Madden, P., Martin, N., Stevens, V., Weiss, R., Kraft, P., Bierut, L., & Ehringer, M. (2013). Distinct Loci in the CHRNA5/CHRNA3/CHRNB4 Gene Cluster are Associated with Onset of Regular Smoking. Genetic Epidemiology, 37, 846-859. doi:10.1002/gepi.21760.

    Abstract

    Neuronal nicotinic acetylcholine receptor (nAChR) genes (CHRNA5/CHRNA3/CHRNB4) have been reproducibly associated with nicotine dependence, smoking behaviors, and lung cancer risk. Of the few reports that have focused on early smoking behaviors, association results have been mixed. This meta-analysis examines early smoking phenotypes and SNPs in the gene cluster to determine: (1) whether the most robust association signal in this region (rs16969968) for other smoking behaviors is also associated with early behaviors, and/or (2) if additional statistically independent signals are important in early smoking. We focused on two phenotypes: age of tobacco initiation (AOI) and age of first regular tobacco use (AOS). This study included 56,034 subjects (41 groups) spanning nine countries and evaluated five SNPs including rs1948, rs16969968, rs578776, rs588765, and rs684513. Each dataset was analyzed using a centrally generated script. Meta-analyses were conducted from summary statistics. AOS yielded significant associations with SNPs rs578776 (beta = 0.02, P = 0.004), rs1948 (beta = 0.023, P = 0.018), and rs684513 (beta = 0.032, P = 0.017), indicating protective effects. There were no significant associations for the AOI phenotype. Importantly, rs16969968, the most replicated signal in this region for nicotine dependence, cigarettes per day, and cotinine levels, was not associated with AOI (P = 0.59) or AOS (P = 0.92). These results provide important insight into the complexity of smoking behavior phenotypes, and suggest that association signals in the CHRNA5/A3/B4 gene cluster affecting early smoking behaviors may be different from those affecting the mature nicotine dependence phenotype.
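    For readers unfamiliar with meta-analysis from summary statistics, the sketch below shows a generic fixed-effect, inverse-variance-weighted combination of per-study effect estimates for a single SNP. The numbers are invented and the code is not the consortium's centrally generated script.

```python
# Generic fixed-effect inverse-variance meta-analysis from summary statistics.
# Illustrative only; invented numbers, not the consortium's script or data.
import numpy as np
from scipy import stats

# Per-study effect estimates (beta) and standard errors for one SNP.
betas = np.array([0.020, 0.035, 0.012, 0.028])
ses = np.array([0.010, 0.015, 0.009, 0.012])

weights = 1.0 / ses**2                     # inverse-variance weights
beta_meta = np.sum(weights * betas) / np.sum(weights)
se_meta = np.sqrt(1.0 / np.sum(weights))
z = beta_meta / se_meta
p = 2 * stats.norm.sf(abs(z))              # two-sided p-value

print(f"meta-analytic beta = {beta_meta:.4f}, SE = {se_meta:.4f}, p = {p:.3g}")
```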

  • Stewart, L., Verdonschot, R. G., Nasralla, P., & Lanipekun, J. (2013). Action–perception coupling in pianists: Learned mappings or spatial musical association of response codes (SMARC) effect? Quarterly Journal of Experimental Psychology, 66(1), 37-50. doi:10.1080/17470218.2012.687385.

    Abstract

    The principle of common coding suggests that a joint representation is formed when actions are repeatedly paired with a specific perceptual event. Musicians are occupationally specialized with regard to the coupling between actions and their auditory effects. In the present study, we employed a novel paradigm to demonstrate automatic action–effect associations in pianists. Pianists and nonmusicians pressed keys according to aurally presented number sequences. Numbers were presented at pitches that were neutral, congruent, or incongruent with respect to pitches that would normally be produced by such actions. Response time differences were seen between congruent and incongruent sequences in pianists alone. A second experiment was conducted to determine whether these effects could be attributed to the existence of previously documented spatial/pitch compatibility effects. In a “stretched” version of the task, the pitch distance over which the numbers were presented was enlarged to a range that could not be produced by the hand span used in Experiment 1. The finding of a larger response time difference between congruent and incongruent trials in the original, standard, version compared with the stretched version, in pianists, but not in nonmusicians, indicates that the effects obtained are, at least partially, attributable to learned action effects.
  • Stivers, T., & Sidnell, J. (Eds.). (2013). The handbook of conversation analysis. Malden, MA: Wiley-Blackwell.

    Abstract

    Presenting a comprehensive, state-of-the-art overview of theoretical and descriptive research in the field, The Handbook of Conversation Analysis brings together contributions by leading international experts to provide an invaluable information resource and reference for scholars of social interaction across the areas of conversation analysis, discourse analysis, linguistic anthropology, interpersonal communication, discursive psychology and sociolinguistics. Ideal as an introduction to the field for upper level undergraduates and as an in-depth review of the latest developments for graduate level students and established scholars. Five sections outline the history and theory, methods, fundamental concepts, and core contexts in the study of conversation, as well as topics central to conversation analysis. Written by international conversation analysis experts, the book covers a wide range of topics and disciplines, from reviewing underlying structures of conversation, to describing conversation analysis' relationship to anthropology, communication, linguistics, psychology, and sociology.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2018). Heritage language exposure impacts voice onset time of Dutch–German simultaneous bilingual preschoolers. Bilingualism: Language and Cognition, 21(3), 598-617. doi:10.1017/S1366728917000116.

    Abstract

    This study assesses the effects of age and language exposure on VOT production in 29 simultaneous bilingual children aged 3;7 to 5;11 who speak German as a heritage language in the Netherlands. Dutch and German have a binary voicing contrast, but the contrast is implemented with different VOT values in the two languages. The results suggest that bilingual children produce ‘voiced’ plosives similarly in their two languages, and these productions are not monolingual-like in either language. Bidirectional cross-linguistic influence between Dutch and German can explain these results. Yet, the bilinguals seemingly have two autonomous categories for Dutch and German ‘voiceless’ plosives. In German, the bilinguals’ aspiration is not monolingual-like, but bilinguals with more heritage language exposure produce more target-like aspiration. Importantly, the amount of exposure to German has no effect on the majority language's ‘voiceless’ category. This implies that more heritage language exposure is associated with more language-specific voicing systems.
  • Stoehr, A. (2018). Speech production, perception, and input of simultaneous bilingual preschoolers: Evidence from voice onset time. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Stolk, A., Griffin, S., Van der Meij, R., Dewar, C., Saez, I., Lin, J. J., Piantoni, G., Schoffelen, J.-M., Knight, R. T., & Oostenveld, R. (2018). Integrated analysis of anatomical and electrophysiological human intracranial data. Nature Protocols, 13, 1699-1723. doi:10.1038/s41596-018-0009-6.

    Abstract

    Human intracranial electroencephalography (iEEG) recordings provide data with much greater spatiotemporal precision than is possible from data obtained using scalp EEG, magnetoencephalography (MEG), or functional MRI. Until recently, the fusion of anatomical data (MRI and computed tomography (CT) images) with electrophysiological data and their subsequent analysis have required the use of technologically and conceptually challenging combinations of software. Here, we describe a comprehensive protocol that enables complex raw human iEEG data to be converted into more readily comprehensible illustrative representations. The protocol uses an open-source toolbox for electrophysiological data analysis (FieldTrip). This allows iEEG researchers to build on a continuously growing body of scriptable and reproducible analysis methods that, over the past decade, have been developed and used by a large research community. In this protocol, we describe how to analyze complex iEEG datasets by providing an intuitive and rapid approach that can handle both neuroanatomical information and large electrophysiological datasets. We provide a worked example using an example dataset. We also explain how to automate the protocol and adjust the settings to enable analysis of iEEG datasets with other characteristics. The protocol can be implemented by a graduate student or postdoctoral fellow with minimal MATLAB experience and takes approximately an hour to execute, excluding the automated cortical surface extraction.
  • Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.

    Abstract

    Human referential communication is often thought of as coding-decoding a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged already before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent from transient sensorimotor behavior.
  • Stolk, A., Todorovic, A., Schoffelen, J.-M., & Oostenveld, R. (2013). Online and offline tools for head movement compensation in MEG. NeuroImage, 68, 39-48. doi:10.1016/j.neuroimage.2012.11.047.

    Abstract

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments.
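    As a rough illustration of the offline approach (entering the head position time-series into the general linear model), this Python sketch fits an ordinary-least-squares model to simulated trial-level amplitudes with and without head-position nuisance regressors. The simulated data and variable names are ours; this is not the authors' implementation.

```python
# Sketch: regress trial-level MEG amplitudes on a condition predictor while
# adding head-position time-series as nuisance regressors (simulated data only).
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

condition = rng.integers(0, 2, n_trials)           # 0/1 experimental condition
head_pos = rng.normal(size=(n_trials, 6))           # x, y, z translations + rotations per trial
true_effect = 0.5

# Simulated sensor amplitude: condition effect + head-motion artefact + noise.
amplitude = (true_effect * condition
             + head_pos @ np.array([0.8, -0.3, 0.2, 0.1, 0.0, 0.4])
             + rng.normal(scale=1.0, size=n_trials))

def condition_beta(y, X):
    """OLS estimate of the condition effect (first column after the intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

X_simple = np.column_stack([np.ones(n_trials), condition])
X_full = np.column_stack([np.ones(n_trials), condition, head_pos])

print("condition effect, no head regressors:  ", round(condition_beta(amplitude, X_simple), 3))
print("condition effect, with head regressors:", round(condition_beta(amplitude, X_full), 3))
```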
  • Sulik, J. (2018). Cognitive mechanisms for inferring the meaning of novel signals during symbolisation. PLoS One, 13(1): e0189540. doi:10.1371/journal.pone.0189540.

    Abstract

    As participants repeatedly interact using graphical signals (as in a game of Pictionary), the signals gradually shift from being iconic (or motivated) to being symbolic (or arbitrary). The aim here is to test experimentally whether this change in the form of the signal implies a concomitant shift in the inferential mechanisms needed to understand it. The results show that, during early, iconic stages, there is more reliance on creative inferential processes associated with insight problem solving, and that the recruitment of these cognitive mechanisms decreases over time. The variation in inferential mechanism is not predicted by the sign’s visual complexity or iconicity, but by its familiarity, and by the complexity of the relevant mental representations. The discussion explores implications for pragmatics, language evolution, and iconicity research.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Sumner, M., Kurumada, C., Gafter, R., & Casillas, M. (2013). Phonetic variation and the recognition of words with pronunciation variants. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3486-3492). Austin, TX: Cognitive Science Society.
  • Tamariz, M., Roberts, S. G., Martínez, J. I., & Santiago, J. (2018). The Interactive Origin of Iconicity. Cognitive Science, 42, 334-349. doi:10.1111/cogs.12497.

    Abstract

    We investigate the emergence of iconicity, specifically a bouba-kiki effect in miniature artificial languages under different functional constraints: when the languages are reproduced and when they are used communicatively. We ran transmission chains of (a) participant dyads who played an interactive communicative game and (b) individual participants who played a matched learning game. An analysis of the languages over six generations in an iterated learning experiment revealed that in the Communication condition, but not in the Reproduction condition, words for spiky shapes tend to be rated by naive judges as more spiky than the words for round shapes. This suggests that iconicity may not only be the outcome of innovations introduced by individuals, but, crucially, the result of interlocutor negotiation of new communicative conventions. We interpret our results as an illustration of cultural evolution by random mutation and selection (as opposed to by guided variation).
  • Tan, Y., & Martin, R. C. (2018). Verbal short-term memory capacities and executive function in semantic and syntactic interference resolution during sentence comprehension: Evidence from aphasia. Neuropsychologia, 113, 111-125. doi:10.1016/j.neuropsychologia.2018.03.001.

    Abstract

    This study examined the role of verbal short-term memory (STM) and executive function (EF) underlying semantic and syntactic interference resolution during sentence comprehension for persons with aphasia (PWA) with varying degrees of STM and EF deficits. Semantic interference was manipulated by varying the semantic plausibility of the intervening NP as subject of the verb and syntactic interference was manipulated by varying whether the NP was another subject or an object. Nine PWA were assessed on sentence reading times and on comprehension question performance. PWA showed exaggerated semantic and syntactic interference effects relative to healthy age-matched control subjects. Importantly, correlational analyses showed that while answering comprehension questions, PWA's semantic STM capacity related to their ability to resolve semantic but not syntactic interference. In contrast, PWA's EF abilities related to their ability to resolve syntactic but not semantic interference. Phonological STM deficits were not related to the ability to resolve either type of interference. The results for semantic STM are consistent with prior findings indicating a role for semantic but not phonological STM in sentence comprehension, specifically with regard to maintaining semantic information prior to integration. The results for syntactic interference are consistent with the recent findings suggesting that EF is critical for syntactic processing.
  • Tan, Y., Martin, R. C., & Van Dyke, J. (2013). Verbal WM capacities in sentence comprehension: Evidence from aphasia. Procedia - Social and Behavioral Sciences, 94, 108-109. doi:10.1016/j.sbspro.2013.09.052.
  • Teeling, E., Vernes, S. C., Davalos, L. M., Ray, D. A., Gilbert, M. T. P., Myers, E., & Bat1K Consortium (2018). Bat biology, genomes, and the Bat1K project: To generate chromosome-level genomes for all living bat species. Annual Review of Animal Biosciences, 6, 23-46. doi:10.1146/annurev-animal-022516-022811.

    Abstract

    Bats are unique among mammals, possessing some of the rarest mammalian adaptations, including true self-powered flight, laryngeal echolocation, exceptional longevity, unique immunity, contracted genomes, and vocal learning. They provide key ecosystem services, pollinating tropical plants, dispersing seeds, and controlling insect pest populations, thus driving healthy ecosystems. They account for more than 20% of all living mammalian diversity, and their crown-group evolutionary history dates back to the Eocene. Despite their great numbers and diversity, many species are threatened and endangered. Here we announce Bat1K, an initiative to sequence the genomes of all living bat species (n∼1,300) to chromosome-level assembly. The Bat1K genome consortium unites bat biologists (>132 members as of writing), computational scientists, conservation organizations, genome technologists, and any interested individuals committed to a better understanding of the genetic and evolutionary mechanisms that underlie the unique adaptations of bats. Our aim is to catalog the unique genetic diversity present in all living bats to better understand the molecular basis of their unique adaptations; uncover their evolutionary history; link genotype with phenotype; and ultimately better understand, promote, and conserve bats. Here we review the unique adaptations of bats and highlight how chromosome-level genome assemblies can uncover the molecular basis of these traits. We present a novel sequencing and assembly strategy and review the striking societal and scientific benefits that will result from the Bat1K initiative.
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2018). Analyzing reaction time sequences from human participants in auditory experiments. In Proceedings of Interspeech 2018 (pp. 971-975). doi:10.21437/Interspeech.2018-1728.

    Abstract

    Sequences of reaction times (RT) produced by participants in an experiment are not only influenced by the stimuli, but by many other factors as well, including fatigue, attention, experience, IQ, handedness, etc. These confounding factors result in long-term effects (such as a participant's overall reaction capability) and in short- and medium-time fluctuations in RTs (often referred to as 'local speed effects'). Because stimuli are usually presented in a random sequence different for each participant, local speed effects affect the underlying 'true' RTs of specific trials in different ways across participants. To be able to focus statistical analysis on the effects of the cognitive process under study, it is necessary to reduce the effect of confounding factors as much as possible. In this paper we propose and compare techniques and criteria for doing so, with focus on reducing ('filtering') the local speed effects. We show that filtering matters substantially for the significance analyses of predictors in linear mixed effect regression models. The performance of filtering is assessed by the average between-participant correlation between filtered RT sequences and by Akaike's Information Criterion, an important measure of the goodness-of-fit of linear mixed effect regression models.
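    One simple way to approximate the 'local speed' filtering discussed in this abstract is to subtract, per participant, a centred running average of the log reaction times; the filtered values (or the local-speed estimate as a covariate) can then enter a linear mixed-effects regression. The Python sketch below uses invented data and is only one of several possible filters, not necessarily the one the authors propose.

```python
# Sketch: remove slow, participant-specific fluctuations ('local speed effects')
# from an RT sequence by subtracting a centred running average of log RTs.
# Invented data; one of several possible filters, not necessarily the authors'.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_trials = 300

# Simulated log RTs: slow drift (fatigue/attention) plus trial-level noise.
drift = np.cumsum(rng.normal(scale=0.01, size=n_trials))
log_rt = 6.4 + drift + rng.normal(scale=0.15, size=n_trials)

rts = pd.Series(log_rt)
local_speed = rts.rolling(window=21, center=True, min_periods=5).mean()
filtered = rts - local_speed            # residual log RT with local speed removed

# The filtered values would then serve as the dependent variable in a
# linear mixed-effects model (or local_speed could be added as a covariate).
print(filtered.describe())
```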
  • Ten Oever, S., Sack, A. T., Wheat, K. L., Bien, N., & Van Atteveldt, N. (2013). Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs. Frontiers in Psychology, 4: 331. doi:10.3389/fpsyg.2013.00331.

    Abstract

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.
  • Ten Bosch, L., & Boves, L. (2018). Information encoding by deep neural networks: what can we learn? In Proceedings of Interspeech 2018 (pp. 1457-1461). doi:10.21437/Interspeech.2018-1896.

    Abstract

    The recent advent of deep learning techniques in speech technology, and in particular in automatic speech recognition, has yielded substantial performance improvements. This suggests that deep neural networks (DNNs) are able to capture structure in speech data that older methods for acoustic modeling, such as Gaussian Mixture Models and shallow neural networks, fail to uncover. In image recognition it is possible to link representations on the first couple of layers in DNNs to structural properties of images, and to representations on early layers in the visual cortex. This raises the question whether it is possible to accomplish a similar feat with representations on DNN layers when processing speech input. In this paper we present three different experiments in which we attempt to untangle how DNNs encode speech signals, and to relate these representations to phonetic knowledge, with the aim to advance conventional phonetic concepts and to choose the topology of a DNN more efficiently. Two experiments investigate representations formed by auto-encoders. A third experiment investigates representations on convolutional layers that treat speech spectrograms as if they were images. The results lay the basis for future experiments with recursive networks.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension do not start from the speech signal itself, but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulation decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor for the average judgment and reaction time for each word.
  • Thompson, B., & Lupyan, G. (2018). Automatic estimation of lexical concreteness in 77 languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1122-1127). Austin, TX: Cognitive Science Society.

    Abstract

    We estimate lexical Concreteness for millions of words across 77 languages. Using a simple regression framework, we combine vector-based models of lexical semantics with experimental norms of Concreteness in English and Dutch. By applying techniques to align vector-based semantics across distinct languages, we compute and release Concreteness estimates at scale in numerous languages for which experimental norms are not currently available. This paper lays out the technique and its efficacy. Although this is a difficult dataset to evaluate immediately, Concreteness estimates computed from English correlate with Dutch experimental norms at ρ = .75 in the vocabulary at large, increasing to ρ = .8 among Nouns. Our predictions also recapitulate attested relationships with word frequency. The approach we describe can be readily applied to numerous lexical measures beyond Concreteness.
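    The regression framework sketched in this abstract can be illustrated as follows: fit a ridge regression from word vectors to available concreteness norms and extrapolate to the rest of the vocabulary (after cross-lingual alignment of the embedding spaces, which is omitted here). The vectors, norms, and penalty below are random placeholders, not the authors' released estimates.

```python
# Sketch: predict lexical concreteness from word vectors with ridge regression,
# then score words without norms. Random placeholder vectors and norms only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_words, dim = 1000, 300

vectors = rng.normal(size=(n_words, dim))           # stand-in for word embeddings
true_w = rng.normal(size=dim)
concreteness = vectors @ true_w + rng.normal(scale=0.5, size=n_words)  # stand-in norms

model = Ridge(alpha=1.0)
r2 = cross_val_score(model, vectors, concreteness, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 on held-out words: {r2:.2f}")

# Fit on all normed words, then extrapolate to un-normed vocabulary
# (in the cross-lingual case, after aligning the embedding spaces).
model.fit(vectors, concreteness)
new_vectors = rng.normal(size=(5, dim))
print(model.predict(new_vectors))
```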
  • Thompson, B., Roberts, S., & Lupyan, G. (2018). Quantifying semantic similarity across languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 2551-2556). Austin, TX: Cognitive Science Society.

    Abstract

    Do all languages convey semantic knowledge in the same way? If language simply mirrors the structure of the world, the answer should be a qualified “yes”. If, however, languages impose structure as much as reflecting it, then even ostensibly the “same” word in different languages may mean quite different things. We provide a first pass at a large-scale quantification of cross-linguistic semantic alignment of approximately 1000 meanings in 55 languages. We find that the translation equivalents in some domains (e.g., Time, Quantity, and Kinship) exhibit high alignment across languages while the structure of other domains (e.g., Politics, Food, Emotions, and Animals) exhibits substantial cross-linguistic variability. Our measure of semantic alignment correlates with known phylogenetic distances between languages: more phylogenetically distant languages have less semantic alignment. We also find semantic alignment to correlate with cultural distances between societies speaking the languages, suggesting a rich co-adaptation of language and culture even in domains of experience that appear most constrained by the natural world.
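    A minimal version of a cross-linguistic alignment score of the kind described here is the correlation between the within-language similarity structures of translation-equivalent word lists. The Python sketch below uses random vectors as stand-ins for the two languages' embeddings and is not the authors' exact measure.

```python
# Sketch: semantic alignment between two languages as the correlation of their
# within-language cosine-similarity structures over translation equivalents.
# Random stand-in vectors; not the authors' data or exact measure.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
n_meanings, dim = 200, 300

emb_lang_a = rng.normal(size=(n_meanings, dim))   # vectors for language A
emb_lang_b = rng.normal(size=(n_meanings, dim))   # vectors for the translations in language B

# Pairwise cosine similarities among the same meanings within each language.
sim_a = 1 - pdist(emb_lang_a, metric="cosine")
sim_b = 1 - pdist(emb_lang_b, metric="cosine")

alignment, _ = spearmanr(sim_a, sim_b)
print(f"semantic alignment (Spearman rho): {alignment:.3f}")
```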
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Thorin, J., Sadakata, M., Desain, P., & McQueen, J. M. (2018). Perception and production in interaction during non-native speech category learning. The Journal of the Acoustical Society of America, 144(1), 92-103. doi:10.1121/1.5044415.

    Abstract

    Establishing non-native phoneme categories can be a notoriously difficult endeavour—in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.
  • Tian, X., Ding, N., Teng, X., Bai, F., & Poeppel, D. (2018). Imagined speech influences perceived loudness of sound. Nature Human Behaviour, 2, 225-234. doi:10.1038/s41562-018-0305-8.

    Abstract

    The way top-down and bottom-up processes interact to shape our perception and behaviour is a fundamental question and remains highly controversial. How early in a processing stream do such interactions occur, and what factors govern such interactions? The degree of abstractness of a perceptual attribute (for example, orientation versus shape in vision, or loudness versus sound identity in hearing) may determine the locus of neural processing and interaction between bottom-up and internal information. Using an imagery-perception repetition paradigm, we find that imagined speech affects subsequent auditory perception, even for a low-level attribute such as loudness. This effect is observed in early auditory responses in magnetoencephalography and electroencephalography that correlate with behavioural loudness ratings. The results suggest that the internal reconstruction of neural representations without external stimulation is flexibly regulated by task demands, and that such top-down processes can interact with bottom-up information at an early perceptual stage to modulate perception.
  • Tilot, A. K., Kucera, K. S., Vino, A., Asher, J. E., Baron-Cohen, S., & Fisher, S. E. (2018). Rare variants in axonogenesis genes connect three families with sound–color synesthesia. Proceedings of the National Academy of Sciences of the United States of America, 115(12), 3168-3173. doi:10.1073/pnas.1715492115.

    Abstract

    Synesthesia is a rare nonpathological phenomenon where stimulation of one sense automatically provokes a secondary perception in another. Hypothesized to result from differences in cortical wiring during development, synesthetes show atypical structural and functional neural connectivity, but the underlying molecular mechanisms are unknown. The trait also appears to be more common among people with autism spectrum disorder and savant abilities. Previous linkage studies searching for shared loci of large effect size across multiple families have had limited success. To address the critical lack of candidate genes, we applied whole-exome sequencing to three families with sound–color (auditory–visual) synesthesia affecting multiple relatives across three or more generations. We identified rare genetic variants that fully cosegregate with synesthesia in each family, uncovering 37 genes of interest. Consistent with reports indicating genetic heterogeneity, no variants were shared across families. Gene ontology analyses highlighted six genes—COL4A1, ITGA2, MYO10, ROBO3, SLC9A6, and SLIT2—associated with axonogenesis and expressed during early childhood when synesthetic associations are formed. These results are consistent with neuroimaging-based hypotheses about the role of hyperconnectivity in the etiology of synesthesia and offer a potential entry point into the neurobiology that organizes our sensory experiences.

    Additional information

    Tilot_etal_2018SI.pdf
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Tornero, D., Wattananit, S., Madsen, M. G., Koch, P., Wood, J., Tatarishvili, J., Mine, Y., Ge, R., Monni, E., Devaraju, K., Hevner, R. F., Bruestle, O., Lindval, O., & Kokaia, Z. (2013). Human induced pluripotent stem cell-derived cortical neurons integrate in stroke-injured cortex and improve functional recovery. Brain, 136(12), 3561-3577. doi:10.1093/brain/awt278.

    Abstract

    Stem cell-based approaches to restore function after stroke through replacement of dead neurons require the generation of specific neuronal subtypes. Loss of neurons in the cerebral cortex is a major cause of stroke-induced neurological deficits in adult humans. Reprogramming of adult human somatic cells to induced pluripotent stem cells is a novel approach to produce patient-specific cells for autologous transplantation. Whether such cells can be converted to functional cortical neurons that survive and give rise to behavioural recovery after transplantation in the stroke-injured cerebral cortex is not known. We have generated progenitors in vitro, expressing specific cortical markers and giving rise to functional neurons, from long-term self-renewing neuroepithelial-like stem cells, produced from adult human fibroblast-derived induced pluripotent stem cells. At 2 months after transplantation into the stroke-damaged rat cortex, the cortically fated cells showed less proliferation and more efficient conversion to mature neurons with morphological and immunohistochemical characteristics of a cortical phenotype and higher axonal projection density as compared with non-fated cells. Pyramidal morphology and localization of the cells expressing the cortex-specific marker TBR1 in a certain layered pattern provided further evidence supporting the cortical phenotype of the fated, grafted cells, and electrophysiological recordings demonstrated their functionality. Both fated and non-fated cell-transplanted groups showed bilateral recovery of the impaired function in the stepping test compared with vehicle-injected animals. The behavioural improvement at this early time point was most likely not due to neuronal replacement and reconstruction of circuitry. At 5 months after stroke in immunocompromised rats, there was no tumour formation and the grafted cells exhibited electrophysiological properties of mature neurons with evidence of integration in host circuitry. Our findings show, for the first time, that human skin-derived induced pluripotent stem cells can be differentiated to cortical neuronal progenitors, which survive, differentiate to functional neurons and improve neurological outcome after intracortical implantation in a rat stroke model.
  • Torreira, F., & Grice, M. (2018). Melodic constructions in Spanish: Metrical structure determines the association properties of intonational tones. Journal of the International Phonetic Association, 48(1), 9-32. doi:10.1017/S0025100317000603.

    Abstract

    This paper explores phrase-length-related alternations in the association of tones to positions in metrical structure in two melodic constructions of Spanish. An imitation-and-completion task eliciting (a) the low–falling–rising contour and (b) the circumflex contour on intonation phrases (IPs) of one, two, and three prosodic words revealed that, although the focus structure and pragmatic context is constant across conditions, phrases containing one prosodic word differ in their nuclear (i.e. final) pitch accents and edge tones from phrases containing more than one prosodic word. For contour (a), short intonation phrases (e.g. [Manolo]IP) were produced with a low accent followed by a high edge tone (L* H% in ToBI notation), whereas longer phrases (e.g. [El hermano de la amiga de Manolo]IP ‘Manolo’s friend’s brother’) had a low accent on the first stressed syllable, a rising accent on the last stressed syllable, and a low edge tone (L* L+H* L%). For contour (b), short phrases were produced with a high rise (L+H* ¡H%), whereas longer phrases were produced with an initial accentual rise followed by an upstepped rise–fall (L+H* ¡H* L%). These findings imply that the common practice of describing the structure of intonation contours as consisting of a constant nuclear pitch accent and following edge tone is not adequate for modeling Spanish intonation. To capture the observed melodic alternations, we argue for clearer separation between tones and metrical structure, whereby intonational tones do not necessarily have an intrinsic culminative or delimitative function (i.e. as pitch accents or as edge tones). Instead, this function results from melody-specific principles of tonal–metrical association.
  • Tourtouri, E. N., Delogu, F., & Crocker, M. W. (2018). Specificity and entropy reduction in situated referential processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 3356-3361). Austin: Cognitive Science Society.

    Abstract

    In situated communication, reference to an entity in the shared visual context can be established using either an expression that conveys precise (minimally specified) or redundant (over-specified) information. There is, however, a long-lasting debate in psycholinguistics concerning whether the latter hinders referential processing. We present evidence from an eye-tracking experiment recording fixations as well as the Index of Cognitive Activity, a novel measure of cognitive workload, supporting the view that over-specifications facilitate processing. We further present original evidence that, above and beyond the effect of specificity, referring expressions that uniformly reduce referential entropy also benefit processing.
  • Tribushinina, E., Mak, M., Dubinkina, E., & Mak, W. M. (2018). Adjective production by Russian-speaking children with developmental language disorder and Dutch–Russian simultaneous bilinguals: Disentangling the profiles. Applied Psycholinguistics, 39(5), 1033-1064. doi:10.1017/S0142716418000115.

    Abstract

    Bilingual children with reduced exposure to one or both languages may have language profiles that are apparently similar to those of children with developmental language disorder (DLD). Children with DLD receive enough input, but have difficulty using this input for acquisition due to processing deficits. The present investigation aims to determine aspects of adjective production that are differentially affected by reduced input (in bilingualism) and reduced intake (in DLD). Adjectives were elicited from Dutch–Russian simultaneous bilinguals with limited exposure to Russian and Russian-speaking monolinguals with and without DLD. An antonym elicitation task was used to assess the size of adjective vocabularies, and a degree task was employed to compare the preferences of the three groups in the use of morphological, lexical, and syntactic degree markers. The results revealed that adjective–noun agreement is affected to the same extent by both reduced input and reduced intake. The size of adjective lexicons is also negatively affected by both, but more so by reduced exposure. However, production of morphological degree markers and learning of semantic paradigms are areas of relative strength in which bilinguals outperform monolingual children with DLD. We suggest that reduced input might be counterbalanced by linguistic and cognitive advantages of bilingualism.
  • Tromp, J. (2018). Indirect request comprehension in different contexts. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.

    Abstract

    When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
  • Trompenaars, T. (2018). Empathy for the inanimate. Linguistics in the Netherlands, 35, 125-138. doi:10.1075/avt.00009.tro.

    Abstract

    Narrative fiction may invite us to share the perspective of characters which are very much unlike ourselves. Inanimate objects featuring as protagonists or narrators are an extreme example of this. The way readers experience these characters was examined by means of a narrative immersion study. Participants (N = 200) judged narratives containing animate or inanimate characters in predominantly Agent or Experiencer roles. Narratives with inanimate characters were judged to be less emotionally engaging. This effect was influenced by the dominant thematic role associated with the character: inanimate Agents led to more defamiliarization compared to their animate counterparts than inanimate Experiencers. I argue for an integrated account of thematic roles and animacy in literary experience and linguistics in general.
  • Trompenaars, T., Hogeweg, L., Stoop, W., & De Hoop, H. (2018). The language of an inanimate narrator. Open Linguistics, 4, 707-721. doi:10.1515/opli-2018-0034.

    Abstract

    We show by means of a corpus study that the language used by the inanimate first person narrator in the novel Specht en zoon deviates from what we would expect on the basis of the fact that the narrator is inanimate, but at the same time also differs from the language of a human narrator in the novel De wijde blik on several linguistic dimensions. Whereas the human narrator is associated strongly with action verbs, preferring the Agent role, the inanimate narrator is much more limited to the Experiencer role, predominantly associated with cognition and sensory verbs. Our results show that animacy as a linguistic concept may be refined by taking into account the myriad ways in which an entity’s conceptual animacy may be expressed: we accept the conceptual animacy of the inanimate narrator despite its inability to act on its environment, showing this need not be a requirement for animacy.
  • Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2018). Communicative intent modulates production and perception of actions and gestures: A Kinect study. Cognition, 180, 38-51. doi:10.1016/j.cognition.2018.04.003.

    Abstract

    Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension.

    We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less- communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues. First, if and how actions and gestures are systematically modulated when performed in a communicative context. Second, if observers exploit such kinematic information to classify an act as communicative.

    Our study showed that during production the communicative context modulates space–time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye-gaze was unavailable.

    Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee directed eye-gaze and kinematic modulations within the wider context of human communication and learning.
  • Tsuji, S., & Cristia, A. (2013). Fifty years of infant vowel discrimination research: What have we learned? Journal of the Phonetic Society of Japan, 17(3), 1-11.
  • Turco, G., Dimroth, C., & Braun, B. (2013). Intonational means to mark verum focus in German and French. Language and Speech., 56(4), 461-491. doi:10.1177/0023830912460506.

    Abstract

    German and French differ in a number of aspects. Regarding the prosody-pragmatics interface, German is said to have a direct focus-to-accent mapping, which is largely absent in French – owing to strong structural constraints. We used a semi-spontaneous dialogue setting to investigate the intonational marking of Verum Focus, a focus on the polarity of an utterance in the two languages (e.g. the child IS tearing the banknote as an opposite claim to the child is not tearing the banknote). When Verum Focus applies to auxiliaries, pragmatic aspects (i.e. highlighting the contrast) directly compete with structural constraints (e.g. avoiding an accent on phonologically weak elements such as monosyllabic function words). Intonational analyses showed that auxiliaries were predominantly accented in German, as expected. Interestingly, we found a high number of (as yet undocumented) focal accents on phrase-initial auxiliaries in French Verum Focus contexts. When French accent patterns were equally distributed across information structural contexts, relative prominence (in terms of peak height) between initial and final accents was shifted towards initial accents in Verum Focus compared to non-Verum Focus contexts. Our data hence suggest that French also may mark Verum Focus by focal accents but that this tendency is partly overridden by strong structural constraints.
  • Udden, J., & Männel, C. (2018). Artificial grammar learning and its neurobiology in relation to language processing and development. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 755-783). Oxford: Oxford University Press.

    Abstract

    The artificial grammar learning (AGL) paradigm enables systematic investigation of the acquisition of linguistically relevant structures. It is a paradigm of interest for language processing research, interfacing with theoretical linguistics, and for comparative research on language acquisition and evolution. This chapter presents a key for understanding major variants of the paradigm. An unbiased summary of neuroimaging findings of AGL is presented, using meta-analytic methods, pointing to the crucial involvement of the bilateral frontal operculum and regions in the right lateral hemisphere. Against a background of robust posterior temporal cortex involvement in processing complex syntax, the evidence for involvement of the posterior temporal cortex in AGL is reviewed. Infant AGL studies testing for neural substrates are reviewed, covering the acquisition of adjacent and non-adjacent dependencies as well as algebraic rules. The language acquisition data suggest that comparisons of learnability of complex grammars performed with adults may now also be possible with children.
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.