Publications

  • Reif, A., Nguyen, T. T., Weißflog, L., Jacob, C. P., Romanos, M., Renner, T. J., Buttenschon, H. N., Kittel-Schneider, S., Gessner, A., Weber, H., Neuner, M., Gross-Lesch, S., Zamzow, K., Kreiker, S., Walitza, S., Meyer, J., Freitag, C. M., Bosch, R., Casas, M., Gómez, N., Ribasès, M., Bayès, M., Buitelaar, J. K., Kiemeney, L. A. L. M., Kooij, J. J. S., Kan, C. C., Hoogman, M., Johansson, S., Jacobsen, K. K., Knappskog, P. M., Fasmer, O. B., Asherson, P., Warnke, A., Grabe, H.-J., Mahler, J., Teumer, A., Völzke, H., Mors, O. N., Schäfer, H., Ramos-Quiroga, J. A., Cormand, B., Haavik, J., Franke, B., & Lesch, K.-P. (2011). DIRAS2 is associated with Adult ADHD, related traits, and co-morbid disorders. Neuropsychopharmacology, 36, 2318-2327. doi:10.1038/npp.2011.120.

    Abstract

    Several linkage analyses implicated the chromosome 9q22 region in attention deficit/hyperactivity disorder (ADHD), a neurodevelopmental disease with remarkable persistence into adulthood. This locus contains the brain-expressed GTP-binding RAS-like 2 gene (DIRAS2) thought to regulate neurogenesis. As DIRAS2 is a positional and functional ADHD candidate gene, we conducted an association study in 600 patients suffering from adult ADHD (aADHD) and 420 controls. Replication samples consisted of 1035 aADHD patients and 1381 controls, as well as 166 families with a child affected by childhood ADHD. Given the high degree of co-morbidity with ADHD, we also investigated patients suffering from bipolar disorder (BD) (n=336) or personality disorders (PDs) (n=622). Twelve single-nucleotide polymorphisms (SNPs) covering the structural gene and the transcriptional control region of DIRAS2 were analyzed. Four SNPs and two haplotype blocks showed evidence of association with ADHD, with nominal p-values ranging from p=0.006 to p=0.05. In the adult replication samples, we obtained a consistent effect of rs1412005 and of a risk haplotype containing the promoter region (p=0.026). Meta-analysis resulted in a significant common OR of 1.12 (p=0.04) for rs1412005 and confirmed association with the promoter risk haplotype (OR=1.45, p=0.0003). Subsequent analysis in nuclear families with childhood ADHD again showed an association of the promoter haplotype block (p=0.02). rs1412005 also increased risk toward BD (p=0.026) and cluster B PD (p=0.031). Additional SNPs showed association with personality scores (p=0.008–0.048). Converging lines of evidence implicate genetic variance in the promoter region of DIRAS2 in the etiology of ADHD and co-morbid impulsive disorders.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate affects the perception of duration as a suprasegmental lexical-stress cue. Language and Speech, 54(2), 147-165. doi:10.1177/0023830910397489.

    Abstract

    Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress placement. Word pairs contrasted primary stress on either the first versus the second syllable or the first versus the third syllable. Duration of the initial or the second syllable of the fragments and rate of the preceding context (fast vs. slow) were manipulated. Listeners used speaking rate to decide about the degree of stress on initial syllables whether the syllables' absolute durations were informative about stress (Experiment 1a) or not (Experiment 1b). Rate effects on the second syllable were visible only when the initial syllable was ambiguous in duration with respect to the preceding rate context (Experiment 2). Absolute second syllable durations contributed little to stress perception (Experiment 3). These results suggest that speaking rate is used to disambiguate words and that rate-modulated stress cues are more important on initial than non-initial syllables. Speaking rate affects perception of suprasegmental information.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate from proximal and distal contexts is used during word segmentation. Journal of Experimental Psychology: Human Perception and Performance, 37, 978-996. doi:10.1037/a0021923.

    Abstract

    A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' “once (s)pear,” [t] in 'nooit (t)rap,' “never staircase/quick”) were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
  • Rekers, Y., Haun, D. B. M., & Tomasello, M. (2011). Children, but not chimpanzees, prefer to collaborate. Current Biology, 21, 1756-1758. doi:10.1016/j.cub.2011.08.066.

    Abstract

    Human societies are built on collaborative activities. Already from early childhood, human children are skillful and proficient collaborators. They recognize when they need help in solving a problem and actively recruit collaborators [1, 2]. The societies of other primates are also to some degree cooperative. Chimpanzees, for example, engage in a variety of cooperative activities such as border patrols, group hunting, and intra- and intergroup coalitionary behavior [3–5]. Recent studies have shown that chimpanzees possess many of the cognitive prerequisites necessary for human-like collaboration. Chimpanzees have been shown to recognize when they need help in solving a problem and to actively recruit good over bad collaborators [6, 7]. However, cognitive abilities might not be all that differs between chimpanzees and humans when it comes to cooperation. Another factor might be the motivation to engage in a cooperative activity. Here, we hypothesized that a key difference between human and chimpanzee collaboration—and so potentially a key mechanism in the evolution of human cooperation—is a simple preference for collaborating (versus acting alone) to obtain food. Our results supported this hypothesis, finding that whereas children strongly prefer to work together with another to obtain food, chimpanzees show no such preference.
  • Reynolds, E., Stagnitti, K., & Kidd, E. (2011). Play, language and social skills of children attending a play-based curriculum school and a traditionally structured classroom curriculum school in low socioeconomic areas. Australasian Journal of Early Childhood, 36(4), 120-130.

    Abstract

    Aim and method: A comparison study of four six-year-old children attending a school with a play-based curriculum and a school with a traditionally structured classroom from low socioeconomic areas was conducted in Victoria, Australia. Children’s play, language and social skills were measured in February and again in August. At baseline assessment there was a combined sample of 31 children (mean age 5.5 years, SD 0.35 years; 13 females and 18 males). At follow-up there was a combined sample of 26 children (mean age 5.9 years, SD 0.35 years; 10 females, 16 males). Results: There was no significant difference between the school groups in play, language, social skills, age and sex at baseline assessment. Compared to norms on a standardised assessment, all the children were beginning school with delayed play ability. At follow-up assessment, children at the play-based curriculum school had made significant gains in all areas assessed (p values ranged from 0.000 to 0.05). Children at the school with the traditional structured classroom had made significant positive gains in use of symbols in play (p < 0.05) and semantic language (p < 0.05). At follow-up, there were significant differences between schools in elaborate play (p < 0.000), semantic language (p < 0.000), narrative language (p < 0.01) and social connection (p < 0.01), with children in the play-based curriculum school having significantly higher scores in play, narrative language and language and lower scores in social disconnection. Implications: Children from low SES areas begin school at risk of failure as skills in play, language and social skills are delayed. The school experience increases children’s skills, with children in the play-based curriculum showing significant improvements in all areas assessed. It is argued that a play-based curriculum meets children’s developmental and learning needs more effectively. More research is needed to replicate these results.
  • Rieffe, C., Oosterveld, P., Meerum Terwogt, M., Mootz, S., Van Leeuwen, E. J. C., & Stockmann, L. (2011). Emotion regulation and internalizing symptoms in children with Autism Spectrum Disorders. Autism, 15(6), 655-670. doi:10.1177/1362361310366571.

    Abstract

    The aim of this study was to examine the unique contribution of two aspects of emotion regulation (awareness and coping) to the development of internalizing problems in 11-year-old high-functioning children with an autism spectrum disorder (HFASD) and a control group, and the moderating effect of group membership on this. The results revealed overlap between the two groups, but also significant differences, suggesting a more fragmented emotion regulation pattern in children with HFASD, especially related to worry and rumination. Moreover, in children with HFASD, symptoms of depression were unrelated to positive mental coping strategies and the conviction that the emotion experience helps in dealing with the problem, suggesting that a positive approach to the problem and its subsequent emotion experience are less effective in the HFASD group.
  • Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Interpersonal synergies. Frontiers in Psychology, 2, 38. doi:10.3389/fpsyg.2011.00038.

    Abstract

    We present the perspective that interpersonal movement coordination results from establishing interpersonal synergies. Interpersonal synergies are higher-order control systems formed by coupling movement system degrees of freedom of two (or more) actors. Characteristic features of synergies identified in studies of intrapersonal coordination – dimensional compression and reciprocal compensation – are revealed in studies of interpersonal coordination that applied the uncontrolled manifold approach and principal component analysis to interpersonal movement tasks. Broader implications of the interpersonal synergy approach for movement science include an expanded notion of mechanism and an emphasis on interaction-dominant dynamics.
  • Roberts, L., & Felser, C. (2011). Plausibility and recovery from garden paths in L2 sentence processing. Applied Psycholinguistics, 32, 299-331. doi:10.1017/S0142716410000421.

    Abstract

    In this study, the influence of plausibility information on the real-time processing of locally ambiguous (“garden path”) sentences in a nonnative language is investigated. Using self-paced reading, we examined how advanced Greek-speaking learners of English and native speaker controls read sentences containing temporary subject–object ambiguities, with the ambiguous noun phrase being either semantically plausible or implausible as the direct object of the immediately preceding verb. Besides providing evidence for incremental interpretation in second language processing, our results indicate that the learners were more strongly influenced by plausibility information than the native speaker controls in their on-line processing of the experimental items. For the second language learners an initially plausible direct object interpretation led to increased reanalysis difficulty in “weak” garden-path sentences where the required reanalysis did not interrupt the current thematic processing domain. No such evidence of on-line recovery was observed, in contrast, for “strong” garden-path sentences that required more substantial revisions of the representation built thus far, suggesting that comprehension breakdown was more likely here.
  • Robotham, L., Sauter, D., Bachoud-Lévi, A.-C., & Trinkler, I. (2011). The impairment of emotion recognition in Huntington’s disease extends to positive emotions. Cortex, 47(7), 880-884. doi:10.1016/j.cortex.2011.02.014.

    Abstract

    Patients with Huntington’s Disease are impaired in the recognition of emotional signals. However, the nature and extent of the impairment is controversial: It has variously been argued to be disgust-specific (Sprengelmeyer et al., 1996; 1997), general for negative emotions (Snowden, et al., 2008), or a consequence of item difficulty (Milders, Crawford, Lamb, & Simpson, 2003). Yet no study to date has included more than one positive stimulus category in emotion recognition tasks. We present a study of 14 Huntington’s patients and 15 control participants performing a forced-choice task with a range of negative and positive non-verbal emotional vocalizations. Participants were found to be impaired in emotion recognition across the emotion categories, including positive emotions such as amusement and sensual pleasure, and negative emotions, such as anger, disgust, and fear. These data complement previous work by demonstrating that impairments are found in the recognition of positive, as well as negative, emotions in Huntington’s disease. Our results point to a global deficit in the recognition of emotional signals in Huntington’s Disease.
  • Roelofs, A., & Piai, V. (2011). Attention demands of spoken word planning: A review. Frontiers in Psychology, 2, 307. doi:10.3389/fpsyg.2011.00307.

  • Roelofs, A., Piai, V., & Garrido Rodriguez, G. (2011). Attentional inhibition in bilingual naming performance: Evidence from delta-plot analyses. Frontiers in Psychology, 2, 184. doi:10.3389/fpsyg.2011.00184.

    Abstract

    It has been argued that inhibition is a mechanism of attentional control in bilingual language performance. Evidence suggests that effects of inhibition are largest in the tail of a response time (RT) distribution in non-linguistic and monolingual performance domains. We examined this for bilingual performance by conducting delta-plot analyses of naming RTs. Dutch-English bilingual speakers named pictures using English while trying to ignore superimposed neutral Xs or Dutch distractor words that were semantically related, unrelated, or translations. The mean RTs revealed semantic, translation, and lexicality effects. The delta plots leveled off with increasing RT, more so when the mean distractor effect was smaller as compared with larger. This suggests that the influence of inhibition is largest toward the distribution tail, corresponding to what is observed in other performance domains. Moreover, the delta plots suggested that more inhibition was applied by high- than low-proficiency individuals in the unrelated than the other distractor conditions. These results support the view that inhibition is a domain-general mechanism that may be optionally engaged depending on the prevailing circumstances.
  • Roelofs, A., Piai, V., & Schriefers, H. (2011). Selective attention and distractor frequency in naming performance: Comment on Dhooge and Hartsuiker (2010). Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1032-1038. doi:10.1037/a0023328.

    Abstract

    E. Dhooge and R. J. Hartsuiker (2010) reported experiments showing that picture naming takes longer with low- than high-frequency distractor words, replicating M. Miozzo and A. Caramazza (2003). In addition, they showed that this distractor-frequency effect disappears when distractors are masked or preexposed. These findings were taken to refute models like WEAVER++ (A. Roelofs, 2003) in which words are selected by competition. However, Dhooge and Hartsuiker do not take into account that according to this model, picture-word interference taps not only into word production but also into attentional processes. Here, the authors indicate that WEAVER++ contains an attentional mechanism that accounts for the distractor-frequency effect (A. Roelofs, 2005). Moreover, the authors demonstrate that the model accounts for the influence of masking and preexposure, and does so in a simpler way than the response exclusion through self-monitoring account advanced by Dhooge and Hartsuiker.
  • Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function. Advance online publication. doi:10.1007/s00429-024-02801-8.

    Abstract

    Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system.
  • Rossano, F., Rakoczy, H., & Tomasello, M. (2011). Young children’s understanding of violations of property rights. Cognition, 121, 219-227. doi:10.1016/j.cognition.2011.06.007.

    Abstract

    The present work investigated young children’s normative understanding of property rights using a novel methodology. Two- and 3-year-old children participated in situations in which an actor (1) took possession of an object for himself, and (2) attempted to throw it away. What varied was who owned the object: the actor himself, the child subject, or a third party. We found that while both 2- and 3-year-old children protested frequently when their own object was involved, only 3-year-old children protested more when a third party’s object was involved than when the actor was acting on his own object. This suggests that at the latest around 3 years of age young children begin to understand the normative dimensions of property rights.
  • Rossi, S., Jürgenson, I. B., Hanulikova, A., Telkemeyer, S., Wartenburger, I., & Obrig, H. (2011). Implicit processing of phonotactic cues: Evidence from electrophysiological and vascular responses. Journal of Cognitive Neuroscience, 23, 1752-1764. doi:10.1162/jocn.2010.21547.

    Abstract

    Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics. Phonotactics defines possible combinations of phonemes within syllables or words in a given language. The present study aimed at investigating both temporal and topographical aspects of the neuronal correlates of phonotactic processing by simultaneously applying event-related brain potentials (ERPs) and functional near-infrared spectroscopy (fNIRS). Pseudowords, either phonotactically legal or illegal with respect to the participants' native language, were acoustically presented to passively listening adult native German speakers. ERPs showed a larger N400 effect for phonotactically legal compared to illegal pseudowords, suggesting stronger lexical activation mechanisms in phonotactically legal material. fNIRS revealed a left hemispheric network including fronto-temporal regions with greater response to phonotactically legal pseudowords than to illegal pseudowords. This confirms earlier hypotheses on a left hemispheric dominance of phonotactic processing most likely due to the fact that phonotactics is related to phonological processing and represents a segmental feature of language comprehension. These segmental linguistic properties of a stimulus are predominantly processed in the left hemisphere. Thus, our study provides first insights into temporal and topographical characteristics of phonotactic processing mechanisms in a passive listening task. Differential brain responses between known and unknown phonotactic rules thus supply evidence for an implicit use of phonotactic cues to guide lexical activation mechanisms.
  • Rowland, C. F., & Noble, C. L. (2011). The role of syntactic structure in children's sentence comprehension: Evidence from the dative. Language Learning and Development, 7(1), 55-75. doi:10.1080/15475441003769411.

    Abstract

    Research has demonstrated that young children quickly acquire knowledge of how the structure of their language encodes meaning. However, this work focused on structurally simple transitives. The present studies investigate children's comprehension of the double object dative (e.g., I gave him the box) and the prepositional dative (e.g., I gave the box to him). In Study 1, 3- and 4-year-olds correctly preferred a transfer event reading of prepositional datives with novel verbs (e.g., I'm glorping the rabbit to the duck) but were unable to interpret double object datives (e.g., I'm glorping the duck the rabbit). In Studies 2 and 3, they were able to interpret both dative types when the nouns referring to the theme and recipient were canonically marked (Study 2; I'm glorping the rabbit to Duck) and, to a lesser extent, when they were distinctively but noncanonically marked (Study 3: I'm glorping rabbit to the Duck). Overall, the results suggest that English children have some verb-general knowledge of how dative syntax encodes meaning by 3 years of age, but successful comprehension may require the presence of additional surface cues.
  • Rubianes, M., Drijvers, L., Muñoz, F., Jiménez-Ortega, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2024). The self-reference effect can modulate language syntactic processing even without explicit awareness: An electroencephalography study. Journal of Cognitive Neuroscience, 36(3), 460-474. doi:10.1162/jocn_a_02104.

    Abstract

    Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150–550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
  • Rubio-Fernández, P. (2024). Cultural evolutionary pragmatics: Investigating the codevelopment and coevolution of language and social cognition. Psychological Review, 131(1), 18-35. doi:10.1037/rev0000423.

    Abstract

    Language and social cognition come together in communication, but their relation has been intensely contested. Here, I argue that these two distinctively human abilities are connected in a positive feedback loop, whereby the development of one cognitive skill boosts the development of the other. More specifically, I hypothesize that language and social cognition codevelop in ontogeny and coevolve in diachrony through the acquisition, mature use, and cultural evolution of reference systems (e.g., demonstratives: “this” vs. “that”; articles: “a” vs. “the”; pronouns: “I” vs. “you”). I propose to study the connection between reference systems and communicative social cognition across three parallel timescales—language acquisition, language use, and language change, as a new research program for cultural evolutionary pragmatics. Within that framework, I discuss the coevolution of language and communicative social cognition as cognitive gadgets, and introduce a new methodological approach to study how universals and cross-linguistic differences in reference systems may result in different developmental pathways to human social cognition.
  • De Ruiter, L. E. (2011). Polynomial modeling of child and adult intonation in German spontaneous speech. Language and Speech, 54, 199-223. doi:10.1177/0023830910397495.

    Abstract

    In a data set of 291 spontaneous utterances from German 5-year-olds, 7-year-olds and adults, nuclear pitch contours were labeled manually using the GToBI annotation system. Ten different contour types were identified. The fundamental frequency (F0) of these contours was modeled using third-order orthogonal polynomials, following an approach similar to the one Grabe, Kochanski, and Coleman (2007) used for English. Statistical analyses showed that all but one contour pair differed significantly from each other in at least one of the four coefficients. This demonstrates that polynomial modeling can provide quantitative empirical support for phonological labels in unscripted speech, and for languages other than English. Furthermore, polynomial expressions can be used to derive the alignment of tonal targets relative to the syllable structure, making polynomial modeling more accessible to the phonological research community. Finally, within-contour comparisons of the three age groups showed that for children, the magnitude of the higher coefficients is lower, suggesting that they are not yet able to modulate their pitch as fast as adults.
  • Ruiter, M. B., Kolk, H. H. J., Rietveld, T. C. M., Dijkstra, N., & Lotgering, E. (2011). Towards a quantitative measure of verbal effectiveness and efficiency in the Amsterdam-Nijmegen Everyday Language Test (ANELT). Aphasiology, 25, 961-975. doi:10.1080/02687038.2011.569892.

    Abstract

    Background: A well-known test for measuring verbal adequacy (i.e., verbal effectiveness) in mildly impaired aphasic speakers is the Amsterdam-Nijmegen Everyday Language Test (ANELT; Blomert, Koster, & Kean, 1995). Aphasia therapy practitioners score verbal adequacy qualitatively when they administer the ANELT to their aphasic clients in clinical practice. Aims: The current study investigated whether the construct validity of the ANELT could be further improved by substituting the qualitative score with a quantitative one, which takes the number of essential information units into account. The new quantitative measure could have the following advantages: the ability to derive a quantitative score of verbal efficiency, as well as improved sensitivity to detect changes in functional communication over time. Methods & Procedures: The current study systematically compared a new quantitative measure of verbal effectiveness with the current ANELT Comprehensibility scale, which is based on qualitative judgements. A total of 30 speakers of Dutch participated: 20 non-aphasic speakers and 10 aphasic patients with predominantly expressive disturbances. Outcomes & Results: Although our findings need to be replicated in a larger group of aphasic speakers, the main results suggest that the new quantitative measure of verbal effectiveness is more sensitive in detecting change in verbal effectiveness over time. What is more, it can be used to derive a measure of verbal efficiency. Conclusions: The fact that both verbal effectiveness and verbal efficiency can be reliably as well as validly measured in the ANELT is of relevance to clinicians. It allows them to obtain a more complete picture of aphasic speakers' functional communication skills.
  • Sadakata, M., & Sekiyama, K. (2011). Enhanced perception of various linguistic features by musicians: A cross-linguistic study. Acta Psychologica, 138, 1-10. doi:10.1016/j.actpsy.2011.03.007.

    Abstract

    Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance than with identification performance. The enhanced perception was observed not only with respect to L2, but also L1. It was most pronounced when tested with Japanese consonant timing. These findings suggest the following: 1) musicians exhibit enhanced early acoustical analysis of speech, 2) musical training does not equally enhance the perception of all acoustic features automatically, and 3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing.

    Research Highlights: We compared the perception of L1 and L2 speech by musicians and non-musicians. Discrimination and identification experiments examined perception of consonant timing, quality of Japanese consonants and of Dutch vowels. We compared results for Japanese native musicians and non-musicians as well as Dutch native musicians and non-musicians. Musicians demonstrated enhanced perception for both L1 and L2. The most pronounced effect was found for Japanese consonant timing.
  • Salomo, D., Graf, E., Lieven, E., & Tomasello, M. (2011). The role of perceptual availability and discourse context in young children’s question answering. Journal of Child Language, 38, 918-931. doi:10.1017/S0305000910000395.

    Abstract

    Three- and four-year-old children were asked predicate-focus questions ('What's X doing?') about a scene in which an agent performed an action on a patient. We varied: (i) whether (or not) the preceding discourse context, which established the patient as given information, was available for the questioner; and (ii) whether (or not) the patient was perceptually available to the questioner when she asked the question. The main finding in our study differs from those of previous studies since it suggests that children are sensitive to the perceptual context at an earlier age than they are to previous discourse context if they need to take the questioner's perspective into account. Our finding indicates that, while children are in principle sensitive to both factors, young children rely on perceptual availability when a conflict arises.
  • Sánchez-Mora, C., Ribasés, M., Casas, M., Bayés, M., Bosch, R., Fernàndez-Castillo, N., Brunso, L., Jacobsen, K. K., Landaas, E. T., Lundervold, A. J., Gross-Lesch, S., Kreiker, S., Jacob, C. P., Lesch, K.-P., Buitelaar, J. K., Hoogman, M., Kiemeney, L. A., Kooij, J. S., Mick, E., Asherson, P. and 7 moreSánchez-Mora, C., Ribasés, M., Casas, M., Bayés, M., Bosch, R., Fernàndez-Castillo, N., Brunso, L., Jacobsen, K. K., Landaas, E. T., Lundervold, A. J., Gross-Lesch, S., Kreiker, S., Jacob, C. P., Lesch, K.-P., Buitelaar, J. K., Hoogman, M., Kiemeney, L. A., Kooij, J. S., Mick, E., Asherson, P., Faraone, S. V., Franke, B., Reif, A., Johansson, S., Haavik, J., Ramos-Quiroga, J. A., & Cormand, B. (2011). Exploring DRD4 and its interaction with SLC6A3 as possible risk factors for adult ADHD: A meta-analysis in four European populations. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 156, 600-612. doi:10.1002/ajmg.b.31202.

    Abstract

    Attention-deficit hyperactivity disorder (ADHD) is a common behavioral disorder affecting about 4–8% of children. ADHD persists into adulthood in around 65% of cases, either as the full condition or in partial remission with persistence of symptoms. Pharmacological, animal and molecular genetic studies support a role for genes of the dopaminergic system in ADHD due to its essential role in motor control, cognition, emotion, and reward. Based on these data, we analyzed two functional polymorphisms within the DRD4 gene (120 bp duplication in the promoter and 48 bp VNTR in exon 3) in a clinical sample of 1,608 adult ADHD patients and 2,352 controls of Caucasian origin from four European countries that had been recruited in the context of the International Multicentre persistent ADHD CollaboraTion (IMpACT). Single-marker analysis of the two polymorphisms did not reveal association with ADHD. In contrast, multiple-marker meta-analysis showed a nominal association (P  = 0.02) of the L-4R haplotype (dup120bp-48bpVNTR) with adulthood ADHD, especially with the combined clinical subtype. Since we previously described association between adulthood ADHD and the dopamine transporter SLC6A3 9R-6R haplotype (3′UTR VNTR-intron 8 VNTR) in the same dataset, we further tested for gene × gene interaction between DRD4 and SLC6A3. However, we detected no epistatic effects but our results rather suggest additive effects of the DRD4 risk haplotype and the SLC6A3 gene.
  • Sauter, D., Le Guen, O., & Haun, D. B. M. (2011). Categorical perception of emotional expressions does not require lexical categories. Emotion, 11, 1479-1483. doi:10.1037/a0025336.

    Abstract

    Does our perception of others’ emotional signals depend on the language we speak or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.
  • Schaefer, R. S., Farquhar, J., Blokland, Y., Sadakata, M., & Desain, P. (2011). Name that tune: Decoding music from the listening brain. NeuroImage, 56, 843-849. doi:10.1016/j.neuroimage.2010.05.084.

    Abstract

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.

    Additional information

    supp_f.pdf
  • Schapper, A., & San Roque, L. (2011). Demonstratives and non-embedded nominalisations in three Papuan languages of the Timor-Alor-Pantar family. Studies in Language, 35, 380-408. doi:10.1075/sl.35.2.05sch.

    Abstract

    This paper explores the use of demonstratives in non-embedded clausal nominalisations. We present data and analysis from three Papuan languages of the Timor-Alor-Pantar family in south-east Indonesia. In these languages, demonstratives can apply to the clausal as well as to the nominal domain, contributing contrastive semantic content in assertive stance-taking and attention-directing utterances. In the Timor-Alor-Pantar constructions, meanings that are to do with spatial and discourse locations at the participant level apply to spatial, temporal and mental locations at the state or event level.
  • Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2011). Neuronal dynamics underlying high- and low- frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572-583. doi:10.1016/j.neuron.2010.11.044.

    Abstract

    Work on animals indicates that BOLD is preferentially sensitive to local field potentials, and that it correlates most strongly with gamma band neuronal synchronization. Here we investigate how the BOLD signal in humans performing a cognitive task is related to neuronal synchronization across different frequency bands. We simultaneously recorded EEG and BOLD while subjects engaged in a visual attention task known to induce sustained changes in neuronal synchronization across a wide range of frequencies. Trial-by-trial BOLD fluctuations correlated positively with trial-by-trial fluctuations in high-EEG gamma power (60–80 Hz) and negatively with alpha and beta power. Gamma power on the one hand, and alpha and beta power on the other hand, independently contributed to explaining BOLD variance. These results indicate that the BOLD-gamma coupling observed in animals can be extrapolated to humans performing a task and that neuronal dynamics underlying high- and low-frequency synchronization contribute independently to the BOLD signal.

    Additional information

    mmc1.pdf
  • Schijven, D., Soheili-Nezhad, S., Fisher, S. E., & Francks, C. (2024). Exome-wide analysis implicates rare protein-altering variants in human handedness. Nature Communications, 15: 2632. doi:10.1038/s41467-024-46277-w.

    Abstract

    Handedness is a manifestation of brain hemispheric specialization. Left-handedness occurs at increased rates in neurodevelopmental disorders. Genome-wide association studies have identified common genetic effects on handedness or brain asymmetry, which mostly involve variants outside protein-coding regions and may affect gene expression. Implicated genes include several that encode tubulins (microtubule components) or microtubule-associated proteins. Here we examine whether left-handedness is also influenced by rare coding variants (frequencies ≤ 1%), using exome data from 38,043 left-handed and 313,271 right-handed individuals from the UK Biobank. The beta-tubulin gene TUBB4B shows exome-wide significant association, with a rate of rare coding variants 2.7 times higher in left-handers than right-handers. The TUBB4B variants are mostly heterozygous missense changes, but include two frameshifts found only in left-handers. Other TUBB4B variants have been linked to sensorineural and/or ciliopathic disorders, but not the variants found here. Among genes previously implicated in autism or schizophrenia by exome screening, DSCAM and FOXP1 show evidence for rare coding variant association with left-handedness. The exome-wide heritability of left-handedness due to rare coding variants was 0.91%. This study reveals a role for rare, protein-altering variants in left-handedness, providing further evidence for the involvement of microtubules and disorder-relevant genes.
  • Schimke, S. (2011). Variable verb placement in second-language German and French: Evidence from production and elicited imitation of finite and nonfinite negated sentences. Applied Psycholinguistics, 32, 635-685. doi:10.1017/S0142716411000014.

    Abstract

    This study examines the placement of finite and nonfinite lexical verbs and finite light verbs (LVs) in semispontaneous production and elicited imitation of adult beginning learners of German and French. Theories assuming nonnativelike syntactic representations at early stages of development predict variable placement of lexical verbs and consistent placement of LVs, whereas theories assuming nativelike syntax predict variability for nonfinite verbs and consistent placement of all finite verbs. The results show that beginning learners of German have consistent preferences only for LVs. More advanced learners of German and learners of French produce and imitate finite verbs in more variable positions than nonfinite verbs. This is argued to support a structure-building view of second-language development.
  • Schoffelen, J.-M., & Gross, J. (2011). Improving the interpretability of all-to-all pairwise source connectivity analysis in MEG with nonhomogeneous smoothing. Human brain mapping, 32, 426-437. doi:10.1002/hbm.21031.

    Abstract

    Studying the interaction between brain regions is important to increase our understanding of brain function. Magnetoencephalography (MEG) is well suited to investigate brain connectivity, because it provides measurements of activity of the whole brain at very high temporal resolution. Typically, brain activity is reconstructed from the sensor recordings with an inverse method such as a beamformer, and subsequently a connectivity metric is estimated between predefined reference regions-of-interest (ROIs) and the rest of the source space. Unfortunately, this approach relies on a robust estimate of the relevant reference regions and on a robust estimate of the activity in those reference regions, and is not generally applicable to a wide variety of cognitive paradigms. Here, we investigate the possibility to perform all-to-all pairwise connectivity analysis, thus removing the need to define ROIs. Particularly, we evaluate the effect of nonhomogeneous spatial smoothing of differential connectivity maps. This approach is inspired by the fact that the spatial resolution of source reconstructions is typically spatially nonhomogeneous. We use this property to reduce the spatial noise in the cerebro-cerebral connectivity map, thus improving interpretability. Using extensive data simulations we show a superior detection rate and a substantial reduction in the number of spurious connections. We conclude that nonhomogeneous spatial smoothing of cerebro-cerebral connectivity maps could be an important improvement of the existing analysis tools to study neuronal interactions noninvasively.
  • Schoffelen, J.-M., Poort, J., Oostenveld, R., & Fries, P. (2011). Selective movement preparation is subserved by selective increases in corticomuscular gamma-band coherence. Journal of Neuroscience, 31, 6750-6758. doi:10.1523/JNEUROSCI.4882-10.2011.

    Abstract

    Local groups of neurons engaged in a cognitive task often exhibit rhythmically synchronized activity in the gamma band, a phenomenon that likely enhances their impact on downstream areas. The efficacy of neuronal interactions may be enhanced further by interareal synchronization of these local rhythms, establishing mutually well timed fluctuations in neuronal excitability. This notion suggests that long-range synchronization is enhanced selectively for connections that are behaviorally relevant. We tested this prediction in the human motor system, assessing activity from bilateral motor cortices with magnetoencephalography and corresponding spinal activity through electromyography of bilateral hand muscles. A bimanual isometric wrist extension task engaged the two motor cortices simultaneously into interactions and coherence with their respective corresponding contralateral hand muscles. One of the hands was cued before each trial as the response hand and had to be extended further to report an unpredictable visual go cue. We found that, during the isometric hold phase, corticomuscular coherence was enhanced, spatially selective for the corticospinal connection that was effectuating the subsequent motor response. This effect was spectrally selective in the low gamma-frequency band (40–47 Hz) and was observed in the absence of changes in motor output or changes in local cortical gamma-band synchronization. These findings indicate that, in the anatomical connections between the cortex and the spinal cord, gamma-band synchronization is a mechanism that may facilitate behaviorally relevant interactions between these distant neuronal groups.
  • Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2011). Acoustic reduction in conversational Dutch: A quantitative analysis based on automatically generated segmental transcriptions [Letter to the editor]. Journal of Phonetics, 39(1), 96-109. doi:10.1016/j.wocn.2010.11.006.

    Abstract

    In spontaneous, conversational speech, words are often reduced compared to their citation forms, such that a word like yesterday may sound like [ˈjɛʃei]. The present chapter investigates such acoustic reduction. The study of reduction needs large corpora that are transcribed phonetically. The first part of this chapter describes an automatic transcription procedure used to obtain such a large phonetically transcribed corpus of Dutch spontaneous dialogues, which is subsequently used for the investigation of acoustic reduction. First, the orthographic transcriptions were adapted for automatic processing. Next, the phonetic transcription of the corpus was created by means of a forced alignment using a lexicon with multiple pronunciation variants per word. These variants were generated by applying phonological and reduction rules to the canonical phonetic transcriptions of the words. The second part of this chapter reports the results of a quantitative analysis of reduction in the corpus on the basis of the generated transcriptions and gives an inventory of segmental reductions in standard Dutch. Overall, we found that reduction is more pervasive in spontaneous Dutch than previously documented.
  • Segaert, K., Menenti, L., Weber, K., & Hagoort, P. (2011). A paradox of syntactic priming: Why response tendencies show priming for passives, and response latencies show priming for actives. PLoS One, 6(10), e24209. doi:10.1371/journal.pone.0024209.

    Abstract

    Speakers tend to repeat syntactic structures across sentences, a phenomenon called syntactic priming. Although it has been suggested that repeating syntactic structures should result in speeded responses, previous research has focused on effects in response tendencies. We investigated syntactic priming effects simultaneously in response tendencies and response latencies for active and passive transitive sentences in a picture description task. In Experiment 1, there were priming effects in response tendencies for passives and in response latencies for actives. However, when participants' pre-existing preference for actives was altered in Experiment 2, syntactic priming occurred for both actives and passives in response tendencies as well as in response latencies. This is the first investigation of the effects of structure frequency on both response tendencies and latencies in syntactic priming. We discuss the implications of these data for current theories of syntactic processing.

    Additional information

    Segaert_2011_Supporting_Info.doc
  • Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2024). Attention drives visual processing and audiovisual integration during multimodal communication. The Journal of Neuroscience, 44(10): e0870232023. doi:10.1523/JNEUROSCI.0870-23.2023.

    Abstract

    During communication in real-life settings, our brain often needs to integrate auditory and visual information, and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging (RIFT) and magnetoencephalography (MEG) to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing non-linear signal interactions, was enhanced in left frontotemporal and frontal regions. Focusing on LIFG (Left Inferior Frontal Gyrus), this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.

    Additional information

    link to preprint
  • Sekine, K. (2011). The role of gesture in the language production of preschool children. Gesture, 11(2), 148-173. doi:10.1075/gest.11.2.03sek.

    Abstract

    The present study investigates the functions of gestures in preschoolers’ descriptions of activities. Specifically, utilizing McNeill’s growth point theory (1992), I examine how gestures contribute to the creation of contrast from the immediate context in the spoken discourse of children. When preschool children describe an activity consisting of multiple actions, like playing on a slide, they often begin with the central action (e.g., sliding-down) instead of with the beginning of the activity sequence (e.g., climbing-up). This study indicates that, in descriptions of activities, gestures may be among the cues the speaker uses for forming a next idea or for repairing the temporal order of the activities described. Gestures may function for the speaker as visual feedback and contribute to the process of utterance formation and provide an index for assessing language development.
  • Sekine, K., & Özyürek, A. (2024). Children benefit from gestures to understand degraded speech but to a lesser extent than adults. Frontiers in Psychology, 14: 1305562. doi:10.3389/fpsyg.2023.1305562.

    Abstract

    The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children’s multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.

    Additional information

    supplemental material
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (1991). Network models to describe the Kilivila classifier system. Oceanic Linguistics, 30, 131-155. Retrieved from http://www.jstor.org/stable/3623085.
  • Senft, G. (2011). Talking about color and taste on the Trobriand Islands: A diachronic study. The Senses & Society, 6(1), 48-56. doi:10.2752/174589311X12893982233713.

    Abstract

    How stable is the lexicon for perceptual experiences? This article presents results on how the Trobriand Islanders of Papua New Guinea talk about color and taste and whether this has changed over the years. Comparing the results of research on color terms conducted in 1983 with data collected in 2008 revealed that many English color terms have been integrated into the Kilivila lexicon. Members of the younger generation with school education have been the agents of this language change. However, today not all English color terms are produced correctly according to English lexical semantics. The traditional Kilivila color terms bwabwau ‘black’, pupwakau ‘white’, and bweyani ‘red’ are not affected by this change, probably because of the cultural importance of the art of coloring canoes, big yam houses, and bodies. Comparing the 1983 data on taste vocabulary with the results of my 2008 research revealed no substantial change. The conservatism of the Trobriand Islanders' taste vocabulary may be related to the conservatism of their palate. Moreover, they are more interested in displaying and exchanging food than in savoring it. Although English color terms are integrated into the lexicon, Kilivila provides evidence that traditional terms used for talking about color and terms used to refer to tastes have remained stable over time.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1983). [Review of the book The inheritance of presupposition by J. Dinsmore]. Journal of Semantics, 2(3/4), 356-358. doi:10.1093/semant/2.3-4.356.
  • Seuren, P. A. M. (1983). [Review of the book Thirty million theories of grammar by J. McCawley]. Journal of Semantics, 2(3/4), 325-341. doi:10.1093/semant/2.3-4.325.
  • Seuren, P. A. M. (2011). How I remember Evert Beth [In memoriam]. Synthese, 179(2), 207-210. doi:10.1007/s11229-010-9777-4.
  • Seuren, P. A. M. (1983). In memoriam Jan Voorhoeve. Bijdragen tot de Taal-, Land- en Volkenkunde, 139(4), 403-406.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1991). Grammatika als algorithme: Rekenen met taal. Koninklijke Nederlandse Akademie van Wetenschappen. Mededelingen van de Afdeling Letterkunde, Nieuwe Reeks, 54(2), 25-63.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M. (1983). Overwegingen bij de spelling van het Sranan en een spellingsvoorstel. OSO, 2(1), 67-81.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
  • Shan, W., Zhang, Y., Zhao, J., Wu, S., Zhao, L., Ip, P., Tucker, J. D., & Jiang, F. (2024). Positive parent–child interactions moderate certain maltreatment effects on psychosocial well-being in 6-year-old children. Pediatric Research, 95, 802-808. doi:10.1038/s41390-023-02842-5.

    Abstract

    Background: Positive parental interactions may buffer maltreated children from poor psychosocial outcomes. The study aims to evaluate the associations between various types of maltreatment and psychosocial outcomes in early childhood, and examine the moderating effect of positive parent-child interactions on them.

    Methods: Data were from a representative Chinese 6-year-old children sample (n = 17,088). Caregivers reported the history of child maltreatment perpetrated by any individuals, completed the Strengths and Difficulties Questionnaire as a proxy for psychosocial well-being, and reported the frequency of their interactions with children by the Chinese Parent-Child Interaction Scale.

    Results: Physical abuse, emotional abuse, neglect, and sexual abuse were all associated with higher odds of psychosocial problems (aOR = 1.90 [95% CI: 1.57-2.29], aOR = 1.92 [95% CI: 1.75-2.10], aOR = 1.64 [95% CI: 1.17-2.30], aOR = 2.03 [95% CI: 1.30-3.17]). Positive parent-child interactions were associated with lower odds of psychosocial problems after accounting for different types of maltreatment. The moderating effect of frequent parent-child interactions was found only in the association between occasional only physical abuse and psychosocial outcomes (interaction term: aOR = 0.34, 95% CI: 0.15-0.77).

    Conclusions: Maltreatment and positive parent-child interactions have impacts on psychosocial well-being in early childhood. Positive parent-child interactions could only buffer the adverse effect of occasional physical abuse on psychosocial outcomes. More frequent parent-child interactions may be an important intervention opportunity among some children.

    Impact: It provides the first data on the prevalence of different single types and combinations of maltreatment in early childhood in Shanghai, China by drawing on a city-level population-representative sample. It adds to evidence that different forms and degrees of maltreatment were all associated with a higher risk of psychosocial problems in early childhood. Among them, sexual abuse posed the highest risk, followed by emotional abuse. It innovatively found that higher frequencies of parent-child interactions may provide buffering effects only to children who are exposed to occasional physical abuse. It provides a potential intervention opportunity, especially for physically abused children.
  • Shayan, S., Ozturk, O., & Sicoli, M. A. (2011). The thickness of pitch: Crossmodal metaphors in Farsi, Turkish and Zapotec. The Senses & Society, 6(1), 96-105. doi:10.2752/174589311X12893982233911.

    Abstract

    Speakers use vocabulary for spatial verticality and size to describe pitch. A high–low contrast is common to many languages, but others show contrasts like thick–thin and big–small. We consider uses of thick for low pitch and thin for high pitch in three languages: Farsi, Turkish, and Zapotec. We ask how metaphors for pitch structure the sound space. In a language like English, high applies to both high-pitched and high-amplitude (loud) sounds; low applies to both low-pitched and low-amplitude (quiet) sounds. Farsi, Turkish, and Zapotec organize sound in a different way: thin applies to high pitch and low amplitude, and thick to low pitch and high amplitude. We claim that these metaphors have their sources in life experiences. Musical instruments show co-occurrences of higher pitch with thinner, smaller objects and lower pitch with thicker, larger objects. On the other hand, bodily experience can ground the high–low metaphor: a raised larynx produces higher pitch and a lowered larynx lower pitch, and low-pitched sounds resonate in the chest, a lower place than high-pitched sounds. While both patterns are available from life experience, linguistic experience privileges one over the other, which results in differential structuring of the multiple dimensions of sound.
  • Silverstein, P., Bergmann, C., & Syed, M. (Eds.). (2024). Open science and metascience in developmental psychology [Special Issue]. Infant and Child Development, 33(1).
  • Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Constraints on the processes responsible for the extrinsic normalization of vowels. Attention, Perception & Psychophysics, 73, 1195-1215. doi:10.3758/s13414-011-0096-8.

    Abstract

    Listeners tune in to talkers’ vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the Long Term Average Spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. Previous findings were replicated with first formant (F1) manipulations of speech. Targets on a [pIt]-[pEt] (low-high F1) continuum were labeled as [pIt] more after high-F1 than after low-F1 precursors. Spectrally-rotated nonspeech versions of these materials produced similar normalization. None occurred, however, with nonspeech stimuli that were less speech-like, even though precursor-target LTAS relations were equivalent to those used earlier. Additional experiments investigated the roles of pitch movement, amplitude variation, formant location, and the stimuli's perceived similarity to speech. It appears that normalization is not restricted to speech, but that the nature of the preceding sounds does matter. Extrinsic normalization of vowels is due at least in part to an auditory process which may require familiarity with the spectro-temporal characteristics of speech.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.

    Abstract

    This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪ papu] or [ɛ papu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛ papu] than from [ɪɛpapu] to [ɪ papu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners' perception of vowels differs depending on the speaker's vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners' abilities to normalize for speaker-vocal-tract properties are for an important part the result of a process that influences representations of speech sounds early in the speech processing stream.
  • Skoruppa, K., Cristia, A., Peperkamp, S., & Seidl, A. (2011). English-learning infants' perception of word stress patterns [JASA Express Letter]. Journal of the Acoustical Society of America, 130(1), EL50-EL55. doi:10.1121/1.3590169.

    Abstract

    Adult speakers of different free stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between initial stress and final stress among lists of Spanish-spoken disyllabic nonwords that were segmentally varied (e.g. [ˈnila, ˈtuli] vs [luˈta, puˈki]). This is evidence that English-learning infants are sensitive to lexical stress patterns, instantiated primarily by suprasegmental cues, during the second half of the first year of life.
  • Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics. Advance online publication. doi:10.1075/sll.00084.slo.
  • Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkanen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
  • Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.

    Abstract

    Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging.
  • De Sousa, H. (2011). Changes in the language of perception in Cantonese. The Senses & Society, 6(1), 38-47. doi:10.2752/174589311X12893982233678.

    Abstract

    The way a language encodes sensory experiences changes over time, and often this correlates with other changes in the society. There are noticeable differences in the language of perception between older and younger speakers of Cantonese in Hong Kong and Macau. Younger speakers make finer distinctions in the distal senses, but have less knowledge of the finer categories of the proximal senses than older speakers. The difference in the language of perception between older and younger speakers probably reflects the rapid changes that happened in Hong Kong and Macau in the last fifty years, from an underdeveloped and less literate society to a developed and highly literate society. In addition to the increase in literacy, the education system has also undergone significant Westernization. Western-style education systems have most likely created finer categorizations in the distal senses. At the same time, the traditional finer distinctions of the proximal senses have become less salient: as the society became more urbanized and sanitized, people have had fewer opportunities to experience the variety of olfactory sensations experienced by their ancestors. This case study, investigating interactions between socioeconomic 'development' and the elaboration of the senses, hopefully contributes to the study of the ineffability of the senses.
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves, and a morality that focuses on the positioning of actors as they keep their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself have to be named.
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. PNAS, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (2011). Languages in contact: An exploration of stability and change in the Solomon Islands. Oceanic Linguistics, 50(2), 312-337.

    Abstract

    The Papuan-Oceanic world has long been considered a hotbed of contact-induced linguistic change, and there have been a number of studies of deep linguistic influence between Papuan and Oceanic languages (like those by Thurston and Ross). This paper assesses the degree and type of contact-induced language change in the Solomon Islands, between the four Papuan languages—Bilua (spoken on Vella Lavella, Western Province), Touo (spoken on southern Rendova, Western Province), Savosavo (spoken on Savo Island, Central Province), and Lavukaleve (spoken in the Russell Islands, Central Province)—and their Oceanic neighbors. First, a claim is made for a degree of cultural homogeneity for Papuan and Oceanic-speaking populations within the Solomons. Second, lexical and grammatical borrowing are considered in turn, in an attempt to identify which elements in each of the four Papuan languages may have an origin in Oceanic languages—and indeed which elements in Oceanic languages may have their origin in Papuan languages. Finally, an assessment is made of the degrees of stability versus change in the Papuan and Oceanic languages of the Solomon Islands.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it is unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Thiebaut de Schotten, M., Dell'Acqua, F., Forkel, S. J., Simmons, A., Vergani, F., Murphy, D. G. M., & Catani, M. (2011). A lateralized brain network for visuospatial attention. Nature Neuroscience, 14, 1245-1246. doi:10.1038/nn.2905.

    Abstract

    Right hemisphere dominance for visuospatial attention is characteristic of most humans, but its anatomical basis remains unknown. We report the first evidence in humans for a larger parieto-frontal network in the right than left hemisphere, and a significant correlation between the degree of anatomical lateralization and asymmetry of performance on visuospatial tasks. Our results suggest that hemispheric specialization is associated with an unbalanced speed of visuospatial processing.

  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Torreira, F., & Ernestus, M. (2011). Realization of voiceless stops and vowels in conversational French and Spanish. Laboratory Phonology, 2(2), 331-353. doi:10.1515/LABPHON.2011.012.

    Abstract

    The present study compares the realization of intervocalic voiceless stops and vowels surrounded by voiceless stops in conversational Spanish and French. Our data reveal significant differences in how these segments are realized in each language. Spanish voiceless stops tend to have shorter stop closures, display incomplete closures more often, and exhibit more voicing than French voiceless stops. As for vowels, more cases of complete devoicing and greater degrees of partial devoicing were found in French than in Spanish. Moreover, all French vowel types exhibit significantly lower F1 values than their Spanish counterparts. These findings indicate that the extent of reduction that a segment type can undergo in conversational speech can vary significantly across languages. Language differences in coarticulatory strategies and “base-of-articulation” are discussed as possible causes of our observations.
  • Torreira, F., & Ernestus, M. (2011). Vowel elision in casual French: The case of vowel /e/ in the word c’était. Journal of Phonetics, 39(1), 50-58. doi:10.1016/j.wocn.2010.11.003.

    Abstract

    This study investigates the reduction of vowel /e/ in the French word c’était /setε/ ‘it was’. This reduction phenomenon appeared to be highly frequent, as more than half of the occurrences of this word in a corpus of casual French contained few or no acoustic traces of a vowel between [s] and [t]. All our durational analyses clearly supported a categorical absence of vowel /e/ in a subset of c’était tokens. This interpretation was also supported by our finding that the occurrence of complete elision and [e] duration in non-elision tokens were conditioned by different factors. However, spectral measures were consistent with the possibility that a highly reduced /e/ vowel is still present in elision tokens in spite of the durational evidence for categorical elision. We discuss how these findings can be reconciled, and conclude that acoustic analysis of uncontrolled materials can provide valuable information about the mechanisms underlying reduction phenomena in casual speech.
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or different across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

  • Tufvesson, S. (2011). Analogy-making in the Semai sensory world. The Senses & Society, 6(1), 86-95. doi:10.2752/174589311X12893982233876.

    Abstract

    In the interplay between language, culture, and perception, iconicity structures our representations of what we experience. By examining secondary iconicity in sensory vocabulary, this study draws attention to diagrammatic qualities in human interaction with, and representation of, the sensory world. In Semai (Mon-Khmer, Aslian), spoken on Peninsular Malaysia, sensory experiences are encoded by expressives. Expressives display a diagrammatic iconic structure whereby related sensory experiences receive related linguistic forms. Through this type of form-meaning mapping, gradient relationships in the perceptual world receive gradient linguistic representations. Form-meaning mapping such as this enables speakers to categorize sensory events into types and subtypes of perceptions, and provide sensory specifics of various kinds. This study illustrates how a diagrammatic iconic structure within sensory vocabulary creates networks of relational sensory knowledge. Through analogy, speakers draw on this knowledge to comprehend sensory referents and create new unconventional forms, which are easily understood by other members of the community. Analogy-making such as this allows speakers to capture fine-grained differences between sensory events, and effectively guide each other through the Semai sensory landscape.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). Perception of intrusive /r/ in English by native, cross-language and cross-dialect listeners. Journal of the Acoustical Society of America, 130, 1643-1652. doi:10.1121/1.3619793.

    Abstract

    In sequences such as law and order, speakers of British English often insert /r/ between law and and. Acoustic analyses revealed such “intrusive” /r/ to be significantly shorter than canonical /r/. In a 2AFC experiment, native listeners heard British English sentences in which /r/ duration was manipulated across a word boundary [e.g., saw (r)ice], and orthographic and semantic factors were varied. These listeners responded categorically on the basis of acoustic evidence for /r/ alone, reporting ice after short /r/s, rice after long /r/s; orthographic and semantic factors had no effect. Dutch listeners proficient in English who heard the same materials relied less on durational cues than the native listeners, and were affected by both orthography and semantic bias. American English listeners produced intermediate responses to the same materials, being sensitive to duration (less so than native, more so than Dutch listeners), and to orthography (less so than the Dutch), but insensitive to the semantic manipulation. Listeners from language communities without common use of intrusive /r/ may thus interpret intrusive /r/ as canonical /r/, with a language difference increasing this propensity more than a dialect difference. Native listeners, however, efficiently distinguish intrusive from canonical /r/ by exploiting the relevant acoustic variation.
  • De Vaan, L., Ernestus, M., & Schreuder, R. (2011). The lifespan of lexical traces for novel morphologically complex words. The Mental Lexicon, 6, 374-392. doi:10.1075/ml.6.3.02dev.

    Abstract

    This study investigates the lifespans of lexical traces for novel morphologically complex words. In two visual lexical decision experiments, a neologism was either primed by itself or by its stem. The target occurred 40 trials after the prime (Experiments 1 & 2), after a 12 hour delay (Experiment 1), or after a one week delay (Experiment 2). Participants recognized neologisms more quickly if they had seen them before in the experiment. These results show that memory traces for novel morphologically complex words already come into existence after the very first exposure and that they last for at least a week. We did not find evidence for a role of sleep in the formation of memory traces. Interestingly, Base Frequency appeared to play a role in the processing of the neologisms even when they were presented a second time and had their own memory traces.
  • Van Wijk, C., & Kempen, G. (1982). De ontwikkeling van syntactische formuleervaardigheid bij kinderen van 9 tot 16 jaar. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 37(8), 491-509.

    Abstract

    An essential phenomenon in the development towards syntactic maturity after early childhood is the increasing use of so-called sentence-combining transformations. Especially by using subordination, complex sentences are produced. The research reported here is an attempt to arrive at a more adequate characterization and explanation. Our starting point was an analysis of 280 texts written by Dutch-speaking pupils of the two highest grades of the primary school and the four lowest grades of three different types of secondary education. It was examined whether systematic shifts in the use of certain groups of so-called function words could be traced. We concluded that the development of the syntactic formulating ability can be characterized as an increase in connectivity: the use of all kinds of function words which explicitly mark logico-semantic relations between propositions. This development starts by inserting special adverbs and coordinating conjunctions resulting in various types of coordination. In a later stage, the syntactic patterning of the sentence is affected as well (various types of subordination). The increase in sentence complexity is only one aspect of the entire development. An explanation for the increase in connectivity is offered based upon a distinction between narrative and expository language use. The latter, but not the former, is characterized by frequent occurrence of connectives. The development in syntactic formulating ability includes a high level of skill in expository language use. Speed of development is determined by intensity of training, e.g. in scholastic and occupational settings.
  • Van Leeuwen, T. M., Den Ouden, H. E. M., & Hagoort, P. (2011). Effective connectivity determines the nature of subjective experience in grapheme-color synesthesia. Journal of Neuroscience, 31, 9879-9884. doi:10.1523/JNEUROSCI.0569-11.2011.

    Abstract

    Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme–color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: “projector” synesthetes experience color externally colocalized with a presented grapheme, whereas “associators” report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Berkum, J. J. A., Hijne, H., De Jong, T., Van Joolingen, W. R., & Njoo, M. (1991). Aspects of computer simulations in education. Education & Computing, 6(3/4), 231-239.

    Abstract

    Computer simulations in an instructional context can be characterized according to four aspects (themes): simulation models, learning goals, learning processes and learner activity. The present paper provides an outline of these four themes. The main classification criterion for simulation models is quantitative vs. qualitative models. For quantitative models a further subdivision can be made by classifying the independent and dependent variables as continuous or discrete. A second criterion is whether one of the independent variables is time, thus distinguishing dynamic and static models. Qualitative models on the other hand use propositions about non-quantitative properties of a system or they describe quantitative aspects in a qualitative way. Related to the underlying model is the interaction with it. When this interaction has a normative counterpart in the real world we call it a procedure. The second theme of learning with computer simulation concerns learning goals. A learning goal is principally classified along three dimensions, which specify different aspects of the knowledge involved. The first dimension, knowledge category, indicates that a learning goal can address principles, concepts and/or facts (conceptual knowledge) or procedures (performance sequences). The second dimension, knowledge representation, captures the fact that knowledge can be represented in a more declarative (articulate, explicit), or in a more compiled (implicit) format, each one having its own advantages and drawbacks. The third dimension, knowledge scope, involves the learning goal's relation with the simulation domain; knowledge can be specific to a particular domain, or generalizable over classes of domains (generic). A more or less separate type of learning goal refers to knowledge acquisition skills that are pertinent to learning in an exploratory environment. Learning processes constitute the third theme. Learning processes are defined as cognitive actions of the learner. Learning processes can be classified using a multilevel scheme. The first (highest) of these levels gives four main categories: orientation, hypothesis generation, testing and evaluation. Examples of more specific processes are model exploration and output interpretation. The fourth theme of learning with computer simulations is learner activity. Learner activity is defined as the ‘physical’ interaction of the learner with the simulations (as opposed to the mental interaction that was described in the learning processes). Five main categories of learner activity are distinguished: defining experimental settings (variables, parameters etc.), interaction process choices (deciding a next step), collecting data, choice of data presentation and metacontrol over the simulation.
  • Van Berkum, J. J. A., & De Jong, T. (1991). Instructional environments for simulations. Education & Computing, 6(3/4), 305-358.

    Abstract

    The use of computer simulations in education and training can have substantial advantages over other approaches. In comparison with alternatives such as textbooks, lectures, and tutorial courseware, a simulation-based approach offers the opportunity to learn in a relatively realistic problem-solving context, to practise task performance without stress, to systematically explore both realistic and hypothetical situations, to change the time-scale of events, and to interact with simplified versions of the process or system being simulated. However, learners are often unable to cope with the freedom offered by, and the complexity of, a simulation. As a result many of them resort to an unsystematic, unproductive mode of exploration. There is evidence that simulation-based learning can be improved if the learner is supported while working with the simulation. Constructing such an instructional environment around simulations seems to run counter to the freedom the learner is allowed in ‘stand alone’ simulations. The present article explores instructional measures that allow for an optimal freedom for the learner. An extensive discussion of learning goals brings two main types of learning goals to the fore: conceptual knowledge and operational knowledge. A third type of learning goal refers to the knowledge acquisition (exploratory learning) process. Cognitive theory has implications for the design of instructional environments around simulations. Most of these implications are quite general, but they can also be related to the three types of learning goals. For conceptual knowledge the sequence and choice of models and problems is important, as is providing the learner with explanations and minimization of error. For operational knowledge cognitive theory recommends learning to take place in a problem solving context, the explicit tracing of the behaviour of the learner, providing immediate feedback and minimization of working memory load. For knowledge acquisition goals, it is recommended that the tutor takes the role of a model and coach, and that learning takes place together with a companion. A second source of inspiration for designing instructional environments can be found in Instructional Design Theories. Reviewing these shows that interacting with a simulation can be a part of a more comprehensive instructional strategy, in which for example also prerequisite knowledge is taught. Moreover, information present in a simulation can also be represented in a more structural or static way, and learners can be provoked to perform specific learning processes and learner activities by tutor-controlled variations in the simulation, and by tutor-initiated prodding techniques. And finally, instructional design theories showed that complex models and procedures can be taught by starting with central and simple elements of these models and procedures and subsequently presenting more complex models and procedures. Most of the recent simulation-based intelligent tutoring systems involve troubleshooting of complex technical systems. Learners are supposed to acquire knowledge of particular system principles, of troubleshooting procedures, or of both. Commonly encountered instructional features include (a) the sequencing of increasingly complex problems to be solved, (b) the availability of a range of help information on request, (c) the presence of an expert troubleshooting module which can step in to provide criticism on learner performance, hints on the problem nature, or suggestions on how to proceed, (d) the option of having the expert module demonstrate optimal performance afterwards, and (e) the use of different ways of depicting the simulated system. A selection of findings is summarized by placing them under the four themes we think to be characteristic of learning with computer simulations (see de Jong, this volume).
  • Van de Meerendonk, N., Indefrey, P., Chwilla, D. J., & Kolk, H. H. (2011). Monitoring in language perception: Electrophysiological and hemodynamic responses to spelling violations. Neuroimage, 54, 2350-2363. doi:10.1016/j.neuroimage.2010.10.022.

    Abstract

    The monitoring theory of language perception proposes that competing representations that are caused by strong expectancy violations can trigger a conflict which elicits reprocessing of the input to check for possible processing errors. This monitoring process is thought to be reflected by the P600 component in the EEG. The present study further investigated this monitoring process by comparing syntactic and spelling violations in an EEG and an fMRI experiment. To assess the effect of conflict strength, misspellings were embedded in sentences that were weakly or strongly predictive of a critical word. In support of the monitoring theory, syntactic and spelling violations elicited similarly distributed P600 effects. Furthermore, the P600 effect was larger to misspellings in the strongly compared to the weakly predictive sentences. The fMRI results showed that both syntactic and spelling violations increased activation in the left inferior frontal gyrus (lIFG), while only the misspellings activated additional areas. Conflict strength did not affect the hemodynamic response to spelling violations. These results extend the idea that the lIFG is involved in implementing cognitive control in the presence of representational conflicts in general to the processing of errors in language perception.
  • Van de Ven, M., & Gussenhoven, C. (2011). On the timing of the final rise in Dutch falling-rising intonation contours. Journal of Phonetics, 39, 225-236. doi:10.1016/j.wocn.2011.01.006.

    Abstract

    A corpus of Dutch falling-rising intonation contours with early nuclear accent was elicited from nine speakers with a view to establishing the extent to which the low F0 target immediately preceding the final rise was attracted by a post-nuclear stressed syllable (PNS) in either of the last two words or by Second Occurrence Contrastive Focus (SOCF) on either of these words. We found a small effect of foot type, which we interpret as due to a rhythmic 'trochaic enhancement' effect. The results show that neither PNS nor SOCF influences the location of the low F0 target, which appears consistently to be timed with reference to the utterance end. It is speculated that there are two ways in which post-nuclear tones can be timed. The first is by means of a phonological association with a post-nuclear stressed syllable, as in Athenian Greek and Roermond Dutch. The second is by a fixed distance from the utterance end or from the target of an adjacent tone. Accordingly, two phonological mechanisms are defended, association and edge alignment, such that all tones edge-align, but only some associate. Specifically, no evidence was found for a third situation that can be envisaged, in which a post-nuclear tone is gradiently attracted to a post-nuclear stress.

  • Van Gijn, R. (2011). Pronominal affixes, the best of both worlds: The case of Yurakaré. Transactions of the Philological Society, 109(1), 41-58. doi:10.1111/j.1467-968X.2011.01249.x.

    Abstract

    Pronominal affixes in polysynthetic languages have an ambiguous status in the sense that they have characteristics normally associated with free pronouns as well as characteristics associated with agreement markers. This situation arises because pronominal affixes represent intermediate stages in a diachronic development from independent pronouns to agreement markers. Because this diachronic change is not abrupt, pronominal affixes can show different characteristics from language to language. By presenting an in-depth discussion of the pronominal affixes of Yurakaré, an unclassified language from Bolivia, I argue that these so-called intermediate stages as typically attested in polysynthetic languages actually represent economical systems that combine advantages of agreement markers and of free pronouns. In terms of diachronic development, such ‘intermediate’ systems, being functionally well-adapted, appear to be rather stable, and can even be reinforced by subsequent diachronic developments.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance an exploratory investigation was made into the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant-dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions to account for this difference have been given.
  • Van Gijn, R. (2011). Subjects and objects: A semantic account of Yurakaré argument structure. International Journal of American Linguistics, 77, 595-621. doi:10.1086/662158.

    Abstract

    Yurakaré (unclassified, central Bolivia) marks core arguments on the verb by means of pronominal affixes. Subjects are suffixed, objects are prefixed. There are six types of head-marked objects in Yurakaré, each with its own morphosyntactic and semantic properties. Distributional patterns suggest that the six objects can be divided into two larger groups reminiscent of the typologically recognized direct vs. indirect object distinction. This paper looks at the interaction of this complex system of participant marking and verbal semantics. By investigating the participant-marking patterns of nine verb classes (four representing a gradual decrease of patienthood of the P participant, five a gradual decrease of agentivity of the A participant), I come to the conclusion that grammatical roles in Yurakaré can be defined semantically, and case frames are to a high degree determined by verbal semantics.
  • Van Wijk, C., & Kempen, G. (1982). Syntactische formuleervaardigheid en het schrijven van opstellen. Pedagogische Studiën, 59, 126-136.

    Abstract

    Several attempts have been made to measure syntactic formulating ability directly and objectively from spoken or written texts. As a rule, the starting point was the syntactic complexity of the produced utterances. However, this has not led to a plausible, well-defined, and practically useful index. Following a critical discussion of the notion of complexity, this article proposes a new criterion: the connectivity of the utterances, i.e., the explicit marking of logico-semantic relations between propositions. Connectivity can easily be scored on the basis of function words that mark various forms of coordinate and subordinate clause linkage. This new index escapes the criticism that can be leveled at complexity, turns out to discriminate clearly between groups of pupils who differ in age and educational level, and ties in with recent psycholinguistic and sociolinguistic theory. Finally, some educational implications are indicated.
  • Van Leeuwen, E. J. C., Zimmerman, E., & Davila Ross, M. (2011). Responding to inequities: Gorillas try to maintain their competitive advantage during play fights. Biology Letters, 7(1), 39-42. doi:10.1098/rsbl.2010.0482.

    Abstract

    Humans respond to unfair situations in various ways. Experimental research has revealed that non-human species also respond to unequal situations in the form of inequity aversions when they have the disadvantage. The current study focused on play fights in gorillas to explore for the first time, to our knowledge, if/how non-human species respond to inequities in natural social settings. Hitting causes a naturally occurring inequity among individuals and here it was specifically assessed how the hitters and their partners engaged in play chases that followed the hitting. The results of this work showed that the hitters significantly more often moved first to run away immediately after the encounter than their partners. These findings provide evidence that non-human species respond to inequities by trying to maintain their competitive advantages. We conclude that non-human primates, like humans, may show different responses to inequities and that they may modify them depending on if they have the advantage or the disadvantage.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2011). Semantic context effects in the comprehension of reduced pronunciation variants. Memory & Cognition, 39, 1301-1316. doi:10.3758/s13421-011-0103-2.

    Abstract

    Listeners require context to understand the highly reduced words that occur in casual speech. The present study reports four auditory lexical decision experiments in which the role of semantic context in the comprehension of reduced versus unreduced speech was investigated. Experiments 1 and 2 showed semantic priming for combinations of unreduced, but not reduced, primes and low-frequency targets. In Experiment 3, we crossed the reduction of the prime with the reduction of the target. Results showed no semantic priming from reduced primes, regardless of the reduction of the targets. Finally, Experiment 4 showed that reduced and unreduced primes facilitate upcoming low-frequency related words equally if the interstimulus interval is extended. These results suggest that semantically related words need more time to be recognized after reduced primes, but once reduced primes have been fully (semantically) processed, these primes can facilitate the recognition of upcoming words as well as do unreduced primes.
  • Van der Veer, G. C., Bagnara, S., & Kempen, G. (1991). Preface. Acta Psychologica, 78, ix. doi:10.1016/0001-6918(91)90002-H.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods. Advance online publication. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Vandeberg, L., Guadalupe, T., & Zwaan, R. A. (2011). How verbs can activate things: Cross-language activation across word classes. Acta Psychologica, 138, 68-73. doi:10.1016/j.actpsy.2011.05.007.

    Abstract

    The present study explored whether language-nonselective access in bilinguals occurs across word classes in a sentence context. Dutch–English bilinguals were auditorily presented with English (L2) sentences while looking at a visual world. The sentences contained interlingual homophones from distinct lexical categories (e.g., the English verb spoke, which overlaps phonologically with the Dutch noun for ghost, spook). Eye movement recordings showed that depictions of referents of the Dutch (L1) nouns attracted more visual attention than unrelated distractor pictures in sentences containing homophones. This finding shows that native language objects are activated during second language verb processing despite the structural information provided by the sentence context. Research highlights: We show that native language words are activated during second language sentence processing. We tested this in a visual world setting on homophones with a different word class across languages. Fixations show that processing second language verbs activated native language nouns.
  • Verdonschot, R. G., La Heij, W., Paolieri, D., Zhang, Q., & Schiller, N. O. (2011). Homophonic context effects when naming Japanese kanji: Evidence for processing costs. Quarterly Journal of Experimental Psychology, 64(9), 1836-1849. doi:10.1080/17470218.2011.585241.

    Abstract

    The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e.g., Glaser & Düngelhoff, 1984; Roelofs, 2003). However, recently, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hanzi) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.