Publications

  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate affects the perception of duration as a suprasegmental lexical-stress cue. Language and Speech, 54(2), 147-165. doi:10.1177/0023830910397489.

    Abstract

    Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress placement. Word pairs contrasted primary stress on either the first versus the second syllable or the first versus the third syllable. Duration of the initial or the second syllable of the fragments and rate of the preceding context (fast vs. slow) were manipulated. Listeners used speaking rate to decide about the degree of stress on initial syllables whether the syllables' absolute durations were informative about stress (Experiment 1a) or not (Experiment 1b). Rate effects on the second syllable were visible only when the initial syllable was ambiguous in duration with respect to the preceding rate context (Experiment 2). Absolute second syllable durations contributed little to stress perception (Experiment 3). These results suggest that speaking rate is used to disambiguate words and that rate-modulated stress cues are more important on initial than non-initial syllables. Speaking rate affects perception of suprasegmental information.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2011). Speaking rate from proximal and distal contexts is used during word segmentation. Journal of Experimental Psychology: Human Perception and Performance, 37, 978-996. doi:10.1037/a0021923.

    Abstract

    A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' “once (s)pear,” [t] in 'nooit (t)rap,' “never staircase/quick”) were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001): Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations between the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning with a brief review of literacy and aphasia research and the complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which focus on methodological issues and the importance of taking normative values into consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Rekers, Y., Haun, D. B. M., & Tomasello, M. (2011). Children, but not chimpanzees, prefer to collaborate. Current Biology, 21, 1756-1758. doi:10.1016/j.cub.2011.08.066.

    Abstract

    Human societies are built on collaborative activities. Already from early childhood, human children are skillful and proficient collaborators. They recognize when they need help in solving a problem and actively recruit collaborators [1, 2]. The societies of other primates are also to some degree cooperative. Chimpanzees, for example, engage in a variety of cooperative activities such as border patrols, group hunting, and intra- and intergroup coalitionary behavior [3-5]. Recent studies have shown that chimpanzees possess many of the cognitive prerequisites necessary for human-like collaboration. Chimpanzees have been shown to recognize when they need help in solving a problem and to actively recruit good over bad collaborators [6, 7]. However, cognitive abilities might not be all that differs between chimpanzees and humans when it comes to cooperation. Another factor might be the motivation to engage in a cooperative activity. Here, we hypothesized that a key difference between human and chimpanzee collaboration—and so potentially a key mechanism in the evolution of human cooperation—is a simple preference for collaborating (versus acting alone) to obtain food. Our results supported this hypothesis, finding that whereas children strongly prefer to work together with another to obtain food, chimpanzees show no such preference.
  • Reynolds, E., Stagnitti, K., & Kidd, E. (2011). Play, language and social skills of children attending a play-based curriculum school and a traditionally structured classroom curriculum school in low socioeconomic areas. Australasian Journal of Early Childhood, 36(4), 120-130.

    Abstract

    Aim and method: A comparison study of four six-year-old children attending a school with a play-based curriculum and a school with a traditionally structured classroom from low socioeconomic areas was conducted in Victoria, Australia. Children’s play, language and social skills were measured in February and again in August. At baseline assessment there was a combined sample of 31 children (mean age 5.5 years, SD 0.35 years; 13 females and 18 males). At follow-up there was a combined sample of 26 children (mean age 5.9 years, SD 0.35 years; 10 females, 16 males). Results: There was no significant difference between the school groups in play, language, social skills, age and sex at baseline assessment. Compared to norms on a standardised assessment, all the children were beginning school with delayed play ability. At follow-up assessment, children at the play-based curriculum school had made significant gains in all areas assessed (p values ranged from 0.000 to 0.05). Children at the school with the traditional structured classroom had made significant positive gains in use of symbols in play (p < 0.05) and semantic language (p < 0.05). At follow-up, there were significant differences between schools in elaborate play (p < 0.000), semantic language (p < 0.000), narrative language (p < 0.01) and social connection (p < 0.01), with children in the play-based curriculum school having significantly higher scores in play, narrative language and language and lower scores in social disconnection. Implications: Children from low SES areas begin school at risk of failure as skills in play, language and social skills are delayed. The school experience increases children’s skills, with children in the play-based curriculum showing significant improvements in all areas assessed. It is argued that a play-based curriculum meets children’s developmental and learning needs more effectively. More research is needed to replicate these results.
  • Richter, N., Tiddeman, B., & Haun, D. (2016). Social preference in preschoolers: Effects of morphological self-similarity and familiarity. PLoS One, 11(1): e0145443. doi:10.1371/journal.pone.0145443.

    Abstract

    Adults prefer to interact with others that are similar to themselves. Even slight facial self-resemblance can elicit trust towards strangers. Here we investigate if preschoolers at the age of 5 years already use facial self-resemblance when they make social judgments about others. We found that, in the absence of any additional knowledge about prospective peers, children preferred those who look subtly like themselves over complete strangers. Thus, subtle morphological similarities trigger social preferences well before adulthood.
  • Rieffe, C., Oosterveld, P., Meerum Terwogt, M., Mootz, S., Van Leeuwen, E. J. C., & Stockmann, L. (2011). Emotion regulation and internalizing symptoms in children with Autism Spectrum Disorders. Autism, 15(6), 655-670. doi:10.1177/1362361310366571.

    Abstract

    The aim of this study was to examine the unique contribution of two aspects of emotion regulation (awareness and coping) to the development of internalizing problems in 11-year-old high-functioning children with an autism spectrum disorder (HFASD) and a control group, and the moderating effect of group membership on this. The results revealed overlap between the two groups, but also significant differences, suggesting a more fragmented emotion regulation pattern in children with HFASD, especially related to worry and rumination. Moreover, in children with HFASD, symptoms of depression were unrelated to positive mental coping strategies and the conviction that the emotion experience helps in dealing with the problem, suggesting that a positive approach to the problem and its subsequent emotion experience are less effective in the HFASD group.
  • Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Interpersonal synergies. Frontiers in Psychology, 2, 38. doi:10.3389/fpsyg.2011.00038.

    Abstract

    We present the perspective that interpersonal movement coordination results from establishing interpersonal synergies. Interpersonal synergies are higher-order control systems formed by coupling movement system degrees of freedom of two (or more) actors. Characteristic features of synergies identified in studies of intrapersonal coordination – dimensional compression and reciprocal compensation – are revealed in studies of interpersonal coordination that applied the uncontrolled manifold approach and principal component analysis to interpersonal movement tasks. Broader implications of the interpersonal synergy approach for movement science include an expanded notion of mechanism and an emphasis on interaction-dominant dynamics.
  • Roberts, S. G., & Verhoef, T. (2016). Double-blind reviewing at EvoLang 11 reveals gender bias. Journal of Language Evolution, 1(2), 163-167. doi:10.1093/jole/lzw009.

    Abstract

    The impact of introducing double-blind reviewing in the most recent Evolution of Language conference is assessed. The ranking of papers is compared between EvoLang 11 (double-blind review) and EvoLang 9 and 10 (single-blind review). Main effects were found for first author gender by conference. The results mirror some findings in the literature on the effects of double-blind review, suggesting that it helps reduce a bias against female authors.

    Additional information

    SI.pdf
  • Roberts, L., & Felser, C. (2011). Plausibility and recovery from garden paths in L2 sentence processing. Applied Psycholinguistics, 32, 299-331. doi:10.1017/S0142716410000421.

    Abstract

    In this study, the influence of plausibility information on the real-time processing of locally ambiguous (“garden path”) sentences in a nonnative language is investigated. Using self-paced reading, we examined how advanced Greek-speaking learners of English and native speaker controls read sentences containing temporary subject–object ambiguities, with the ambiguous noun phrase being either semantically plausible or implausible as the direct object of the immediately preceding verb. Besides providing evidence for incremental interpretation in second language processing, our results indicate that the learners were more strongly influenced by plausibility information than the native speaker controls in their on-line processing of the experimental items. For the second language learners, an initially plausible direct object interpretation led to increased reanalysis difficulty in “weak” garden-path sentences where the required reanalysis did not interrupt the current thematic processing domain. No such evidence of on-line recovery was observed, in contrast, for “strong” garden-path sentences that required more substantial revisions of the representation built thus far, suggesting that comprehension breakdown was more likely here.
  • Robinson, E. B., St Pourcain, B., Anttila, V., Kosmicki, J. A., Bulik-Sullivan, B., Grove, J., Maller, J., Samocha, K. E., Sanders, S. J., Ripke, S., Martin, J., Hollegaard, M. V., Werge, T., Hougaard, D. M., iPSYCH-SSI-Broad Autism Group, Neale, B. M., Evans, D. M., Skuse, D., Mortensen, P. B., Børglum, A. D., Ronald, A., Smith, G. D., & Daly, M. J. (2016). Genetic risk for autism spectrum disorders and neuropsychiatric variation in the general population. Nature Genetics, 48, 552-555. doi:10.1038/ng.3529.

    Abstract

    Almost all genetic risk factors for autism spectrum disorders (ASDs) can be found in the general population, but the effects of this risk are unclear in people not ascertained for neuropsychiatric symptoms. Using several large ASD consortium and population-based resources (total n > 38,000), we find genome-wide genetic links between ASDs and typical variation in social behavior and adaptive functioning. This finding is evidenced through both LD score correlation and de novo variant analysis, indicating that multiple types of genetic risk for ASDs influence a continuum of behavioral and developmental traits, the severe tail of which can result in diagnosis with an ASD or other neuropsychiatric disorder. A continuum model should inform the design and interpretation of studies of neuropsychiatric disease biology.

    Additional information

    ng.3529-S1.pdf
  • Robotham, L., Sauter, D., Bachoud-Lévi, A.-C., & Trinkler, I. (2011). The impairment of emotion recognition in Huntington’s disease extends to positive emotions. Cortex, 47(7), 880-884. doi:10.1016/j.cortex.2011.02.014.

    Abstract

    Patients with Huntington’s disease are impaired in the recognition of emotional signals. However, the nature and extent of the impairment is controversial: It has variously been argued to be disgust-specific (Sprengelmeyer et al., 1996; 1997), general for negative emotions (Snowden et al., 2008), or a consequence of item difficulty (Milders, Crawford, Lamb, & Simpson, 2003). Yet no study to date has included more than one positive stimulus category in emotion recognition tasks. We present a study of 14 Huntington’s patients and 15 control participants performing a forced-choice task with a range of negative and positive non-verbal emotional vocalizations. Participants were found to be impaired in emotion recognition across the emotion categories, including positive emotions such as amusement and sensual pleasure, and negative emotions, such as anger, disgust, and fear. These data complement previous work by demonstrating that impairments are found in the recognition of positive, as well as negative, emotions in Huntington’s disease. Our results point to a global deficit in the recognition of emotional signals in Huntington’s disease.
  • Rodenas-Cuadrado, P., Pietrafusa, N., Francavilla, T., La Neve, A., Striano, P., & Vernes, S. C. (2016). Characterisation of CASPR2 deficiency disorder - a syndrome involving autism, epilepsy and language impairment. BMC Medical Genetics, 17: 8. doi:10.1186/s12881-016-0272-8.

    Abstract

    Background: Heterozygous mutations in CNTNAP2 have been identified in patients with a range of complex phenotypes including intellectual disability, autism and schizophrenia. However, heterozygous CNTNAP2 mutations are also found in the normal population. Conversely, homozygous mutations are rare in patient populations and have not been found in any unaffected individuals. Case presentation: We describe a consanguineous family carrying a deletion in CNTNAP2 predicted to abolish function of its protein product, CASPR2. Homozygous family members display epilepsy, facial dysmorphisms, severe intellectual disability and impaired language. We compared these patients with previously reported individuals carrying homozygous mutations in CNTNAP2 and identified a highly recognisable phenotype. Conclusions: We propose that CASPR2 loss produces a syndrome involving early-onset refractory epilepsy, intellectual disability, language impairment and autistic features that can be recognized as CASPR2 deficiency disorder. Further screening for homozygous patients meeting these criteria, together with detailed phenotypic and molecular investigations, will be crucial for understanding the contribution of CNTNAP2 to normal and disrupted development.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch-English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A., & Piai, V. (2011). Attention demands of spoken word planning: A review. Frontiers in Psychology, 2, 307. doi:10.3389/fpsyg.2011.00307.

  • Roelofs, A., Piai, V., & Garrido Rodriguez, G. (2011). Attentional inhibition in bilingual naming performance: Evidence from delta-plot analyses. Frontiers in Psychology, 2, 184. doi:10.3389/fpsyg.2011.00184.

    Abstract

    It has been argued that inhibition is a mechanism of attentional control in bilingual language performance. Evidence suggests that effects of inhibition are largest in the tail of a response time (RT) distribution in non-linguistic and monolingual performance domains. We examined this for bilingual performance by conducting delta-plot analyses of naming RTs. Dutch-English bilingual speakers named pictures using English while trying to ignore superimposed neutral Xs or Dutch distractor words that were semantically related, unrelated, or translations. The mean RTs revealed semantic, translation, and lexicality effects. The delta plots leveled off with increasing RT, more so when the mean distractor effect was smaller as compared with larger. This suggests that the influence of inhibition is largest toward the distribution tail, corresponding to what is observed in other performance domains. Moreover, the delta plots suggested that more inhibition was applied by high- than low-proficiency individuals in the unrelated than the other distractor conditions. These results support the view that inhibition is a domain-general mechanism that may be optionally engaged depending on the prevailing circumstances.
  • Roelofs, A., Piai, V., Garrido Rodriguez, G., & Chwilla, D. J. (2016). Electrophysiology of cross-language interference and facilitation in picture naming. Cortex, 76, 1-16. doi:10.1016/j.cortex.2015.12.003.

    Abstract

    Disagreement exists about how bilingual speakers select words, in particular, whether words in another language compete, or competition is restricted to a target language, or no competition occurs. Evidence that competition occurs but is restricted to a target language comes from response time (RT) effects obtained when speakers name pictures in one language while trying to ignore distractor words in another language. Compared to unrelated distractor words, RT is longer when the picture name and distractor are semantically related, but RT is shorter when the distractor is the translation of the name of the picture in the other language. These effects suggest that distractor words from another language do not compete themselves but activate their counterparts in the target language, thereby yielding the semantic interference and translation facilitation effects. Here, we report an event-related brain potential (ERP) study testing the prediction that priming underlies both of these effects. The RTs showed semantic interference and translation facilitation effects. Moreover, the picture-word stimuli yielded an N400 response, whose amplitude was smaller on semantic and translation trials than on unrelated trials, providing evidence that interference and facilitation priming underlie the RT effects. We present the results of computer simulations showing the utility of a within-language competition account of our findings.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Roelofs, A., Piai, V., & Schriefers, H. (2011). Selective attention and distractor frequency in naming performance: Comment on Dhooge and Hartsuiker (2010). Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1032-1038. doi:10.1037/a0023328.

    Abstract

    E. Dhooge and R. J. Hartsuiker (2010) reported experiments showing that picture naming takes longer with low- than high-frequency distractor words, replicating M. Miozzo and A. Caramazza (2003). In addition, they showed that this distractor-frequency effect disappears when distractors are masked or preexposed. These findings were taken to refute models like WEAVER++ (A. Roelofs, 2003) in which words are selected by competition. However, Dhooge and Hartsuiker do not take into account that according to this model, picture-word interference taps not only into word production but also into attentional processes. Here, the authors indicate that WEAVER++ contains an attentional mechanism that accounts for the distractor-frequency effect (A. Roelofs, 2005). Moreover, the authors demonstrate that the model accounts for the influence of masking and preexposure, and does so in a simpler way than the response exclusion through self-monitoring account advanced by Dhooge and Hartsuiker.
  • Rojas-Berscia, L. M. (2016). Lóxoro, traces of a contemporary Peruvian genderlect. Borealis: An International Journal of Hispanic Linguistics, 5, 157-170.

    Abstract

Not long after the 2011 premiere of Loxoro, a short film by Claudia Llosa that presents the problems the transgender community faces in the capital of Peru, a new language variety became visible to Lima society for the first time. Lóxoro [‘lok.so.ɾo] or Húngaro [‘uŋ.ga.ɾo], as its speakers call it, is a language spoken by transsexuals and the gay community of Peru. The first clues about its existence were given by a comedian, Fernando Armas, in the mid-1990s; however, it is said to have appeared no earlier than the 1960s. Following some previous work on gay languages by Baker (2002) and on languages and society (cf. Halliday 1978), the main aim of the present article is to provide a primary sketch of this language in its phonological, morphological, lexical and sociological aspects, based on a small corpus extracted from the film by Llosa and natural dialogues from Peruvian TV-journals, in order to classify this variety within modern sociolinguistic models (cf. Muysken 2010) and argue for its “anti-language” (cf. Halliday 1978) nature.
  • Rossano, F., Rakoczy, H., & Tomasello, M. (2011). Young children’s understanding of violations of property rights. Cognition, 121, 219-227. doi:10.1016/j.cognition.2011.06.007.

    Abstract

    The present work investigated young children’s normative understanding of property rights using a novel methodology. Two- and 3-year-old children participated in situations in which an actor (1) took possession of an object for himself, and (2) attempted to throw it away. What varied was who owned the object: the actor himself, the child subject, or a third party. We found that while both 2- and 3-year-old children protested frequently when their own object was involved, only 3-year-old children protested more when a third party’s object was involved than when the actor was acting on his own object. This suggests that at the latest around 3 years of age young children begin to understand the normative dimensions of property rights.
  • Rossi, S., Jürgenson, I. B., Hanulikova, A., Telkemeyer, S., Wartenburger, I., & Obrig, H. (2011). Implicit processing of phonotactic cues: Evidence from electrophysiological and vascular responses. Journal of Cognitive Neuroscience, 23, 1752-1764. doi:10.1162/jocn.2010.21547.

    Abstract

    Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics. Phonotactics defines possible combinations of phonemes within syllables or words in a given language. The present study aimed at investigating both temporal and topographical aspects of the neuronal correlates of phonotactic processing by simultaneously applying event-related brain potentials (ERPs) and functional near-infrared spectroscopy (fNIRS). Pseudowords, either phonotactically legal or illegal with respect to the participants' native language, were acoustically presented to passively listening adult native German speakers. ERPs showed a larger N400 effect for phonotactically legal compared to illegal pseudowords, suggesting stronger lexical activation mechanisms in phonotactically legal material. fNIRS revealed a left hemispheric network including fronto-temporal regions with greater response to phonotactically legal pseudowords than to illegal pseudowords. This confirms earlier hypotheses on a left hemispheric dominance of phonotactic processing most likely due to the fact that phonotactics is related to phonological processing and represents a segmental feature of language comprehension. These segmental linguistic properties of a stimulus are predominantly processed in the left hemisphere. Thus, our study provides first insights into temporal and topographical characteristics of phonotactic processing mechanisms in a passive listening task. Differential brain responses between known and unknown phonotactic rules thus supply evidence for an implicit use of phonotactic cues to guide lexical activation mechanisms.
  • Rossi, G., & Zinken, J. (2016). Grammar and social agency: The pragmatics of impersonal deontic statements. Language, 92(4), e296-e325. doi:10.1353/lan.2016.0083.

    Abstract

Sentence and construction types generally have more than one pragmatic function. Impersonal deontic declaratives such as ‘it is necessary to X’ assert the existence of an obligation or necessity without tying it to any particular individual. This family of statements can accomplish a range of functions, including getting another person to act, explaining or justifying the speaker’s own behavior as he or she undertakes to do something, or even justifying the speaker’s behavior while simultaneously getting another person to help. How is an impersonal deontic declarative fit for these different functions? And how do people know which function it has in a given context? We address these questions using video recordings of everyday interactions among speakers of Italian and Polish. Our analysis results in two findings. The first is that the pragmatics of impersonal deontic declaratives is systematically shaped by (i) the relative responsibility of participants for the necessary task and (ii) the speaker’s nonverbal conduct at the time of the statement. These two factors influence whether the task in question will be dealt with by another person or by the speaker, often giving the statement the force of a request or, alternatively, of an account of the speaker’s behavior. The second finding is that, although these factors systematically influence their function, impersonal deontic declaratives maintain the potential to generate more complex interactions that go beyond a simple opposition between requests and accounts, where participation in the necessary task may be shared, negotiated, or avoided. This versatility of impersonal deontic declaratives derives from their grammatical makeup: by being deontic and impersonal, they can either mobilize or legitimize an act by different participants in the speech event, while their declarative form does not constrain how they should be responded to. These features make impersonal deontic declaratives a special tool for the management of social agency.
  • Rowbotham, S. J., Holler, J., Wearden, A., & Lloyd, D. M. (2016). I see how you feel: Recipients obtain additional information from speakers’ gestures about pain. Patient Education and Counseling, 99(8), 1333-1342. doi:10.1016/j.pec.2016.03.007.

    Abstract

    Objective

    Despite the need for effective pain communication, pain is difficult to verbalise. Co-speech gestures frequently add information about pain that is not contained in the accompanying speech. We explored whether recipients can obtain additional information from gestures about the pain that is being described.
    Methods

    Participants (n = 135) viewed clips of pain descriptions under one of four conditions: 1) Speech Only; 2) Speech and Gesture; 3) Speech, Gesture and Face; and 4) Speech, Gesture and Face plus Instruction (short presentation explaining the pain information that gestures can depict). Participants provided free-text descriptions of the pain that had been described. Responses were scored for the amount of information obtained from the original clips.
    Findings

    Participants in the Instruction condition obtained the most information, while those in the Speech Only condition obtained the least (all comparisons p<.001).
    Conclusions

    Gestures produced during pain descriptions provide additional information about pain that recipients are able to pick up without detriment to their uptake of spoken information.
    Practice implications

    Healthcare professionals may benefit from instruction in gestures to enhance uptake of information about patients’ pain experiences.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English-speaking children acquire wh-questions is determined by two interlocking linguistic factors: the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study, over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

Van Valin (Journal of Child Language, 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language, 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rowland, C. F., & Noble, C. L. (2011). The role of syntactic structure in children's sentence comprehension: Evidence from the dative. Language Learning and Development, 7(1), 55-75. doi:10.1080/15475441003769411.

    Abstract

Research has demonstrated that young children quickly acquire knowledge of how the structure of their language encodes meaning. However, this work focused on structurally simple transitives. The present studies investigate children's comprehension of the double object dative (e.g., I gave him the box) and the prepositional dative (e.g., I gave the box to him). In Study 1, 3- and 4-year-olds correctly preferred a transfer event reading of prepositional datives with novel verbs (e.g., I'm glorping the rabbit to the duck) but were unable to interpret double object datives (e.g., I'm glorping the duck the rabbit). In Studies 2 and 3, they were able to interpret both dative types when the nouns referring to the theme and recipient were canonically marked (Study 2: I'm glorping the rabbit to Duck) and, to a lesser extent, when they were distinctively but noncanonically marked (Study 3: I'm glorping rabbit to the Duck). Overall, the results suggest that English children have some verb-general knowledge of how dative syntax encodes meaning by 3 years of age, but successful comprehension may require the presence of additional surface cues.
  • Rubio-Fernández, P., Cummins, C., & Tian, Y. (2016). Are single and extended metaphors processed differently? A test of two Relevance-Theoretic accounts. Journal of Pragmatics, 94, 15-28. doi:10.1016/j.pragma.2016.01.005.

    Abstract

    Carston (2010) proposes that metaphors can be processed via two different routes. In line with the standard Relevance-Theoretic account of loose use, single metaphors are interpreted by a local pragmatic process of meaning adjustment, resulting in the construction of an ad hoc concept. In extended metaphorical passages, by contrast, the reader switches to a second processing mode because the various semantic associates in the passage are mutually reinforcing, which makes the literal meaning highly activated relative to possible meaning adjustments. In the second processing mode the literal meaning of the whole passage is metarepresented and entertained as an ‘imaginary world’ and the intended figurative implications are derived later in processing. The results of three experiments comparing the interpretation of the same target expressions across literal, single-metaphorical and extended-metaphorical contexts, using self-paced reading (Experiment 1), eye-tracking during natural reading (Experiment 2) and cued recall (Experiment 3), offered initial support to Carston's distinction between the processing of single and extended metaphors. We end with a comparison between extended metaphors and allegories, and make a call for further theoretical and experimental work to increase our understanding of the similarities and differences between the interpretation and processing of different figurative uses, single and extended.
  • Rubio-Fernández, P. (2016). How redundant are redundant color adjectives? An efficiency-based analysis of color overspecification. Frontiers in Psychology, 7: 153. doi:10.3389/fpsyg.2016.00153.

    Abstract

    Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives (RCAs) are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of RCAs: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce RCAs when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). RCAs are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation.
  • Rubio-Fernández, P., & Grassmann, S. (2016). Metaphors as second labels: Difficult for preschool children? Journal of Psycholinguistic Research, 45, 931-944. doi:10.1007/s10936-015-9386-y.

    Abstract

This study investigates the development of two cognitive abilities that are involved in metaphor comprehension: implicit analogical reasoning and assigning an unconventional label to a familiar entity (as in Romeo’s ‘Juliet is the sun’). We presented 3- and 4-year-old children with literal object-requests in a pretense setting (e.g., ‘Give me the train with the hat’). Both age-groups succeeded in a baseline condition that used building blocks as props (e.g., placed either on the front or the rear of a train engine) and only required spatial analogical reasoning to interpret the referential expression. Both age-groups performed significantly worse in the critical condition, which used familiar objects as props (e.g., small dogs as pretend hats) and required both implicit analogical reasoning and assigning second labels. Only the 4-year-olds succeeded in this condition. These results offer a new perspective on young children’s difficulties with metaphor comprehension in the preschool years.
  • Rubio-Fernández, P., & Geurts, B. (2016). Don’t mention the marble! The role of attentional processes in false-belief tasks. Review of Philosophy and Psychology, 7, 835-850. doi:10.1007/s13164-015-0290-z.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that has been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • De Ruiter, L. E. (2011). Polynomial modeling of child and adult intonation in German spontaneous speech. Language and Speech, 54, 199-223. doi:10.1177/0023830910397495.

    Abstract

    In a data set of 291 spontaneous utterances from German 5-year-olds, 7-year-olds and adults, nuclear pitch contours were labeled manually using the GToBI annotation system. Ten different contour types were identified. The fundamental frequency (F0) of these contours was modeled using third-order orthogonal polynomials, following an approach similar to the one Grabe, Kochanski, and Coleman (2007) used for English. Statistical analyses showed that all but one contour pair differed significantly from each other in at least one of the four coefficients. This demonstrates that polynomial modeling can provide quantitative empirical support for phonological labels in unscripted speech, and for languages other than English. Furthermore, polynomial expressions can be used to derive the alignment of tonal targets relative to the syllable structure, making polynomial modeling more accessible to the phonological research community. Finally, within-contour comparisons of the three age groups showed that for children, the magnitude of the higher coefficients is lower, suggesting that they are not yet able to modulate their pitch as fast as adults.
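The fitting approach described above can be illustrated with a minimal, self-contained sketch: a least-squares cubic (third-order) fit to sampled F0 values, using only the Python standard library. The contour values and function names here are hypothetical illustrations, not data or code from the study; the study itself used orthogonal polynomials, which decorrelate the fitted coefficients, whereas this sketch uses the ordinary power basis for simplicity.

```python
# Hedged sketch: least-squares cubic fit to an F0 contour (illustrative data).

def polyfit3(xs, ys):
    """Fit y = c0 + c1*x + c2*x^2 + c3*x^3 by solving the normal equations."""
    n = 4
    # Normal equations A c = b for the basis 1, x, x^2, x^3.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

# Hypothetical falling-rising contour sampled at 21 normalized time points.
xs = [i / 20 for i in range(21)]
f0 = [220 - 80 * x + 60 * x ** 2 for x in xs]  # Hz, made-up values
c0, c1, c2, c3 = polyfit3(xs, f0)
```

Comparing fitted coefficients across contour types (as in the study's statistical analyses) then amounts to comparing the `c0`–`c3` values per utterance.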
  • Ruiter, M. B., Kolk, H. H. J., Rietveld, T. C. M., Dijkstra, N., & Lotgering, E. (2011). Towards a quantitative measure of verbal effectiveness and efficiency in the Amsterdam-Nijmegen Everyday Language Test (ANELT). Aphasiology, 25, 961-975. doi:10.1080/02687038.2011.569892.

    Abstract

    Background: A well-known test for measuring verbal adequacy (i.e., verbal effectiveness) in mildly impaired aphasic speakers is the Amsterdam-Nijmegen Everyday Language Test (ANELT; Blomert, Koster, & Kean, 1995). Aphasia therapy practitioners score verbal adequacy qualitatively when they administer the ANELT to their aphasic clients in clinical practice. Aims: The current study investigated whether the construct validity of the ANELT could be further improved by substituting the qualitative score by a quantitative one, which takes the number of essential information units into account. The new quantitative measure could have the following advantages: the ability to derive a quantitative score of verbal efficiency, as well as improved sensitivity to detect changes in functional communication over time. Methods & Procedures: The current study systematically compared a new quantitative measure of verbal effectiveness with the current ANELT Comprehensibility scale, which is based on qualitative judgements. A total of 30 speakers of Dutch participated: 20 non-aphasic speakers and 10 aphasic patients with predominantly expressive disturbances. Outcomes & Results: Although our findings need to be replicated in a larger group of aphasic speakers, the main results suggest that the new quantitative measure of verbal effectiveness is more sensitive to detect change in verbal effectiveness over time. What is more, it can be used to derive a measure of verbal efficiency. Conclusions: The fact that both verbal effectiveness and verbal efficiency can be reliably as well as validly measured in the ANELT is of relevance to clinicians. It allows them to obtain a more complete picture of aphasic speakers' functional communication skills.
  • Sadakata, M., & Sekiyama, K. (2011). Enhanced perception of various linguistic features by musicians: A cross-linguistic study. Acta Psychologica, 138, 1-10. doi:10.1016/j.actpsy.2011.03.007.

    Abstract

Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance rather than identification performance. The enhanced perception was observed not only with respect to L2, but also L1. It was most pronounced when tested with Japanese consonant timing. These findings suggest the following: 1) musicians exhibit enhanced early acoustical analysis of speech, 2) musical training does not equally enhance the perception of all acoustic features automatically, and 3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing. Research Highlights: We compared the perception of L1 and L2 speech by musicians and non-musicians. Discrimination and identification experiments examined perception of consonant timing, quality of Japanese consonants, and quality of Dutch vowels. We compared results for Japanese native musicians and non-musicians, as well as Dutch native musicians and non-musicians. Musicians demonstrated enhanced perception for both L1 and L2. The most pronounced effect was found for Japanese consonant timing.
  • Salomo, D., Graf, E., Lieven, E., & Tomasello, M. (2011). The role of perceptual availability and discourse context in young children’s question answering. Journal of Child Language, 38, 918-931. doi:10.1017/S0305000910000395.

    Abstract

    Three- and four-year-old children were asked predicate-focus questions ('What's X doing?') about a scene in which an agent performed an action on a patient. We varied: (i) whether (or not) the preceding discourse context, which established the patient as given information, was available for the questioner; and (ii) whether (or not) the patient was perceptually available to the questioner when she asked the question. The main finding in our study differs from those of previous studies since it suggests that children are sensitive to the perceptual context at an earlier age than they are to previous discourse context if they need to take the questioner's perspective into account. Our finding indicates that, while children are in principle sensitive to both factors, young children rely on perceptual availability when a conflict arises.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • San Roque, L. (2016). 'Where' questions and their responses in Duna (Papua New Guinea). Open Linguistics, 2(1), 85-104. doi:10.1515/opli-2016-0005.

    Abstract

    Despite their central role in question formation, content interrogatives in spontaneous conversation remain relatively under-explored cross-linguistically. This paper outlines the structure of ‘where’ expressions in Duna, a language spoken in Papua New Guinea, and examines where-questions in a small Duna data set in terms of their frequency, function, and the responses they elicit. Questions that ask ‘where?’ have been identified as a useful tool in studying the language of space and place, and, in the Duna case and elsewhere, show high frequency and functional flexibility. Although where-questions formulate place as an information gap, they are not always answered through direct reference to canonical places. While some question types may be especially “socially costly” (Levinson 2012), asking ‘where’ perhaps provides a relatively innocuous way of bringing a particular event or situation into focus.
  • Sánchez-Fernández, M., & Rojas-Berscia, L. M. (2016). Vitalidad lingüística de la lengua paipai de Santa Catarina, Baja California. LIAMES, 16(1), 157-183. doi:10.20396/liames.v16i1.8646171.

    Abstract

In the last few decades little to nothing has been said about the sociolinguistic situation of Yuman languages in Mexico. In order to address this lack of studies, we present a first study on linguistic vitality in Paipai, as it is spoken in Santa Catarina, Baja California, Mexico. Since languages such as Mexican Spanish and Ko’ahl coexist with this language in the same ecology, both are part of the study as well. This first approach proceeds along two axes: on the one hand, providing a theoretical framework that explains the sociolinguistic dynamics in the ecology of the language (Mufwene 2001), and, on the other hand, presenting a quantitative study based on MSF (Maximum Shared Facility) (Terborg & García 2011), which explains the state of linguistic vitality of Paipai, enriched by qualitative information collected in situ.
  • Sánchez-Mora, C., Ribasés, M., Casas, M., Bayés, M., Bosch, R., Fernàndez-Castillo, N., Brunso, L., Jacobsen, K. K., Landaas, E. T., Lundervold, A. J., Gross-Lesch, S., Kreiker, S., Jacob, C. P., Lesch, K.-P., Buitelaar, J. K., Hoogman, M., Kiemeney, L. A., Kooij, J. S., Mick, E., Asherson, P., Faraone, S. V., Franke, B., Reif, A., Johansson, S., Haavik, J., Ramos-Quiroga, J. A., & Cormand, B. (2011). Exploring DRD4 and its interaction with SLC6A3 as possible risk factors for adult ADHD: A meta-analysis in four European populations. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 156, 600-612. doi:10.1002/ajmg.b.31202.

    Abstract

    Attention-deficit hyperactivity disorder (ADHD) is a common behavioral disorder affecting about 4–8% of children. ADHD persists into adulthood in around 65% of cases, either as the full condition or in partial remission with persistence of symptoms. Pharmacological, animal and molecular genetic studies support a role for genes of the dopaminergic system in ADHD due to its essential role in motor control, cognition, emotion, and reward. Based on these data, we analyzed two functional polymorphisms within the DRD4 gene (120 bp duplication in the promoter and 48 bp VNTR in exon 3) in a clinical sample of 1,608 adult ADHD patients and 2,352 controls of Caucasian origin from four European countries that had been recruited in the context of the International Multicentre persistent ADHD CollaboraTion (IMpACT). Single-marker analysis of the two polymorphisms did not reveal association with ADHD. In contrast, multiple-marker meta-analysis showed a nominal association (P  = 0.02) of the L-4R haplotype (dup120bp-48bpVNTR) with adulthood ADHD, especially with the combined clinical subtype. Since we previously described association between adulthood ADHD and the dopamine transporter SLC6A3 9R-6R haplotype (3′UTR VNTR-intron 8 VNTR) in the same dataset, we further tested for gene × gene interaction between DRD4 and SLC6A3. However, we detected no epistatic effects but our results rather suggest additive effects of the DRD4 risk haplotype and the SLC6A3 gene.
  • Sassenhagen, J., & Alday, P. M. (2016). A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests. Brain and Language, 162, 42-45. doi:10.1016/j.bandl.2016.08.001.

    Abstract

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a nonsignificant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.
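The high error rate of this "matching by nonsignificance" procedure can be seen in a small simulation. This is a hedged sketch with made-up parameters (not taken from the paper): two groups differ by a true half-standard-deviation on a nuisance variable, yet with n = 20 per group a t-test usually fails to reach significance, so the sets would be declared "matched" despite the real difference.

```python
# Hedged sketch: how often truly different groups "pass" a matching check
# based on a nonsignificant t-test. All parameter values are illustrative.
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

random.seed(1)
n, true_d, sims = 20, 0.5, 2000  # group size, true effect (in SD units), runs
nonsig = 0
for _ in range(sims):
    group_a = [random.gauss(0.0, 1.0) for _ in range(n)]
    group_b = [random.gauss(true_d, 1.0) for _ in range(n)]
    # |t| below ~2.02 (two-tailed .05 critical value for df around 38) would
    # be read as "no significant difference", i.e. the groups count as matched.
    if abs(welch_t(group_a, group_b)) < 2.02:
        nonsig += 1
print(nonsig / sims)  # a clear majority of truly different pairs "pass"
```

The regression alternative the authors discuss sidesteps this: instead of certifying matchedness, the nuisance variable is entered into the analysis model directly.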
  • Sauppe, S. (2016). Verbal semantics drives early anticipatory eye movements during the comprehension of verb-initial sentences. Frontiers in Psychology, 7: 95. doi:10.3389/fpsyg.2016.00095.

    Abstract

    Studies on anticipatory processes during sentence comprehension often focus on the prediction of postverbal direct objects. In subject-initial languages (the target of most studies so far), however, the position in the sentence, the syntactic function, and the semantic role of arguments are often conflated. For example, in the sentence “The frog will eat the fly” the syntactic object (“fly”) is at the same time also the last word and the patient argument of the verb. It is therefore not apparent which kind of information listeners orient to for predictive processing during sentence comprehension. A visual world eye tracking study on the verb-initial language Tagalog (Austronesian) tested what kind of information listeners use to anticipate upcoming postverbal linguistic input. The grammatical structure of Tagalog makes it possible to test whether listeners' anticipatory gaze behavior is guided by predictions of the linear order of words, by syntactic functions (e.g., subject/object), or by semantic roles (agent/patient). Participants heard sentences of the type “Eat frog fly” or “Eat fly frog” (both meaning “The frog will eat the fly”) while looking at displays containing an agent referent (“frog”), a patient referent (“fly”) and a distractor. The verb carried morphological marking that allowed the order and syntactic function of agent and patient to be inferred. After having heard the verb, listeners fixated on the agent irrespective of its syntactic function or position in the sentence. While hearing the first-mentioned argument, listeners fixated on the corresponding referent in the display and then initiated saccades to the last-mentioned referent before it was encountered. The results indicate that listeners used verbal semantics to identify referents and their semantic roles early; information about word order or syntactic functions did not influence anticipatory gaze behavior directly after the verb was heard. In this verb-initial language, event semantics takes early precedence during the comprehension of sentences, while arguments are anticipated temporally more local to when they are encountered. The current experiment thus helps to better understand anticipation during language processing by employing linguistic structures not available in previously studied subject-initial languages.
  • Sauter, D., Le Guen, O., & Haun, D. B. M. (2011). Categorical perception of emotional expressions does not require lexical categories. Emotion, 11, 1479-1483. doi:10.1037/a0025336.

    Abstract

    Does our perception of others’ emotional signals depend on the language we speak or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.
  • Schaefer, R. S., Farquhar, J., Blokland, Y., Sadakata, M., & Desain, P. (2011). Name that tune: Decoding music from the listening brain. NeuroImage, 56, 843-849. doi:10.1016/j.neuroimage.2010.05.084.

    Abstract

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.

    Additional information

    supp_f.pdf
  • Schapper, A., & San Roque, L. (2011). Demonstratives and non-embedded nominalisations in three Papuan languages of the Timor-Alor-Pantar family. Studies in Language, 35, 380-408. doi:10.1075/sl.35.2.05sch.

    Abstract

    This paper explores the use of demonstratives in non-embedded clausal nominalisations. We present data and analysis from three Papuan languages of the Timor-Alor-Pantar family in south-east Indonesia. In these languages, demonstratives can apply to the clausal as well as to the nominal domain, contributing contrastive semantic content in assertive stance-taking and attention-directing utterances. In the Timor-Alor-Pantar constructions, meanings that are to do with spatial and discourse locations at the participant level apply to spatial, temporal and mental locations at the state or event level.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Scheeringa, R., Fries, P., Petersson, K. M., Oostenveld, R., Grothe, I., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2011). Neuronal dynamics underlying high- and low- frequency EEG oscillations contribute independently to the human BOLD signal. Neuron, 69, 572-583. doi:10.1016/j.neuron.2010.11.044.

    Abstract

    Work on animals indicates that BOLD is preferentially sensitive to local field potentials, and that it correlates most strongly with gamma band neuronal synchronization. Here we investigate how the BOLD signal in humans performing a cognitive task is related to neuronal synchronization across different frequency bands. We simultaneously recorded EEG and BOLD while subjects engaged in a visual attention task known to induce sustained changes in neuronal synchronization across a wide range of frequencies. Trial-by-trial BOLD fluctuations correlated positively with trial-by-trial fluctuations in high-frequency EEG gamma power (60–80 Hz) and negatively with alpha and beta power. Gamma power on the one hand, and alpha and beta power on the other hand, independently contributed to explaining BOLD variance. These results indicate that the BOLD-gamma coupling observed in animals can be extrapolated to humans performing a task and that neuronal dynamics underlying high- and low-frequency synchronization contribute independently to the BOLD signal.

    Additional information

    mmc1.pdf
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2016). L1 and L2 Distance Effects in Learning L3 Dutch. Language Learning, 66, 224-256. doi:10.1111/lang.12150.

    Abstract

    Many people speak more than two languages. How do languages acquired earlier affect the learnability of additional languages? We show that linguistic distances between speakers' first (L1) and second (L2) languages and their third (L3) language play a role. Larger distances from the L1 to the L3 and from the L2 to the L3 correlate with lower degrees of L3 learnability. The evidence comes from L3 Dutch speaking proficiency test scores obtained by candidates who speak a diverse set of L1s and L2s. Lexical and morphological distances between the L1s of the learners and Dutch explained 47.7% of the variation in proficiency scores. Lexical and morphological distances between the L2s of the learners and Dutch explained 32.4% of the variation in proficiency scores in multilingual learners. Cross-linguistic differences require language learners to bridge varying linguistic gaps between their L1 and L2 competences and the target language.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Schimke, S. (2011). Variable verb placement in second-language German and French: Evidence from production and elicited imitation of finite and nonfinite negated sentences. Applied Psycholinguistics, 32, 635-685. doi:10.1017/S0142716411000014.

    Abstract

    This study examines the placement of finite and nonfinite lexical verbs and finite light verbs (LVs) in semispontaneous production and elicited imitation of adult beginning learners of German and French. Theories assuming nonnativelike syntactic representations at early stages of development predict variable placement of lexical verbs and consistent placement of LVs, whereas theories assuming nativelike syntax predict variability for nonfinite verbs and consistent placement of all finite verbs. The results show that beginning learners of German have consistent preferences only for LVs. More advanced learners of German and learners of French produce and imitate finite verbs in more variable positions than nonfinite verbs. This is argued to support a structure-building view of second-language development.
  • Schmidt, J., Herzog, D., Scharenborg, O., & Janse, E. (2016). Do hearing aids improve affect perception? Advances in Experimental Medicine and Biology, 894, 47-55. doi:10.1007/978-3-319-25474-6_6.

    Abstract

    Normal-hearing listeners use acoustic cues in speech to interpret a speaker's emotional state. This study investigates the effect of hearing aids on the perception of the emotion dimensions arousal (aroused/calm) and valence (positive/negative attitude) in older adults with hearing loss. More specifically, we investigate whether wearing a hearing aid improves the correlation between affect ratings and affect-related acoustic parameters. To that end, affect ratings by 23 hearing-aid users were compared for aided and unaided listening. Moreover, these ratings were compared to the ratings by an age-matched group of 22 participants with age-normal hearing. For arousal, hearing-aid users rated utterances as generally more aroused in the aided than in the unaided condition. Intensity differences were the strongest indicator of degree of arousal. Among the hearing-aid users, those with poorer hearing used additional prosodic cues (i.e., tempo and pitch) for their arousal ratings, compared to those with relatively good hearing. For valence, pitch was the only acoustic cue that was associated with valence. Neither listening condition nor hearing loss severity (differences among the hearing-aid users) influenced affect ratings or the use of affect-related acoustic parameters. Compared to the normal-hearing reference group, ratings of hearing-aid users in the aided condition did not generally differ in both emotion dimensions. However, hearing-aid users were more sensitive to intensity differences in their arousal ratings than the normal-hearing participants. We conclude that the use of hearing aids is important for the rehabilitation of affect perception and particularly influences the interpretation of arousal.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2016). Perception of emotion in conversational speech by younger and older listeners. Frontiers in Psychology, 7: 781. doi:10.3389/fpsyg.2016.00781.

    Abstract

    This study investigated whether age and/or differences in hearing sensitivity influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech. To that end, this study specifically focused on the relationship between participants' ratings of short affective utterances and the utterances' acoustic parameters (pitch, intensity, and articulation rate) known to be associated with the emotion dimensions arousal and valence. Stimuli consisted of short utterances taken from a corpus of conversational speech. In two rating tasks, younger and older adults either rated arousal or valence using a 5-point scale. Mean intensity was found to be the main cue participants used in the arousal task (i.e., higher mean intensity cueing higher levels of arousal) while mean F0 was the main cue in the valence task (i.e., higher mean F0 being interpreted as more negative). Even though there were no overall age group differences in arousal or valence ratings, compared to younger adults, older adults responded less strongly to mean intensity differences cueing arousal and responded more strongly to differences in mean F0 cueing valence. Individual hearing sensitivity among the older adults did not modify the use of mean intensity as an arousal cue. However, individual hearing sensitivity generally affected valence ratings and modified the use of mean F0. We conclude that age differences in the interpretation of mean F0 as a cue for valence are likely due to age-related hearing loss, whereas age differences in rating arousal do not seem to be driven by hearing sensitivity differences between age groups (as measured by pure-tone audiometry).
  • Schoffelen, J.-M., & Gross, J. (2011). Improving the interpretability of all-to-all pairwise source connectivity analysis in MEG with nonhomogeneous smoothing. Human brain mapping, 32, 426-437. doi:10.1002/hbm.21031.

    Abstract

    Studying the interaction between brain regions is important to increase our understanding of brain function. Magnetoencephalography (MEG) is well suited to investigate brain connectivity, because it provides measurements of activity of the whole brain at very high temporal resolution. Typically, brain activity is reconstructed from the sensor recordings with an inverse method such as a beamformer, and subsequently a connectivity metric is estimated between predefined reference regions-of-interest (ROIs) and the rest of the source space. Unfortunately, this approach relies on a robust estimate of the relevant reference regions and on a robust estimate of the activity in those reference regions, and is not generally applicable to a wide variety of cognitive paradigms. Here, we investigate the possibility to perform all-to-all pairwise connectivity analysis, thus removing the need to define ROIs. Particularly, we evaluate the effect of nonhomogeneous spatial smoothing of differential connectivity maps. This approach is inspired by the fact that the spatial resolution of source reconstructions is typically spatially nonhomogeneous. We use this property to reduce the spatial noise in the cerebro-cerebral connectivity map, thus improving interpretability. Using extensive data simulations we show a superior detection rate and a substantial reduction in the number of spurious connections. We conclude that nonhomogeneous spatial smoothing of cerebro-cerebral connectivity maps could be an important improvement of the existing analysis tools to study neuronal interactions noninvasively.
  • Schoffelen, J.-M., Poort, J., Oostenveld, R., & Fries, P. (2011). Selective movement preparation is subserved by selective increases in corticomuscular gamma-band coherence. Journal of Neuroscience, 31, 6750-6758. doi:10.1523/JNEUROSCI.4882-10.2011.

    Abstract

    Local groups of neurons engaged in a cognitive task often exhibit rhythmically synchronized activity in the gamma band, a phenomenon that likely enhances their impact on downstream areas. The efficacy of neuronal interactions may be enhanced further by interareal synchronization of these local rhythms, establishing mutually well timed fluctuations in neuronal excitability. This notion suggests that long-range synchronization is enhanced selectively for connections that are behaviorally relevant. We tested this prediction in the human motor system, assessing activity from bilateral motor cortices with magnetoencephalography and corresponding spinal activity through electromyography of bilateral hand muscles. A bimanual isometric wrist extension task engaged the two motor cortices simultaneously into interactions and coherence with their respective corresponding contralateral hand muscles. One of the hands was cued before each trial as the response hand and had to be extended further to report an unpredictable visual go cue. We found that, during the isometric hold phase, corticomuscular coherence was enhanced, spatially selective for the corticospinal connection that was effectuating the subsequent motor response. This effect was spectrally selective in the low gamma-frequency band (40–47 Hz) and was observed in the absence of changes in motor output or changes in local cortical gamma-band synchronization. These findings indicate that, in the anatomical connections between the cortex and the spinal cord, gamma-band synchronization is a mechanism that may facilitate behaviorally relevant interactions between these distant neuronal groups.
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Does syntactic alignment effectively influence how speakers are perceived by their conversation partner? PLoS One, 11(4): e0153521. doi:10.1371/journal.pone.0153521.

    Abstract

    The way we talk can influence how we are perceived by others. Whereas previous studies have started to explore the influence of social goals on syntactic alignment, in the current study, we additionally investigated whether syntactic alignment effectively influences conversation partners’ perception of the speaker. To this end, we developed a novel paradigm in which we can measure the effect of social goals on the strength of syntactic alignment for one participant (primed participant), while simultaneously obtaining usable social opinions about them from their conversation partner (the evaluator). In Study 1, participants’ desire to be rated favorably by their partner was manipulated by assigning pairs to a Control (i.e., primed participants did not know they were being evaluated) or Evaluation context (i.e., primed participants knew they were being evaluated). Surprisingly, results showed no significant difference in the strength with which primed participants aligned their syntactic choices with their partners’ choices. In a follow-up study, we used a Directed Evaluation context (i.e., primed participants knew they were being evaluated and were explicitly instructed to make a positive impression). However, again, there was no evidence supporting the hypothesis that participants’ desire to impress their partner influences syntactic alignment. With respect to the influence of syntactic alignment on perceived likeability by the evaluator, a negative relationship was reported in Study 1: the more primed participants aligned their syntactic choices with their partner, the more that partner decreased their likeability rating after the experiment. However, this effect was not replicated in the Directed Evaluation context of Study 2. In other words, our results do not support the conclusion that speakers’ desire to be liked affects how much they align their syntactic choices with their partner, nor is there convincing evidence that there is a reliable relationship between syntactic alignment and perceived likeability.

    Additional information

    Data availability
  • Schoot, L., Hagoort, P., & Segaert, K. (2016). What can we learn from a two-brain approach to verbal interaction? Neuroscience and Biobehavioral Reviews, 68, 454-459. doi:10.1016/j.neubiorev.2016.06.009.

    Abstract

    Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding.
  • Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2011). Acoustic reduction in conversational Dutch: A quantitative analysis based on automatically generated segmental transcriptions [Letter to the editor]. Journal of Phonetics, 39(1), 96-109. doi:10.1016/j.wocn.2010.11.006.

    Abstract

    In spontaneous, conversational speech, words are often reduced compared to their citation forms, such that a word like yesterday may sound like [ˈjɛʃei]. The present chapter investigates such acoustic reduction. The study of reduction needs large corpora that are transcribed phonetically. The first part of this chapter describes an automatic transcription procedure used to obtain such a large phonetically transcribed corpus of Dutch spontaneous dialogues, which is subsequently used for the investigation of acoustic reduction. First, the orthographic transcriptions were adapted for automatic processing. Next, the phonetic transcription of the corpus was created by means of a forced alignment using a lexicon with multiple pronunciation variants per word. These variants were generated by applying phonological and reduction rules to the canonical phonetic transcriptions of the words. The second part of this chapter reports the results of a quantitative analysis of reduction in the corpus on the basis of the generated transcriptions and gives an inventory of segmental reductions in standard Dutch. Overall, we found that reduction is more pervasive in spontaneous Dutch than previously documented.
  • Segaert, K., Menenti, L., Weber, K., & Hagoort, P. (2011). A paradox of syntactic priming: Why response tendencies show priming for passives, and response latencies show priming for actives. PLoS One, 6(10), e24209. doi:10.1371/journal.pone.0024209.

    Abstract

    Speakers tend to repeat syntactic structures across sentences, a phenomenon called syntactic priming. Although it has been suggested that repeating syntactic structures should result in speeded responses, previous research has focused on effects in response tendencies. We investigated syntactic priming effects simultaneously in response tendencies and response latencies for active and passive transitive sentences in a picture description task. In Experiment 1, there were priming effects in response tendencies for passives and in response latencies for actives. However, when participants' pre-existing preference for actives was altered in Experiment 2, syntactic priming occurred for both actives and passives in response tendencies as well as in response latencies. This is the first investigation of the effects of structure frequency on both response tendencies and latencies in syntactic priming. We discuss the implications of these data for current theories of syntactic processing.

    Additional information

    Segaert_2011_Supporting_Info.doc
  • Segaert, K., Wheeldon, L., & Hagoort, P. (2016). Unifying structural priming effects on syntactic choices and timing of sentence generation. Journal of Memory and Language, 91, 59-80. doi:10.1016/j.jml.2016.03.011.

    Abstract

    We investigated whether structural priming of production latencies is sensitive to the same factors known to influence persistence of structural choices: structure preference, cumulativity, and verb repetition. In two experiments, we found structural persistence only for passives (inverse preference effect), while priming effects on latencies were stronger for actives (positive preference effect). We found structural persistence for passives to be influenced by immediate primes and by long-lasting cumulativity (all preceding primes) (Experiment 1), and to be boosted by verb repetition (Experiment 2). In latencies, we found that effects for actives were sensitive to long-lasting cumulativity (Experiment 1). In Experiment 2, we found priming for actives overall in latencies, while for passives the priming effects emerged as cumulative exposure increased, but only when also aided by verb repetition. These findings are consistent with the Two-stage Competition model, an integrated model of structural priming effects for sentence choice and latency.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña. Faits de Langues, 21, 121-132.
  • Sekine, K. (2011). The role of gesture in the language production of preschool children. Gesture, 11(2), 148-173. doi:10.1075/gest.11.2.03sek.

    Abstract

    The present study investigates the functions of gestures in preschoolers’ descriptions of activities. Specifically, utilizing McNeill’s growth point theory (1992), I examine how gestures contribute to the creation of contrast from the immediate context in the spoken discourse of children. When preschool children describe an activity consisting of multiple actions, like playing on a slide, they often begin with the central action (e.g., sliding-down) instead of with the beginning of the activity sequence (e.g., climbing-up). This study indicates that, in descriptions of activities, gestures may be among the cues the speaker uses for forming a next idea or for repairing the temporal order of the activities described. Gestures may function for the speaker as visual feedback and contribute to the process of utterance formation and provide an index for assessing language development.
  • Selten, M., Meyer, F., Ba, W., Valles, A., Maas, D., Negwer, M., Eijsink, V. D., van Vugt, R. W. M., van Hulten, J. A., van Bakel, N. H. M., Roosen, J., van der Linden, R., Schubert, D., Verheij, M. M. M., Kasri, N. N., & Martens, G. J. M. (2016). Increased GABAB receptor signaling in a rat model for schizophrenia. Scientific Reports, 6: 34240. doi:10.1038/srep34240.

    Abstract

    Schizophrenia is a complex disorder that affects cognitive function and has been linked, both in patients and animal models, to dysfunction of the GABAergic system. However, the pathophysiological consequences of this dysfunction are not well understood. Here, we examined the GABAergic system in an animal model displaying schizophrenia-relevant features, the apomorphine-susceptible (APO-SUS) rat, and its phenotypic counterpart, the apomorphine-unsusceptible (APO-UNSUS) rat, at postnatal day 20-22. We found changes in the expression of the GABA-synthesizing enzyme GAD67 specifically in the prelimbic, but not the infralimbic, region of the medial prefrontal cortex (mPFC), indicative of reduced inhibitory function in this region in APO-SUS rats. While we did not observe changes in basal synaptic transmission onto layer II/III pyramidal cells in the mPFC of APO-SUS compared to APO-UNSUS rats, we report reduced paired-pulse ratios at longer inter-stimulus intervals. The GABA(B) receptor antagonist CGP 55845 abolished this reduction, indicating that the decreased paired-pulse ratio was caused by increased GABA(B) signaling. Consistently, we find an increased expression of the GABA(B1) receptor subunit in APO-SUS rats. Our data provide physiological evidence for increased presynaptic GABA(B) signaling in the mPFC of APO-SUS rats, further supporting an important role for the GABAergic system in the pathophysiology of schizophrenia.
  • Senft, G. (1998). Body and mind in the Trobriand Islands. Ethos, 26, 73-104. doi:10.1525/eth.1998.26.1.73.

    Abstract

    This article discusses how the Trobriand Islanders speak about body and mind. It addresses the following questions: do the linguistic data fit into theories about lexical universals of body-part terminology? Can we make inferences about the Trobrianders' conceptualization of psychological and physical states on the basis of these data? If a Trobriand Islander sees these idioms as external manifestations of inner states, then can we interpret them as a kind of ethnopsychological theory about the body and its role for emotions, knowledge, thought, memory, and so on? Can these idioms be understood as representations of Trobriand ethnopsychological theory?
  • Senft, G. (1998). [Review of the book Anthropological linguistics: An introduction by William A. Foley]. Linguistics, 36, 995-1001.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Senft, G. (2011). Talking about color and taste on the Trobriand Islands: A diachronic study. The Senses & Society, 6(1), 48-56. doi:10.2752/174589311X12893982233713.

    Abstract

    How stable is the lexicon for perceptual experiences? This article presents results on how the Trobriand Islanders of Papua New Guinea talk about color and taste and whether this has changed over the years. Comparing the results of research on color terms conducted in 1983 with data collected in 2008 revealed that many English color terms have been integrated into the Kilivila lexicon. Members of the younger generation with school education have been the agents of this language change. However, today not all English color terms are produced correctly according to English lexical semantics. The traditional Kilivila color terms bwabwau ‘black’, pupwakau ‘white’, and bweyani ‘red’ are not affected by this change, probably because of the cultural importance of the art of coloring canoes, big yams houses, and bodies. Comparing the 1983 data on taste vocabulary with the results of my 2008 research revealed no substantial change. The conservatism of the Trobriand Islanders' taste vocabulary may be related to the conservatism of their palate. Moreover, they are more interested in displaying and exchanging food than in savoring it. Although English color terms are integrated into the lexicon, Kilivila provides evidence that traditional terms used for talking about color and terms used to refer to tastes have remained stable over time.
  • Seuren, P. A. M. (1971). Chomsky, man en werk. De Gids, 134, 298-308.
  • Seuren, P. A. M. (1973). [Review of the book A comprehensive etymological dictionary of the English language by Ernst Klein]. Neophilologus, 57(4), 423-426. doi:10.1007/BF01515518.
  • Seuren, P. A. M. (1998). [Review of the book Adverbial subordination; A typology and history of adverbial subordinators based on European languages by Bernd Kortmann]. Cognitive Linguistics, 9(3), 317-319. doi:10.1515/cogl.1998.9.3.315.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (1971). [Review of the book Introduction à la grammaire générative by Nicolas Ruwet]. Linguistics, 10(78), 111-120. doi:10.1515/ling.1972.10.78.72.
  • Seuren, P. A. M. (1971). [Review of the book La linguistique synchronique by Andre Martinet]. Linguistics, 10(78), 109-111. doi:10.1515/ling.1972.10.78.72.
  • Seuren, P. A. M. (1973). [Review of the book Philosophy of language by Robert J. Clack and Bertrand Russell]. Foundations of Language, 9(3), 440-441.
  • Seuren, P. A. M. (1973). [Review of the book Semantics. An interdisciplinary reader in philosophy, linguistics and psychology ed. by Danny D. Steinberg and Leon A. Jakobovits]. Neophilologus, 57(2), 198-213. doi:10.1007/BF01514332.
  • Seuren, P. A. M. (1971). [Review of the book Syntaxis by A. Kraak and W. Klooster]. Foundations of Language, 7(3), 441-445.
  • Seuren, P. A. M. (1998). [Review of the book The Dutch pendulum: Linguistics in the Netherlands 1740-1900 by Jan Noordegraaf]. Bulletin of the Henry Sweet Society, 31, 46-50.
  • Seuren, P. A. M. (2011). How I remember Evert Beth [In memoriam]. Synthese, 179(2), 207-210. doi:10.1007/s11229-010-9777-4.
  • Seuren, P. A. M. (1979). Meer over minder dan hoeft. De Nieuwe Taalgids, 72(3), 236-239.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1978). Graadadjektieven en oriëntatie. Gramma, 2(1), 1-29.
  • Seuren, P. A. M. (1998). Obituary. Herman Christiaan Wekker 1943–1997. Journal of Pidgin and Creole Languages, 13(1), 159-162.
  • Seuren, P. A. M. (2016). Saussure and his intellectual environment. History of European Ideas, 42(6), 819-847. doi:10.1080/01916599.2016.1154398.

    Abstract

    The present study paints the intellectual environment in which Ferdinand de Saussure developed his ideas about language and linguistics during the fin de siècle. It sketches his dissatisfaction with that environment to the extent that it touched on linguistics, and shows the new course he was trying to steer on the basis of ideas that seemed to open new and exciting perspectives, even though they were still vaguely defined. As Saussure himself was extremely reticent about his sources and intellectual pedigree, his stance in the lively European cultural context in which he lived can only be established through textual critique and conjecture. On this basis, it is concluded that Saussure, though relatively uninformed about its historical roots, essentially aimed at integrating the rationalist tradition current in the sciences in his day into a new, ‘scientific’ general theory of language. In this, he was heavily indebted to a few predecessors, such as the French philosopher-psychologist Victor Egger, and particularly to the French psychologist, historian and philosopher Hippolyte Taine, who was a major cultural influence in nineteenth-century France, though now largely forgotten. The present study thus supports Hans Aarsleff's analysis, where, for the first time, Taine's influence is emphasised, and rejects John Joseph's contention that Taine had no influence and that, instead, Saussure was influenced mainly by the romanticist Adolphe Pictet. Saussure abhorred Pictet's method of etymologising, which predated the Young Grammarian school, central to Saussure's linguistic education. The issue has implications for the positioning of Saussure in the history of linguistics. Is he part of the non-analytical, romanticist and experience-based European strand of thought that is found in art and postmodernist philosophy and is sometimes called structuralism, or is he a representative of the short-lived European branch of specifically linguistic structuralism, which was rationalist in outlook, more science-oriented and more formalist, but lost out to American structuralism? The latter seems to be the case, though phenomenology, postmodernism and art have lately claimed Saussure as an icon.
  • Seuren, P. A. M. (1973). Zero-output rules. Foundations of Language, 10(2), 317-328.
  • Shao, Z., & Stiegert, J. (2016). Predictors of photo naming: Dutch norms for 327 photos. Behavior Research Methods, 48(2), 577-584. doi:10.3758/s13428-015-0613-0.

    Abstract

    The present study reports naming latencies and norms for 327 photos of objects in Dutch. We provide norms for the following psycholinguistic variables: age of acquisition, familiarity, imageability, image agreement, objective and subjective visual complexity, word frequency, word length in syllables and in letters, and name agreement. Furthermore, multiple regression analyses reveal that significant predictors of photo naming latencies are name agreement, word frequency, imageability, and image agreement. Naming latencies, norms and stimuli are provided as Supplemental Materials.
  • Shayan, S., Ozturk, O., & Sicoli, M. A. (2011). The thickness of pitch: Crossmodal metaphors in Farsi, Turkish and Zapotec. The Senses & Society, 6(1), 96-105. doi:10.2752/174589311X12893982233911.

    Abstract

    Speakers use vocabulary for spatial verticality and size to describe pitch. A high–low contrast is common to many languages, but others show contrasts like thick–thin and big–small. We consider uses of thick for low pitch and thin for high pitch in three languages: Farsi, Turkish, and Zapotec. We ask how metaphors for pitch structure the sound space. In a language like English, high applies to both high-pitched as well as high-amplitude (loud) sounds; low applies to low-pitched as well as low-amplitude (quiet) sounds. Farsi, Turkish, and Zapotec organize sound in a different way. Thin applies to high pitch and low amplitude, and thick to low pitch and high amplitude. We claim that these metaphors have their sources in life experiences. Musical instruments show co-occurrences of higher pitch with thinner, smaller objects and lower pitch with thicker, larger objects. On the other hand, bodily experience can ground the high–low metaphor. A raised larynx produces higher pitch and a lowered larynx lower pitch. Low-pitched sounds resonate the chest, a lower place than high-pitched sounds. While both patterns are available from life experience, linguistic experience privileges one over the other, which results in differential structuring of the multiple dimensions of sound.
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2016). Using Brain Potentials to Functionally Localise Stroop-Like Effects in Colour and Picture Naming: Perceptual Encoding versus Word Planning. PLoS One, 11(9): e0161052. doi:10.1371/journal.pone.0161052.

    Abstract

    The colour-word Stroop task and the picture-word interference task (PWI) have been used extensively to study the functional processes underlying spoken word production. One of the consistent behavioural effects in both tasks is the Stroop-like effect: The reaction time (RT) is longer on incongruent trials than on congruent trials. The effect in the Stroop task is usually linked to word planning, whereas the effect in the PWI task is associated with either word planning or perceptual encoding. To adjudicate between the word planning and perceptual encoding accounts of the effect in PWI, we conducted an EEG experiment consisting of three tasks: a standard colour-word Stroop task (three colours), a standard PWI task (39 pictures), and a Stroop-like version of the PWI task (three pictures). Participants overtly named the colours and pictures while their EEG was recorded. A Stroop-like effect in RTs was observed in all three tasks. ERPs at centro-parietal sensors started to deflect negatively for incongruent relative to congruent stimuli around 350 ms after stimulus onset for the Stroop, Stroop-like PWI, and the Standard PWI tasks: an N400 effect. No early differences were found in the PWI tasks. The onset of the Stroop-like effect at about 350 ms in all three tasks links the effect to word planning rather than perceptual encoding, which has been estimated in the literature to be finished around 200–250 ms after stimulus onset. We conclude that the Stroop-like effect arises during word planning in both Stroop and PWI.
  • Sikora, K., Roelofs, A., & Hermans, D. (2016). Electrophysiology of executive control in spoken noun-phrase production: Dynamics of updating, inhibiting, and shifting. Neuropsychologia, 84, 44-53. doi:10.1016/j.neuropsychologia.2016.01.037.

    Abstract

    Previous studies have provided evidence that updating, inhibiting, and shifting abilities underlying executive control determine response time (RT) in language production. However, little is known about their electrophysiological basis and dynamics. In the present electroencephalography study, we assessed noun-phrase production using picture description and a picture-word interference paradigm. We measured picture description RTs to assess length, distractor, and switch effects, which have been related to the updating, inhibiting, and shifting abilities. In addition, we measured event-related brain potentials (ERPs). Previous research has suggested that inhibiting and shifting are associated with anterior and posterior N200 subcomponents, respectively, and updating with the P300. We obtained length, distractor, and switch effects in the RTs, and an interaction between length and switch. There was a widely distributed switch effect in the N200, an interaction of length and midline site in the N200, and a length effect in the P300, whereas distractor did not yield any ERP modulation. Moreover, length and switch interacted in the posterior N200. We argue that these results provide electrophysiological evidence that inhibiting and shifting of task set occur before updating in phrase planning.
  • Sikora, K., Roelofs, A., Hermans, D., & Knoors, H. (2016). Executive control in spoken noun-phrase production: Contributions of updating, inhibiting, and shifting. Quarterly Journal of Experimental Psychology, 69(9), 1719-1740. doi:10.1080/17470218.2015.1093007.

    Abstract

    The present study examined how the updating, inhibiting, and shifting abilities underlying executive control influence spoken noun-phrase production. Previous studies provided evidence that updating and inhibiting, but not shifting, influence picture-naming response time (RT). However, little is known about the role of executive control in more complex forms of language production like generating phrases. We assessed noun-phrase production using picture description and a picture–word interference procedure. We measured picture description RT to assess length, distractor, and switch effects, which were assumed to reflect, respectively, the updating, inhibiting, and shifting abilities of adult participants. Moreover, for each participant we obtained scores on executive control tasks that measured verbal and nonverbal updating, nonverbal inhibiting, and nonverbal shifting. We found that both verbal and nonverbal updating scores correlated with the overall mean picture description RTs. Furthermore, the length effect in the RTs correlated with verbal but not nonverbal updating scores, while the distractor effect correlated with inhibiting scores. We did not find a correlation between the switch effect in the mean RTs and the shifting scores. However, the shifting scores correlated with the switch effect in the normal part of the underlying RT distribution. These results suggest that updating, inhibiting, and shifting each influence the speed of phrase production, thereby demonstrating a contribution of all three executive control abilities to language production.
  • Silva, S., Reis, A., Casaca, L., Petersson, K. M., & Faísca, L. (2016). When the eyes no longer lead: Familiarity and length effects on eye-voice span. Frontiers in Psychology, 7: 1720. doi:10.3389/fpsyg.2016.01720.

    Abstract

    During oral reading, the eyes tend to be ahead of the voice (eye-voice span, EVS). It has been hypothesized that the extent to which this happens depends on the automaticity of reading processes, namely on the speed of print-to-sound conversion. We tested whether EVS is affected by another automaticity component – immunity from interference. To that end, we manipulated word familiarity (high-frequency, low-frequency, and pseudowords, PW) and word length as proxies of immunity from interference, and we used linear mixed effects models to measure the effects of both variables on the time interval at which readers do parallel processing by gazing at word N + 1 while not having articulated word N yet (offset EVS). Parallel processing was enhanced by automaticity, as shown by familiarity × length interactions on offset EVS, and it was impeded by lack of automaticity, as shown by the transformation of offset EVS into voice-eye span (voice ahead of the offset of the eyes) in PWs. The relation between parallel processing and automaticity was strengthened by the fact that offset EVS predicted reading velocity. Our findings contribute to understanding how the offset EVS, an index that is obtained in oral reading, may tap into different components of automaticity that underlie reading ability, oral or silent. In addition, we compared the duration of the offset EVS with the average reference duration of stages in word production, and we saw that the offset EVS may accommodate for more than the articulatory programming stage of word N.
  • Silva, S., Faísca, L., Araújo, S., Casaca, L., Carvalho, L., Petersson, K. M., & Reis, A. (2016). Too little or too much? Parafoveal preview benefits and parafoveal load costs in dyslexic adults. Annals of Dyslexia, 66(2), 187-201. doi:10.1007/s11881-015-0113-z.

    Abstract

    Two different forms of parafoveal dysfunction have been hypothesized as core deficits of dyslexic individuals: reduced parafoveal preview benefits (“too little parafovea”) and increased costs of parafoveal load (“too much parafovea”). We tested both hypotheses in a single eye-tracking experiment using a modified serial rapid automatized naming (RAN) task. Comparisons between dyslexic and non-dyslexic adults showed reduced parafoveal preview benefits in dyslexics, without increased costs of parafoveal load. Reduced parafoveal preview benefits were observed in a naming task, but not in a silent letter-finding task, indicating that the parafoveal dysfunction may be consequent to the overload with extracting phonological information from orthographic input. Our results suggest that dyslexics’ parafoveal dysfunction is not based on strict visuo-attentional factors, but nevertheless they stress the importance of extra-phonological processing. Furthermore, evidence of reduced parafoveal preview benefits in dyslexia may help understand why serial RAN is an important reading predictor in adulthood.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Constraints on the processes responsible for the extrinsic normalization of vowels. Attention, Perception & Psychophysics, 73, 1195-1215. doi:10.3758/s13414-011-0096-8.

    Abstract

    Listeners tune in to talkers’ vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the Long Term Average Spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. Previous findings were replicated with first formant (F1) manipulations of speech. Targets on a [pIt]-[pEt] (low-high F1) continuum were labeled as [pIt] more after high-F1 than after low-F1 precursors. Spectrally-rotated nonspeech versions of these materials produced similar normalization. None occurred, however, with nonspeech stimuli that were less speech-like, even though precursor-target LTAS relations were equivalent to those used earlier. Additional experiments investigated the roles of pitch movement, amplitude variation, formant location, and the stimuli's perceived similarity to speech. It appears that normalization is not restricted to speech, but that the nature of the preceding sounds does matter. Extrinsic normalization of vowels is due at least in part to an auditory process which may require familiarity with the spectro-temporal characteristics of speech.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.

    Abstract

    This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪ papu] or [ɛ papu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛ papu] than from [ɪɛpapu] to [ɪ papu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners’ perception of vowels differs depending on the speaker‘s vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners’ abilities to normalize for speaker-vocal-tract properties are for an important part the result of a process that influences representations of speech sounds early in the speech processing stream.