Publications

  • McQueen, J. M., Eisner, F., & Norris, D. (2016). When brain regions talk to each other during speech processing, what are they talking about? Commentary on Gow and Olson (2015). Language, Cognition and Neuroscience, 31(7), 860-863. doi:10.1080/23273798.2016.1154975.

    Abstract

    This commentary on Gow and Olson [2015. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data. Language, Cognition and Neuroscience. doi:10.1080/23273798.2015.1029498] questions in three ways their conclusion that speech perception is based on interactive processing. First, it is not clear that the data presented by Gow and Olson reflect normal speech recognition. Second, Gow and Olson's conclusion depends on still-debated assumptions about the functions performed by specific brain regions. Third, the results are compatible with feedforward models of speech perception and appear inconsistent with models in which there are online interactions about phonological content. We suggest that progress in the neuroscience of speech perception requires the generation of testable hypotheses about the function(s) performed by inter-regional connections.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016). Same, different, or closely related: What is the relationship between language production and comprehension? Journal of Memory and Language, 89, 1-7. doi:10.1016/j.jml.2016.03.002.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.
  • Meyer, A. S., & Damian, M. F. (2007). Activation of distractor names in the picture-picture interference paradigm. Memory & Cognition, 35, 494-503.

    Abstract

    In four experiments, participants named target pictures that were accompanied by distractor pictures with phonologically related or unrelated names. Across experiments, the type of phonological relationship between the targets and the related distractors was varied: They were homophones (e.g., bat [animal/baseball]), or they shared word-initial segments (e.g., dog-doll) or word-final segments (e.g., ball-wall). The participants either named the objects after an extensive familiarization and practice phase or without any familiarization or practice. In all of the experiments, the mean target-naming latency was shorter in the related than in the unrelated condition, demonstrating that the phonological form of the name of the distractor picture became activated. These results are best explained within a cascaded model of lexical access—that is, under the assumption that the recognition of an object leads to the activation of its name.
  • Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14, 710-716.

    Abstract

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target-for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.
  • Meyer, A. S., Belke, E., Häcker, C., & Mortensen, L. (2007). Use of word length information in utterance planning. Journal of Memory and Language, 57, 210-231. doi:10.1016/j.jml.2006.10.005.

    Abstract

    Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and the onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name.
  • Michalareas, G., Vezoli, J., Van Pelt, S., Schoffelen, J.-M., Kennedy, H., & Fries, P. (2016). Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron, 89(2), 384-397. doi:10.1016/j.neuron.2015.12.018.

    Abstract

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and we correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral- and dorsal-stream visual areas are differentially affected by inter-areal influences in the alpha-beta band.
  • Middeldorp, C. M., Hammerschlag, A. R., Ouwens, K. G., Groen-Blokhuis, M. M., St Pourcain, B., Greven, C. U., Pappa, I., Tiesler, C. M. T., Ang, W., Nolte, I. M., Vilor-Tejedor, N., Bacelis, J., Ebejer, J. L., Zhao, H., Davies, G. E., Ehli, E. A., Evans, D. M., Fedko, I. O., Guxens, M., Hottenga, J.-J., Hudziak, J. J., Jugessur, A., Kemp, J. P., Krapohl, E., Martin, N. G., Murcia, M., Myhre, R., Ormel, J., Ring, S. M., Standl, M., Stergiakouli, E., Stoltenberg, C., Thiering, E., Timpson, N. J., Trzaskowski, M., van der Most, P. J., Wang, C., EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, Psychiatric Genomics Consortium ADHD Working Group, Nyholt, D. R., Medland, S. E., Neale, B., Jacobsson, B., Sunyer, J., Hartman, C. A., Whitehouse, A. J. O., Pennell, C. E., Heinrich, J., Plomin, R., Smith, G. D., Tiemeier, H., Posthuma, D., & Boomsma, D. I. (2016). A Genome-Wide Association Meta-Analysis of Attention-Deficit/Hyperactivity Disorder Symptoms in Population-Based Paediatric Cohorts. Journal of the American Academy of Child & Adolescent Psychiatry, 55(10), 896-905. doi:10.1016/j.jaac.2016.05.025.

    Abstract

    Objective: To elucidate the influence of common genetic variants on childhood attention-deficit/hyperactivity disorder (ADHD) symptoms, to identify genetic variants that explain its high heritability, and to investigate the genetic overlap of ADHD symptom scores with ADHD diagnosis.
    Method: Within the EArly Genetics and Lifecourse Epidemiology (EAGLE) consortium, genome-wide single nucleotide polymorphisms (SNPs) and ADHD symptom scores were available for 17,666 children (< 13 years) from nine population-based cohorts. SNP-based heritability was estimated in data from the three largest cohorts. Meta-analysis based on genome-wide association (GWA) analyses with SNPs was followed by gene-based association tests, and the overlap in results with a meta-analysis in the Psychiatric Genomics Consortium (PGC) case-control ADHD study was investigated.
    Results: SNP-based heritability ranged from 5% to 34%, indicating that variation in common genetic variants influences ADHD symptom scores. The meta-analysis did not detect genome-wide significant SNPs, but three genes, lying close to each other with SNPs in high linkage disequilibrium (LD), showed a gene-wide significant association (p values between 1.46 × 10⁻⁶ and 2.66 × 10⁻⁶). One gene, WASL, is involved in neuronal development. Both SNP- and gene-based analyses indicated overlap with the PGC meta-analysis results, with the genetic correlation estimated at 0.96.
    Conclusion: The SNP-based heritability for ADHD symptom scores indicates a polygenic architecture, and genes involved in neurite outgrowth are possibly involved. Continuous and dichotomous measures of ADHD appear to assess a genetically common phenotype. A next step is to combine data from population-based and case-control cohorts in genetic association studies to increase sample size and improve statistical power for identifying genetic variants.
  • Monaco, A., Fisher, S. E., & The SLI Consortium (SLIC) (2007). Multivariate linkage analysis of specific language impairment (SLI). Annals of Human Genetics, 71(5), 660-673. doi:10.1111/j.1469-1809.2007.00361.x.

    Abstract

    Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 on non-word repetition was equally as strong on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.

    Additional information

    Members_SLIC.doc
  • Montero-Melis, G., Jaeger, T. F., & Bylund, E. (2016). Thinking is modulated by recent linguistic experience: Second language priming affects perceived event similarity. Language Learning, 66(3), 636-665. doi:10.1111/lang.12172.

    Abstract

    Can recent second language (L2) exposure affect what we judge to be similar events? Using a priming paradigm, we manipulated whether native Swedish adult learners of L2 Spanish were primed to use path or manner during L2 descriptions of scenes depicting caused motion events (encoding phase). Subsequently, participants engaged in a nonverbal task, arranging events on the screen according to similarity (test phase). Path versus manner priming affected how participants judged event similarity during the test phase. The effects we find support the hypotheses that (a) speakers create or select ad hoc conceptual categories that are based on linguistic knowledge to carry out nonverbal tasks, and that (b) short-term, recent L2 experience can affect this ad hoc process. These findings further suggest that cognition can flexibly draw on linguistic categories that have been implicitly highlighted during recent exposure.
  • Li, S., Morley, M., Lu, M., Zhou, S., Stewart, K., French, C. A., Tucker, H. O., Fisher, S. E., & Morrisey, E. E. (2016). Foxp transcription factors suppress a non-pulmonary gene expression program to permit proper lung development. Developmental Biology, 416(2), 338-346. doi:10.1016/j.ydbio.2016.06.020.

    Abstract

    The inhibitory mechanisms that prevent gene expression programs from one tissue from being expressed in another are poorly understood. Foxp1/2/4 are forkhead transcription factors that repress gene expression and are individually important for endoderm development. We show that combined loss of all three Foxp1/2/4 family members in the developing anterior foregut endoderm leads to a loss of lung endoderm lineage commitment and subsequent development. Foxp1/2/4 deficient lungs express high levels of transcriptional regulators not normally expressed in the developing lung, including Pax2, Pax8, Pax9 and the Hoxa9-13 cluster. Ectopic expression of these transcriptional regulators is accompanied by decreased expression of lung restricted transcription factors including Nkx2-1, Sox2, and Sox9. Foxp1 binds to conserved forkhead DNA binding sites within the Hoxa9-13 cluster, indicating a direct repression mechanism. Thus, Foxp1/2/4 are essential for promoting lung endoderm development by repressing expression of non-pulmonary transcription factors.
  • Murakami, S., Verdonschot, R. G., Kataoka, M., Kakimoto, N., Shimamoto, H., & Kreiborg, S. (2016). A standardized evaluation of artefacts from metallic compounds during fast MR imaging. Dentomaxillofacial Radiology, 45(8): 20160094. doi:10.1259/dmfr.20160094.

    Abstract

    Objectives: Metallic compounds present in the oral and maxillofacial regions (OMRs) cause large artefacts during MR scanning. We quantitatively assessed these artefacts embedded within a phantom according to standards set by the American Society for Testing and Materials (ASTM).
    Methods: Seven metallic dental materials (each of which was a 10-mm³ cube embedded within a phantom) were scanned [i.e. aluminium (Al), silver alloy (Ag), type IV gold alloy (Au), gold-palladium-silver alloy (Au-Pd-Ag), titanium (Ti), nickel-chromium alloy (NC) and cobalt-chromium alloy (CC)] and compared with a reference image. Sequences included gradient echo (GRE), fast spin echo (FSE), gradient recalled acquisition in steady state (GRASS), a spoiled GRASS (SPGR), a fast SPGR (FSPGR), fast imaging employing steady state (FIESTA) and echo planar imaging (EPI; axial/sagittal planes). Artefact areas were determined according to the ASTM-F2119 standard, and artefact volumes were assessed using OsiriX MD software (Pixmeo, Geneva, Switzerland).
    Results: Tukey-Kramer post hoc tests were used for statistical comparisons. For most materials, scanning sequences elicited artefact volumes in the following (ascending) order: FSE-T1/FSE-T2 < FSPGR/SPGR < GRASS/GRE < FIESTA < EPI. For all scanning sequences, artefact volumes containing Au, Al, Ag and Au-Pd-Ag were significantly smaller than those of other materials (in which artefact volume size increased, respectively, from Ti < NC < CC). The artefact-specific shape (elicited by the cubic sample) depended on the scanning plane (i.e. a circular pattern for the axial plane and a "clover-like" pattern for the sagittal plane).
    Conclusions: The availability of standardized information on artefact size and configuration during MRI will enhance diagnosis when faced with metallic compounds in the OMR.
  • Murakami, S., Verdonschot, R. G., Kakimoto, N., Sumida, I., Fujiwara, M., Ogawa, K., & Furukawa, S. (2016). Preventing complications from high-dose rate brachytherapy when treating mobile tongue cancer via the application of a modular lead-lined spacer. PLoS One, 11(4): e0154226. doi:10.1371/journal.pone.0154226.

    Abstract

    Purpose
    To point out the advantages and drawbacks of high-dose rate brachytherapy in the treatment of mobile tongue cancer and indicate the clinical importance of modular lead-lined spacers when applying this technique to patients.
    Methods
    First, all basic steps to construct the modular spacer are shown. Second, we simulate and evaluate the dose rate reduction for a wide range of spacer configurations.
    Results
    With increasing distance to the source, absorbed doses dropped considerably. Significantly more shielding was obtained when lead was added to the spacer, and this effect was most pronounced at shorter (i.e. more clinically relevant) distances to the source.
    Conclusions
    The modular spacer represents an important addition to the planning and treatment stages of mobile tongue cancer using HDR-ISBT.

    Additional information

    tables
  • Murty, L., Otake, T., & Cutler, A. (2007). Perceptual tests of rhythmic similarity: I. Mora Rhythm. Language and Speech, 50(1), 77-99. doi:10.1177/00238309070500010401.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. The rhythmic similarity hypothesis holds that where two languages have similar rhythm, listeners of each language should segment their own and the other language similarly. Such similarity in listening was previously observed only for related languages (English-Dutch; French-Spanish). We now report three experiments in which speakers of Telugu, a Dravidian language unrelated to Japanese but similar to it in crucial aspects of rhythmic structure, heard speech in Japanese and in their own language, and Japanese listeners heard Telugu. For the Telugu listeners, detection of target sequences in Japanese speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. The same results appeared when Japanese listeners heard Telugu speech containing only codas permissible in Japanese. Telugu listeners' results with Telugu speech were mixed, but the overall pattern revealed correspondences between the response patterns of the two listener groups, as predicted by the rhythmic similarity hypothesis. Telugu and Japanese listeners appear to command similar procedures for speech segmentation, further bolstering the proposal that aspects of language phonological structure affect listeners' speech segmentation.
  • Nakayama, M., Kinoshita, S., & Verdonschot, R. G. (2016). The emergence of a phoneme-sized unit in L2 speech production: Evidence from Japanese-English bilinguals. Frontiers in Psychology, 7: 175. doi:10.3389/fpsyg.2016.00175.

    Abstract

    Recent research has revealed that the way phonology is constructed during word production differs across languages. Dutch and English native speakers are suggested to incrementally insert phonemes into a metrical frame, whereas Mandarin Chinese speakers use syllables and Japanese speakers use a unit called the mora (often a CV cluster such as "ka" or "ki"). The present study is concerned with the question of how bilinguals construct phonology in their L2 when the phonological unit size differs from the unit in their L1. Japanese-English bilinguals of varying proficiency read aloud English words preceded by masked primes that overlapped in just the onset (e.g., bark-BENCH) or the onset plus vowel corresponding to the mora-sized unit (e.g., bell-BENCH). Low-proficiency Japanese-English bilinguals showed CV priming but did not show onset priming, indicating that they use their L1 phonological unit when reading L2 English words. In contrast, high-proficiency Japanese-English bilinguals showed significant onset priming. The size of the onset priming effect was correlated with the length of time spent in English-speaking countries, which suggests that extensive exposure to L2 phonology may play a key role in the emergence of a language-specific phonological unit in L2 word production.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such "multiple-participant events" include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B. (2007). Cutting, breaking, and tearing verbs in Hindi and Tamil. Cognitive Linguistics, 18(2), 195-205. doi:10.1515/COG.2007.008.

    Abstract

    Tamil and Hindi verbs of cutting, breaking, and tearing are shown to have a high degree of overlap in their extensions. However, there are also differences in the lexicalization patterns of these verbs in the two languages with regard to their category boundaries, and the number of verb types that are available to make finer-grained distinctions. Moreover, differences in the extensional ranges of corresponding verbs in the two languages can be motivated in terms of the properties of the instrument and the theme object.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (2007). "Two's company, more is a crowd": The linguistic encoding of multiple-participant events. Linguistics, 45(3), 383-392. doi:10.1515/LING.2007.013.

    Abstract

    This introduction to a special issue of the journal Linguistics sketches the challenges that multiple-participant events pose for linguistic and psycholinguistic theories, and summarizes the articles in the volume.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored the possibility of selectively tracking referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., "the girl" in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects "deep" situation model ambiguity or "superficial" textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., "the girl" with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although the temporarily referentially ambiguous nouns elicited a frontal negative shift compared to control words, the "double bound" but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Nieuwland, M. S. (2016). Quantification, prediction, and the online impact of sentence truth-value: Evidence from event-related potentials. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 316-334. doi:10.1037/xlm0000173.

    Abstract

    Do negative quantifiers like “few” reduce people’s ability to rapidly evaluate incoming language with respect to world knowledge? Previous research has addressed this question by examining whether online measures of quantifier comprehension match the “final” interpretation reflected in verification judgments. However, these studies confounded quantifier valence with its impact on the unfolding expectations for upcoming words, yielding mixed results. In the current event-related potentials study, participants read negative and positive quantifier sentences matched on cloze probability and on truth-value (e.g., “Most/Few gardeners plant their flowers during the spring/winter for best results”). Regardless of whether participants explicitly verified the sentences or not, true-positive quantifier sentences elicited reduced N400s compared with false-positive quantifier sentences, reflecting the facilitated semantic retrieval of words that render a sentence true. No such facilitation was seen in negative quantifier sentences. However, mixed-effects model analyses (with cloze value and truth-value as continuous predictors) revealed that decreasing cloze values were associated with an interaction pattern between truth-value and quantifier, whereas increasing cloze values were associated with more similar truth-value effects regardless of quantifier. Quantifier sentences are thus understood neither always in 2 sequential stages, nor always in a partial-incremental fashion, nor always in a maximally incremental fashion. Instead, and in accordance with prediction-based views of sentence comprehension, quantifier sentence comprehension depends on incorporation of quantifier meaning into an online, knowledge-based prediction for upcoming words. Fully incremental quantifier interpretation occurs when quantifiers are incorporated into sufficiently strong online predictions for upcoming words.
  • Norcliffe, E., & Jaeger, T. F. (2016). Predicting head-marking variability in Yucatec Maya relative clause production. Language and Cognition, 8(2), 167-205. doi:10.1017/langcog.2014.39.

    Abstract

    Recent proposals hold that the cognitive systems underlying language production exhibit computational properties that facilitate communicative efficiency, i.e., an efficient trade-off between production ease and robust information transmission. We contribute to the cross-linguistic evaluation of the communicative efficiency hypothesis by investigating speakers’ preferences in the production of a typologically rare head-marking alternation that occurs in relative clause constructions in Yucatec Maya. In a sentence recall study, we find that speakers of Yucatec Maya prefer to use reduced forms of relative clause verbs when the relative clause is more contextually expected. This result is consistent with communicative efficiency and thus supports its typological generalizability. We compare two types of cue to the presence of a relative clause, pragmatic cues previously investigated in other languages and a highly predictive morphosyntactic cue specific to Yucatec. We find that Yucatec speakers’ preferences for a reduced verb form are primarily conditioned on the more informative cue. This demonstrates the role of both general principles of language production and their language-specific realizations.
  • Norris, D., & Cutler, A. (1985). Juncture detection. Linguistics, 23, 689-705.
  • Norris, D., McQueen, J. M., & Cutler, A. (2016). Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience, 31(1), 4-18. doi:10.1080/23273798.2015.1081703.

    Abstract

    Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
  • Nüse, R. (2007). Der Gebrauch und die Bedeutungen von auf, an und unter. Zeitschrift für Germanistische Linguistik, 35, 27-51.

    Abstract

    Present approaches to the semantics of the German prepositions auf, an, and unter draw on two propositions: first, that spatial prepositions in general specify a region in the surroundings of the relatum object; second, that in the case of auf, an, and unter, these regions are to be defined with concepts like the vertical and/or the topological surface (the whole surrounding exterior of an object). The present paper argues that the first proposition is right and that the second is wrong. That is, while it is true that prepositions specify regions, the regions specified by auf, an, and unter should rather be defined in terms of everyday concepts like SURFACE, SIDE and UNDERSIDE. This idea is suggested by the fact that auf, an, and unter refer to different regions in different kinds of relatum objects, and that these regions are the same as the regions called surfaces, sides and undersides. Furthermore, reading and usage preferences of auf, an, and unter can be explained by a corresponding salience of the surfaces, sides and undersides of the relatum objects in question. All in all, therefore, a close look at the use of auf, an, and unter with different classes of relatum objects reveals problems for a semantic approach that draws on concepts like the vertical, while it suggests meanings of these prepositions that refer to the surface, side and underside of an object.
  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Okbay, A., Beauchamp, J. P., Fontana, M. A., Lee, J. J., Pers, T. H., Rietveld, C. A., Turley, P., Chen, G. B., Emilsson, V., Meddens, S. F. W., Oskarsson, S., Pickrell, J. K., Thom, K., Timshel, P., De Vlaming, R., Abdellaoui, A., Ahluwalia, T. S., Bacelis, J., Baumbach, C., Bjornsdottir, G., Brandsma, J., Pina Concas, M., Derringer, J., Furlotte, N. A., Galesloot, T. E., Girotto, G., Gupta, R., Hall, L. M., Harris, S. E., Hofer, E., Horikoshi, M., Huffman, J. E., Kaasik, K., Kalafati, I. P., Karlsson, R., Kong, A., Lahti, J., Lee, S. J. V. D., DeLeeuw, C., Lind, P. A., Lindgren, K.-O., Liu, T., Mangino, M., Marten, J., Mihailov, E., Miller, M. B., Van der Most, P. J., Oldmeadow, C., Payton, A., Pervjakova, N., Peyrot, W. J., Qian, Y., Raitakari, O., Rueedi, R., Salvi, E., Schmidt, B., Schraut, K. E., Shi, J., Smith, A. V., Poot, R. A., St Pourcain, B., Teumer, A., Thorleifsson, G., Verweij, N., Vuckovic, D., Wellmann, J., Westra, H.-J., Yang, J., Zhao, W., Zhu, Z., Alizadeh, B. Z., Amin, N., Bakshi, A., Baumeister, S. E., Biino, G., Bønnelykke, K., Boyle, P. A., Campbell, H., Cappuccio, F. P., Davies, G., De Neve, J.-E., Deloukas, P., Demuth, I., Ding, J., Eibich, P., Eisele, L., Eklund, N., Evans, D. M., Faul, J. D., Feitosa, M. F., Forstner, A. J., Gandin, I., Gunnarsson, B., Halldórsson, B. V., Harris, T. B., Heath, A. C., Hocking, L. J., Holliday, E. G., Homuth, G., Horan, M. A., Hottenga, J.-J., De Jager, P. L., Joshi, P. K., Jugessur, A., Kaakinen, M. A., Kähönen, M., Kanoni, S., Keltigangas-Järvinen, L., Kiemeney, L. A. L. M., Kolcic, I., Koskinen, S., Kraja, A. T., Kroh, M., Kutalik, Z., Latvala, A., Launer, L. J., Lebreton, M. P., Levinson, D. F., Lichtenstein, P., Lichtner, P., Liewald, D. C. M., LifeLines Cohort Study, Loukola, A., Madden, P. A., Mägi, R., Mäki-Opas, T., Marioni, R. E., Marques-Vidal, P., Meddens, G. A., McMahon, G., Meisinger, C., Meitinger, T., Milaneschi, Y., Milani, L., Montgomery, G. W., Myhre, R., Nelson, C. P., Nyholt, D. R., Ollier, W. E. R., Palotie, A., Paternoster, L., Pedersen, N. L., Petrovic, K. E., Porteous, D. J., Räikkönen, K., Ring, S. M., Robino, A., Rostapshova, O., Rudan, I., Rustichini, A., Salomaa, V., Sanders, A. R., Sarin, A.-P., Schmidt, H., Scott, R. J., Smith, B. H., Smith, J. A., Staessen, J. A., Steinhagen-Thiessen, E., Strauch, K., Terracciano, A., Tobin, M. D., Ulivi, S., Vaccargiu, S., Quaye, L., Van Rooij, F. J. A., Venturini, C., Vinkhuyzen, A. A. E., Völker, U., Völzke, H., Vonk, J. M., Vozzi, D., Waage, J., Ware, E. B., Willemsen, G., Attia, J. R., Bennett, D. A., Berger, K., Bertram, L., Bisgaard, H., Boomsma, D. I., Borecki, I. B., Bültmann, U., Chabris, C. F., Cucca, F., Cusi, D., Deary, I. J., Dedoussis, G. V., Van Duijn, C. M., Eriksson, J. G., Franke, B., Franke, L., Gasparini, P., Gejman, P. V., Gieger, C., Grabe, H.-J., Gratten, J., Groenen, P. J. F., Gudnason, V., Van der Harst, P., Hayward, C., Hinds, D. A., Hoffmann, W., Hyppönen, E., Iacono, W. G., Jacobsson, B., Järvelin, M.-R., Jöckel, K.-H., Kaprio, J., Kardia, S. L. R., Lehtimäki, T., Lehrer, S. F., Magnusson, P. K. E., Martin, N. G., McGue, M., Metspalu, A., Pendleton, N., Penninx, B. W. J. H., Perola, M., Pirastu, N., Pirastu, M., Polasek, O., Posthuma, D., Power, C., Province, M. A., Samani, N. J., Schlessinger, D., Schmidt, R., Sørensen, T. I. A., Spector, T. D., Stefansson, K., Thorsteinsdottir, U., Thurik, A. R., Timpson, N. J., Tiemeier, H., Tung, J. Y., Uitterlinden, A. G., Vitart, V., Vollenweider, P., Weir, D. R., Wilson, J. F., Wright, A. F., Conley, D. C., Krueger, R. F., Davey Smith, G., Hofman, A., Laibson, D. I., Medland, S. E., Meyer, M. N., Yang, J., Johannesson, M., Visscher, P. M., Esko, T., Koellinger, P. D., Cesarini, D., & Benjamin, D. J. (2016). Genome-wide association study identifies 74 loci associated with educational attainment. Nature, 533, 539-542. doi:10.1038/nature17671.

    Abstract

    Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.
  • O'Meara, C., & Majid, A. (2016). How changing lifestyles impact Seri smellscapes and smell language. Anthropological Linguistics, 58(2), 107-131. doi:10.1353/anl.2016.0024.

    Abstract

    The sense of smell has widely been viewed as inferior to the other senses. This is reflected in the lack of treatment of olfaction in ethnographies and linguistic descriptions. We present novel data from the olfactory lexicon of Seri, a language isolate of Mexico, which sheds new light onto the possibilities for olfactory terminologies. We also present the Seri smellscape, highlighting the cultural significance of odors in Seri culture which, along with the olfactory language, is now under threat as globalization takes hold and traditional ways of life are transformed.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated into the previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Pappa, I., St Pourcain, B., Benke, K., Cavadino, A., Hakulinen, C., Nivard, M. G., Nolte, I. M., Tiesler, C. M. T., Bakermans-Kranenburg, M. J., Davies, G. E., Evans, D. M., Geoffroy, M.-C., Grallert, H., Groen-Blokhuis, M. M., Hudziak, J. J., Kemp, J. P., Keltikangas-Järvinen, L., McMahon, G., Mileva-Seitz, V. R., Motazedi, E., Power, C., Raitakari, O. T., Ring, S. M., Rivadeneira, F., Rodriguez, A., Scheet, P. A., Seppälä, I., Snieder, H., Standl, M., Thiering, E., Timpson, N. J., Veenstra, R., Velders, F. P., Whitehouse, A. J. O., Smith, G. D., Heinrich, J., Hypponen, E., Lehtimäki, T., Middeldorp, C. M., Oldehinkel, A. J., Pennell, C. E., Boomsma, D. I., & Tiemeier, H. (2016). A genome-wide approach to children's aggressive behavior: The EAGLE consortium. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 171(5), 562-572. doi:10.1002/ajmg.b.32333.

    Abstract

    Individual differences in aggressive behavior emerge in early childhood and predict persisting behavioral problems and disorders. Studies of antisocial and severe aggression in adulthood indicate substantial underlying biology. However, little attention has been given to genome-wide approaches of aggressive behavior in children. We analyzed data from nine population-based studies and assessed aggressive behavior using well-validated parent-reported questionnaires. This is the largest sample exploring children's aggressive behavior to date (N = 18,988), with measures in two developmental stages (N = 15,668 early childhood and N = 16,311 middle childhood/early adolescence). First, we estimated the additive genetic variance of children's aggressive behavior based on genome-wide SNP information, using genome-wide complex trait analysis (GCTA). Second, genetic associations within each study were assessed using a quasi-Poisson regression approach, capturing the highly right-skewed distribution of aggressive behavior. Third, we performed meta-analyses of genome-wide associations for both the total age-mixed sample and the two developmental stages. Finally, we performed a gene-based test using the summary statistics of the total sample. GCTA quantified variance tagged by common SNPs (10–54%). The meta-analysis of the total sample identified one region in chromosome 2 (2p12) at near genome-wide significance (top SNP rs11126630, P = 5.30 × 10−8). The separate meta-analyses of the two developmental stages revealed suggestive evidence of association at the same locus. The gene-based analysis indicated association of variation within AVPR1A with aggressive behavior. We conclude that common variants at 2p12 show suggestive evidence for association with childhood aggression. Replication of these initial findings is needed, and further studies should clarify its biological meaning.
  • Peeters, D., & Ozyurek, A. (2016). This and that revisited: A social and multimodal approach to spatial demonstratives. Frontiers in Psychology, 7: 222. doi:10.3389/fpsyg.2016.00222.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Petras, K., Ten Oever, S., & Jansma, B. M. (2016). The effect of distance on moral engagement: Event related potentials and alpha power are sensitive to perspective in a virtual shooting task. Frontiers in Psychology, 6: 2008. doi:10.3389/fpsyg.2015.02008.

    Abstract

    In a shooting video game we investigated whether increased distance reduces moral conflict. We measured and analyzed the event-related potential (ERP), including the N2 component, which has previously been linked to cognitive conflict from competing decision tendencies. In a modified Go/No-go task designed to trigger moral conflict, participants had to shoot suddenly appearing human-like avatars in a virtual reality scene. The scene was seen either from an ego perspective with targets appearing directly in front of the participant or from a bird's view, where targets were seen from above and more distant. To control for low-level visual features, we added a visually identical control condition, where the instruction to shoot was replaced by an instruction to detect. ERP waveforms showed differences between the two tasks as early as in the N1 time-range, with higher N1 amplitudes for the close perspective in the shoot task. Additionally, we found that pre-stimulus alpha power was significantly decreased in the ego view, compared to the bird's view, for the shoot but not for the detect task. In the N2 time window, we observed main amplitude effects for response (No-go > Go) and distance (ego > bird perspective) but no interaction with task type (shoot vs. detect). We argue that the pre-stimulus and N1 effects can be explained by reduced attention and arousal in the distance condition when people are instructed to shoot. These results indicate a reduced moral engagement for increased distance. The lack of interaction in the N2 across tasks suggests that at that time point response execution dominates. We discuss potential implications for real-life shooting situations, especially considering recent developments in drone shootings, which by definition take place from a distant view.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Poletiek, F. H., & Olfers, K. J. F. (2016). Authentication by the crowd: How lay students identify the style of a 17th century artist. CODART e-Zine, 8. Retrieved from http://ezine.codart.nl/17/issue/57/artikel/19-21-june-madrid/?id=349#!/page/3.
  • Poletiek, F. H., Fitz, H., & Bocanegra, B. R. (2016). What baboons can (not) tell us about natural language grammars. Cognition, 151, 108-112. doi:10.1016/j.cognition.2015.04.016.

    Abstract

    Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low-level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference for center-embedded sequences over other types of sequences. We argue that the baboons' response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.'s (2012) experiment shows that the baboons' behavior is driven by low-level mechanisms, it is not clear how the animal behavior reported bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) natural language syntax may indeed have been shaped by low-level mechanisms, and (2) the baboons' behavior is driven by low-level stimulus-response learning, as Rey et al. propose. But is the second evidence for the first? We discuss in what ways this study can and cannot provide evidential value for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies to understand features of the human linguistic system.
  • Poort, E. D., Warren, J. E., & Rodd, J. M. (2016). Recent experience with cognates and interlingual homographs in one language affects subsequent processing in another language. Bilingualism: Language and Cognition, 19(1), 206-212. doi:10.1017/S1366728915000395.

    Abstract

    This experiment shows that recent experience in one language influences subsequent processing of the same word-forms in a different language. Dutch–English bilinguals read Dutch sentences containing Dutch–English cognates and interlingual homographs, which were presented again 16 minutes later in isolation in an English lexical decision task. Priming produced faster responses for the cognates but slower responses for the interlingual homographs. These results show that language switching can influence bilingual speakers at the level of individual words, and require models of bilingual word recognition (e.g., BIA+) to allow access to word meanings to be modulated by recent experience.
  • Pouw, W., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Augmenting instructional animations with a body analogy to help children learn about physical systems. Frontiers in Psychology, 7: 860. doi:10.3389/fpsyg.2016.00860.

    Abstract

    We investigated whether augmenting instructional animations with a body analogy (BA) would improve 10- to 13-year-old children’s learning about class-1 levers. Children with a lower level of general math skill who learned with an instructional animation that provided a BA of the physical system, showed higher accuracy on a lever problem-solving reaction time task than children studying the instructional animation without this BA. Additionally, learning with a BA led to a higher speed–accuracy trade-off during the transfer task for children with a lower math skill, which provided additional evidence that especially this group is likely to be affected by learning with a BA. However, overall accuracy and solving speed on the transfer task was not affected by learning with or without this BA. These results suggest that providing children with a BA during animation study provides a stepping-stone for understanding mechanical principles of a physical system, which may prove useful for instructional designers. Yet, because the BA does not seem effective for all children, nor for all tasks, the degree of effectiveness of body analogies should be studied further. Future research, we conclude, should be more sensitive to the necessary degree of analogous mapping between the body and physical systems, and whether this mapping is effective for reasoning about more complex instantiations of such physical systems.
  • Pouw, W., Eielts, C., Van Gog, T., Zwaan, R. A., & Paas, F. (2016). Does (non‐)meaningful sensori‐motor engagement promote learning with animated physical systems? Mind, Brain and Education, 10(2), 91-104. doi:10.1111/mbe.12105.

    Abstract

    Previous research indicates that sensori‐motor experience with physical systems can have a positive effect on learning. However, it is not clear whether this effect is caused by mere bodily engagement or the intrinsically meaningful information that such interaction affords in performing the learning task. We investigated (N = 74), through the use of a Wii Balance Board, whether different forms of physical engagement that were either meaningfully, non‐meaningfully, or minimally related to the learning content would be beneficial (or detrimental) to learning about the workings of seesaws from instructional animations. The results were inconclusive, indicating that motoric competency on lever problem solving did not significantly differ between conditions, nor were response speed and transfer performance affected. These findings suggest that adults' implicit and explicit knowledge about physical systems is stable and not easily affected by (contradictory) sensori‐motor experiences. Implications for embodied learning are discussed.
  • Pouw, W., & Hostetter, A. B. (2016). Gesture as predictive action. Reti, Saperi, Linguaggi: Italian Journal of Cognitive Sciences, 3, 57-80. doi:10.12832/83918.

    Abstract

    Two broad approaches have dominated the literature on the production of speech-accompanying gestures. On the one hand, there are approaches that aim to explain the origin of gestures by specifying the mental processes that give rise to them. On the other, there are approaches that aim to explain the cognitive function that gestures have for the gesturer or the listener. In the present paper we aim to reconcile both approaches in one single perspective that is informed by a recent sea change in cognitive science, namely, Predictive Processing Perspectives (PPP; Clark 2013b; 2015). We start with the idea put forth by the Gesture as Simulated Action (GSA) framework (Hostetter, Alibali 2008). Under this view, the mental processes that give rise to gesture are re-enactments of sensori-motor experiences (i.e., simulated actions). We show that such anticipatory sensori-motor states and the constraints put forth by the GSA framework can be understood as top-down kinesthetic predictions that function in a broader predictive machinery as proposed by PPP. By establishing this alignment, we aim to show how gestures come to fulfill a genuine cognitive function above and beyond the mental processes that give rise to gesture.
  • Pouw, W., Mavilidi, M.-F., Van Gog, T., & Paas, F. (2016). Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity. Cognitive Processing, 17, 269-277. doi:10.1007/s10339-016-0757-6.

    Abstract

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1 (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18(1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storage of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Ramenzoni, V. C., & Liszkowski, U. (2016). The social reach: 8-month-olds reach for unobtainable objects in the presence of another person. Psychological Science, 27(9), 1278-1285. doi:10.1177/0956797616659938.

    Abstract

    Linguistic communication builds on prelinguistic communicative gestures, but the ontogenetic origins and complexities of these prelinguistic gestures are not well known. The current study tested whether 8-month-olds, who do not yet point communicatively, use instrumental actions for communicative purposes. In two experiments, infants reached for objects when another person was present and when no one else was present; the distance to the objects was varied. When alone, the infants reached for objects within their action boundaries and refrained from reaching for objects out of their action boundaries; thus, they knew about their individual action efficiency. However, when a parent (Experiment 1) or a less familiar person (Experiment 2) sat next to them, the infants selectively increased their reaching for out-of-reach objects. The findings reveal that before they communicate explicitly through pointing gestures, infants use instrumental actions with the apparent expectation that a partner will adopt and complete their goals.
  • Ravignani, A., Delgado, T., & Kirby, S. (2016). Musical evolution in the lab exhibits rhythmic universals. Nature Human Behaviour, 1: 0007. doi:10.1038/s41562-016-0007.

    Abstract

    Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals, here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm, although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences, and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution.

    Additional information

    Supplementary information Raw data
  • Ravignani, A., & Cook, P. F. (2016). The evolutionary biology of dance without frills. Current Biology, 26(19), R878-R879. doi:10.1016/j.cub.2016.07.076.

    Abstract

    Recently psychologists have taken up the question of whether dance is reliant on unique human adaptations, or whether it is rooted in neural and cognitive mechanisms shared with other species [1, 2]. In its full cultural complexity, human dance clearly has no direct analog in animal behavior. Most definitions of dance include the consistent production of movement sequences timed to an external rhythm. While not sufficient for dance, modes of auditory-motor timing, such as synchronization and entrainment, are experimentally tractable constructs that may be analyzed and compared between species. In an effort to assess the evolutionary precursors to entrainment and social features of human dance, Laland and colleagues [2] have suggested that dance may be an incidental byproduct of adaptations supporting vocal or motor imitation — referred to here as the ‘imitation and sequencing’ hypothesis. In support of this hypothesis, Laland and colleagues rely on four convergent lines of evidence drawn from behavioral and neurobiological research on dance behavior in humans and rhythmic behavior in other animals. Here, we propose a less cognitive, more parsimonious account for the evolution of dance. Our ‘timing and interaction’ hypothesis suggests that dance is scaffolded off of broadly conserved timing mechanisms allowing both cooperative and antagonistic social coordination.
  • Ravignani, A., Fitch, W. T., Hanke, F. D., Heinrich, T., Hurgitsch, B., Kotz, S. A., Scharff, C., Stoeger, A. S., & de Boer, B. (2016). What pinnipeds have to say about human speech, music, and the evolution of rhythm. Frontiers in Neuroscience, 10: 274. doi:10.3389/fnins.2016.00274.

    Abstract

    Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research for the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, showing interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions in terms of species to test and empirical data needed.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent), we could examine to what extent subjects focused on the form rather than the meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word-length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Richter, N., Tiddeman, B., & Haun, D. (2016). Social Preference in Preschoolers: Effects of Morphological Self-Similarity and Familiarity. PLoS One, 11(1): e0145443. doi:10.1371/journal.pone.0145443.

    Abstract

    Adults prefer to interact with others that are similar to themselves. Even slight facial self-resemblance can elicit trust towards strangers. Here we investigate if preschoolers at the age of 5 years already use facial self-resemblance when they make social judgments about others. We found that, in the absence of any additional knowledge about prospective peers, children preferred those who look subtly like themselves over complete strangers. Thus, subtle morphological similarities trigger social preferences well before adulthood.
  • Roberts, S. G., & Verhoef, T. (2016). Double-blind reviewing at EvoLang 11 reveals gender bias. Journal of Language Evolution, 1(2), 163-167. doi:10.1093/jole/lzw009.

    Abstract

    The impact of introducing double-blind reviewing in the most recent Evolution of Language conference is assessed. The ranking of papers is compared between EvoLang 11 (double-blind review) and EvoLang 9 and 10 (single-blind review). Main effects were found for first author gender by conference. The results mirror some findings in the literature on the effects of double-blind review, suggesting that it helps reduce a bias against female authors.

    Additional information

    SI.pdf
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. Forty-six 5- to 7-year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Robinson, E. B., St Pourcain, B., Anttila, V., Kosmicki, J. A., Bulik-Sullivan, B., Grove, J., Maller, J., Samocha, K. E., Sanders, S. J., Ripke, S., Martin, J., Hollegaard, M. V., Werge, T., Hougaard, D. M., iPSYCH-SSI-Broad Autism Group, Neale, B. M., Evans, D. M., Skuse, D., Mortensen, P. B., Borglum, A. D., Ronald, A., Smith, G. D., & Daly, M. J. (2016). Genetic risk for autism spectrum disorders and neuropsychiatric variation in the general population. Nature Genetics, 48, 552-555. doi:10.1038/ng.3529.

    Abstract

    Almost all genetic risk factors for autism spectrum disorders (ASDs) can be found in the general population, but the effects of this risk are unclear in people not ascertained for neuropsychiatric symptoms. Using several large ASD consortium and population-based resources (total n > 38,000), we find genome-wide genetic links between ASDs and typical variation in social behavior and adaptive functioning. This finding is evidenced through both LD score correlation and de novo variant analysis, indicating that multiple types of genetic risk for ASDs influence a continuum of behavioral and developmental traits, the severe tail of which can result in diagnosis with an ASD or other neuropsychiatric disorder. A continuum model should inform the design and interpretation of studies of neuropsychiatric disease biology.

    Additional information

    ng.3529-S1.pdf
  • Rodenas-Cuadrado, P., Pietrafusa, N., Francavilla, T., La Neve, A., Striano, P., & Vernes, S. C. (2016). Characterisation of CASPR2 deficiency disorder - a syndrome involving autism, epilepsy and language impairment. BMC Medical Genetics, 17: 8. doi:10.1186/s12881-016-0272-8.

    Abstract

    Background

    Heterozygous mutations in CNTNAP2 have been identified in patients with a range of complex phenotypes including intellectual disability, autism and schizophrenia. However, heterozygous CNTNAP2 mutations are also found in the normal population. Conversely, homozygous mutations are rare in patient populations and have not been found in any unaffected individuals.
    Case presentation

    We describe a consanguineous family carrying a deletion in CNTNAP2 predicted to abolish function of its protein product, CASPR2. Homozygous family members display epilepsy, facial dysmorphisms, severe intellectual disability and impaired language. We compared these patients with previously reported individuals carrying homozygous mutations in CNTNAP2 and identified a highly recognisable phenotype.
    Conclusions

    We propose that CASPR2 loss produces a syndrome involving early-onset refractory epilepsy, intellectual disability, language impairment and autistic features that can be recognized as CASPR2 deficiency disorder. Further screening for homozygous patients meeting these criteria, together with detailed phenotypic and molecular investigations, will be crucial for understanding the contribution of CNTNAP2 to normal and disrupted development.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A., Piai, V., Garrido Rodriguez, G., & Chwilla, D. J. (2016). Electrophysiology of cross-language interference and facilitation in picture naming. Cortex, 76, 1-16. doi:10.1016/j.cortex.2015.12.003.

    Abstract

    Disagreement exists about how bilingual speakers select words, in particular, whether words in another language compete, or competition is restricted to a target language, or no competition occurs. Evidence that competition occurs but is restricted to a target language comes from response time (RT) effects obtained when speakers name pictures in one language while trying to ignore distractor words in another language. Compared to unrelated distractor words, RT is longer when the picture name and distractor are semantically related, but RT is shorter when the distractor is the translation of the name of the picture in the other language. These effects suggest that distractor words from another language do not compete themselves but activate their counterparts in the target language, thereby yielding the semantic interference and translation facilitation effects. Here, we report an event-related brain potential (ERP) study testing the prediction that priming underlies both of these effects. The RTs showed semantic interference and translation facilitation effects. Moreover, the picture-word stimuli yielded an N400 response, whose amplitude was smaller on semantic and translation trials than on unrelated trials, providing evidence that interference and facilitation priming underlie the RT effects. We present the results of computer simulations showing the utility of a within-language competition account of our findings.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name-retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name-retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name-retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Rojas-Berscia, L. M. (2016). Lóxoro, traces of a contemporary Peruvian genderlect. Borealis: An International Journal of Hispanic Linguistics, 5, 157-170.

    Abstract

    Not long after the premiere of Loxoro in 2011, a short film by Claudia Llosa that presents the problems the transgender community faces in the capital of Peru, a new language variety became visible for the first time to Lima society. Lóxoro [‘lok.so.ɾo] or Húngaro [‘uŋ.ga.ɾo], as its speakers call it, is a language spoken by transsexuals and the gay community of Peru. The first clues about its existence were given by a comedian, Fernando Armas, in the mid-1990s; however, it is said to have emerged no earlier than the 1960s. Following previous work on gay languages by Baker (2002) and on language and society (cf. Halliday 1978), the main aim of the present article is to provide a first sketch of this language in its phonological, morphological, lexical and sociological aspects, based on a small corpus extracted from Llosa's film and natural dialogues from Peruvian TV news programmes, in order to classify this variety within modern sociolinguistic models (cf. Muysken 2010) and argue for its “anti-language” (cf. Halliday 1978) nature.
  • Rossi, G., & Zinken, J. (2016). Grammar and social agency: The pragmatics of impersonal deontic statements. Language, 92(4), e296-e325. doi:10.1353/lan.2016.0083.

    Abstract

    Sentence and construction types generally have more than one pragmatic function. Impersonal deontic declaratives such as ‘it is necessary to X’ assert the existence of an obligation or necessity without tying it to any particular individual. This family of statements can accomplish a range of functions, including getting another person to act, explaining or justifying the speaker’s own behavior as he or she undertakes to do something, or even justifying the speaker’s behavior while simultaneously getting another person to help. How is an impersonal deontic declarative fit for these different functions? And how do people know which function it has in a given context? We address these questions using video recordings of everyday interactions among speakers of Italian and Polish. Our analysis results in two findings. The first is that the pragmatics of impersonal deontic declaratives is systematically shaped by (i) the relative responsibility of participants for the necessary task and (ii) the speaker’s nonverbal conduct at the time of the statement. These two factors influence whether the task in question will be dealt with by another person or by the speaker, often giving the statement the force of a request or, alternatively, of an account of the speaker’s behavior. The second finding is that, although these factors systematically influence their function, impersonal deontic declaratives maintain the potential to generate more complex interactions that go beyond a simple opposition between requests and accounts, where participation in the necessary task may be shared, negotiated, or avoided. This versatility of impersonal deontic declaratives derives from their grammatical makeup: by being deontic and impersonal, they can both mobilize or legitimize an act by different participants in the speech event, while their declarative form does not constrain how they should be responded to. These features make impersonal deontic declaratives a special tool for the management of social agency.
  • Rowbotham, S. J., Holler, J., Wearden, A., & Lloyd, D. M. (2016). I see how you feel: Recipients obtain additional information from speakers’ gestures about pain. Patient Education and Counseling, 99(8), 1333-1342. doi:10.1016/j.pec.2016.03.007.

    Abstract

    Objective

    Despite the need for effective pain communication, pain is difficult to verbalise. Co-speech gestures frequently add information about pain that is not contained in the accompanying speech. We explored whether recipients can obtain additional information from gestures about the pain that is being described.
    Methods

    Participants (n = 135) viewed clips of pain descriptions under one of four conditions: 1) Speech Only; 2) Speech and Gesture; 3) Speech, Gesture and Face; and 4) Speech, Gesture and Face plus Instruction (short presentation explaining the pain information that gestures can depict). Participants provided free-text descriptions of the pain that had been described. Responses were scored for the amount of information obtained from the original clips.
    Findings

    Participants in the Instruction condition obtained the most information, while those in the Speech Only condition obtained the least (all comparisons p<.001).
    Conclusions

    Gestures produced during pain descriptions provide additional information about pain that recipients are able to pick up without detriment to their uptake of spoken information.
    Practice implications

    Healthcare professionals may benefit from instruction in gestures to enhance uptake of information about patients’ pain experiences.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rubio-Fernández, P., Cummins, C., & Tian, Y. (2016). Are single and extended metaphors processed differently? A test of two Relevance-Theoretic accounts. Journal of Pragmatics, 94, 15-28. doi:10.1016/j.pragma.2016.01.005.

    Abstract

    Carston (2010) proposes that metaphors can be processed via two different routes. In line with the standard Relevance-Theoretic account of loose use, single metaphors are interpreted by a local pragmatic process of meaning adjustment, resulting in the construction of an ad hoc concept. In extended metaphorical passages, by contrast, the reader switches to a second processing mode because the various semantic associates in the passage are mutually reinforcing, which makes the literal meaning highly activated relative to possible meaning adjustments. In the second processing mode the literal meaning of the whole passage is metarepresented and entertained as an ‘imaginary world’ and the intended figurative implications are derived later in processing. The results of three experiments comparing the interpretation of the same target expressions across literal, single-metaphorical and extended-metaphorical contexts, using self-paced reading (Experiment 1), eye-tracking during natural reading (Experiment 2) and cued recall (Experiment 3), offered initial support to Carston's distinction between the processing of single and extended metaphors. We end with a comparison between extended metaphors and allegories, and make a call for further theoretical and experimental work to increase our understanding of the similarities and differences between the interpretation and processing of different figurative uses, single and extended.
  • Rubio-Fernández, P. (2016). How redundant are redundant color adjectives? An efficiency-based analysis of color overspecification. Frontiers in Psychology, 7: 153. doi:10.3389/fpsyg.2016.00153.

    Abstract

    Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives (RCAs) are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of RCAs: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce RCAs when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). RCAs are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation.
  • Rubio-Fernández, P., & Grassmann, S. (2016). Metaphors as second labels: Difficult for preschool children? Journal of Psycholinguistic Research, 45, 931-944. doi:10.1007/s10936-015-9386-y.

    Abstract

    This study investigates the development of two cognitive abilities that are involved in metaphor comprehension: implicit analogical reasoning and assigning an unconventional label to a familiar entity (as in Romeo’s ‘Juliet is the sun’). We presented 3- and 4-year-old children with literal object-requests in a pretense setting (e.g., ‘Give me the train with the hat’). Both age-groups succeeded in a baseline condition that used building blocks as props (e.g., placed either on the front or the rear of a train engine) and only required spatial analogical reasoning to interpret the referential expression. Both age-groups performed significantly worse in the critical condition, which used familiar objects as props (e.g., small dogs as pretend hats) and required both implicit analogical reasoning and assigning second labels. Only the 4-year olds succeeded in this condition. These results offer a new perspective on young children’s difficulties with metaphor comprehension in the preschool years.
  • Rubio-Fernández, P., & Geurts, B. (2016). Don’t mention the marble! The role of attentional processes in false-belief tasks. Review of Philosophy and Psychology, 7, 835-850. doi:10.1007/s13164-015-0290-z.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • San Roque, L. (2016). 'Where' questions and their responses in Duna (Papua New Guinea). Open Linguistics, 2(1), 85-104. doi:10.1515/opli-2016-0005.

    Abstract

    Despite their central role in question formation, content interrogatives in spontaneous conversation remain relatively under-explored cross-linguistically. This paper outlines the structure of ‘where’ expressions in Duna, a language spoken in Papua New Guinea, and examines where-questions in a small Duna data set in terms of their frequency, function, and the responses they elicit. Questions that ask ‘where?’ have been identified as a useful tool in studying the language of space and place, and, in the Duna case and elsewhere, show high frequency and functional flexibility. Although where-questions formulate place as an information gap, they are not always answered through direct reference to canonical places. While some question types may be especially “socially costly” (Levinson 2012), asking ‘where’ perhaps provides a relatively innocuous way of bringing a particular event or situation into focus.
  • Sánchez-Fernández, M., & Rojas-Berscia, L. M. (2016). Vitalidad lingüística de la lengua paipai de Santa Catarina, Baja California. LIAMES, 16(1), 157-183. doi:10.20396/liames.v16i1.8646171.

    Abstract

    In the last few decades little to nothing has been said about the sociolinguistic situation of Yuman languages in Mexico. To address this lack of studies, we present a first study of linguistic vitality in Paipai as it is spoken in Santa Catarina, Baja California, Mexico. Since languages such as Mexican Spanish and Ko’ahl coexist with Paipai in the same ecology, both are part of the study as well. This first approach proceeds along two axes: on the one hand, it provides a theoretical framework that explains the sociolinguistic dynamics in the ecology of the language (Mufwene 2001); on the other hand, it presents a quantitative study based on the MSF (Maximum Shared Facility) method (Terborg & García 2011), which characterizes the state of linguistic vitality of Paipai, enriched by qualitative information collected in situ.
  • Sankoff, G., & Brown, P. (1976). The origins of syntax in discourse: A case study of Tok Pisin relatives. Language, 52(3), 631-666.

    Abstract

    The structure of relative clauses has attracted considerable attention in recent years, and a number of authors have carried out analyses of the syntax of relativization. In our investigation of syntactic structure and change in New Guinea Tok Pisin, we find that the basic processes involved in relativization have much broader discourse functions, and that relativization is only a special instance of the application of general ‘bracketing’ devices used in the organization of information. Syntactic structure, in this case, can be understood as a component of, and derivative from, discourse structure.
  • Sassenhagen, J., & Alday, P. M. (2016). A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests. Brain and Language, 162, 42-45. doi:10.1016/j.bandl.2016.08.001.

    Abstract

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a non-significant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.
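    The alternative the abstract mentions — including the nuisance variable as a regression covariate rather than "matching" on a non-significant test — can be illustrated with a minimal sketch. This example is not from the paper; the variable names, the simulated effect sizes, and the use of ordinary least squares are all illustrative assumptions. Two stimulus sets differ slightly in a nuisance variable (word length) that also affects the outcome (reaction time); putting the nuisance variable into the design matrix lets the coefficient for the condition of interest be estimated with the confound controlled for.

    ```python
    # Hypothetical illustration: control a nuisance variable by regression,
    # not by a significance test on the stimulus sets.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    group = np.repeat([0, 1], n)                        # condition of interest
    length = rng.normal(5.0, 1.0, 2 * n) + 0.3 * group  # slightly confounded nuisance
    rt = 300 + 20 * group + 15 * length + rng.normal(0, 10, 2 * n)

    # Design matrix: intercept, condition, and the nuisance covariate
    X = np.column_stack([np.ones(2 * n), group, length])
    beta, *_ = np.linalg.lstsq(X, rt, rcond=None)

    # beta[1] estimates the condition effect with word length controlled for;
    # it should recover a value near the simulated effect of 20.
    print(beta[1])
    ```

    In a real analysis one would use a regression package that also reports standard errors (e.g., a mixed-effects or OLS model), but the principle is the same: the confound enters the model instead of being "tested away".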
  • Sauppe, S. (2016). Verbal semantics drives early anticipatory eye movements during the comprehension of verb-initial sentences. Frontiers in Psychology, 7: 95. doi:10.3389/fpsyg.2016.00095.

    Abstract

    Studies on anticipatory processes during sentence comprehension often focus on the prediction of postverbal direct objects. In subject-initial languages (the target of most studies so far), however, the position in the sentence, the syntactic function, and the semantic role of arguments are often conflated. For example, in the sentence “The frog will eat the fly” the syntactic object (“fly”) is at the same time also the last word and the patient argument of the verb. It is therefore not apparent which kind of information listeners orient to for predictive processing during sentence comprehension. A visual world eye tracking study on the verb-initial language Tagalog (Austronesian) tested what kind of information listeners use to anticipate upcoming postverbal linguistic input. The grammatical structure of Tagalog allows us to test whether listeners' anticipatory gaze behavior is guided by predictions of the linear order of words, by syntactic functions (e.g., subject/object), or by semantic roles (agent/patient). Participants heard sentences of the type “Eat frog fly” or “Eat fly frog” (both meaning “The frog will eat the fly”) while looking at displays containing an agent referent (“frog”), a patient referent (“fly”) and a distractor. The verb carried morphological marking that allowed the order and syntactic function of agent and patient to be inferred. After having heard the verb, listeners fixated on the agent irrespective of its syntactic function or position in the sentence. While hearing the first-mentioned argument, listeners fixated on the corresponding referent in the display accordingly and then initiated saccades to the last-mentioned referent before it was encountered. The results indicate that listeners used verbal semantics to identify referents and their semantic roles early; information about word order or syntactic functions did not influence anticipatory gaze behavior directly after the verb was heard. In this verb-initial language, event semantics takes early precedence during the comprehension of sentences, while arguments are anticipated temporally more local to when they are encountered. The current experiment thus helps to better understand anticipation during language processing by employing linguistic structures not available in previously studied subject-initial languages.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances; each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that, in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2016). L1 and L2 distance effects in learning L3 Dutch. Language Learning, 66, 224-256. doi:10.1111/lang.12150.

    Abstract

    Many people speak more than two languages. How do languages acquired earlier affect the learnability of additional languages? We show that linguistic distances between speakers' first (L1) and second (L2) languages and their third (L3) language play a role. Larger distances from the L1 to the L3 and from the L2 to the L3 correlate with lower degrees of L3 learnability. The evidence comes from L3 Dutch speaking proficiency test scores obtained by candidates who speak a diverse set of L1s and L2s. Lexical and morphological distances between the L1s of the learners and Dutch explained 47.7% of the variation in proficiency scores. Lexical and morphological distances between the L2s of the learners and Dutch explained 32.4% of the variation in proficiency scores in multilingual learners. Cross-linguistic differences require language learners to bridge varying linguistic gaps between their L1 and L2 competences and the target language.
  • Schmidt, J., Herzog, D., Scharenborg, O., & Janse, E. (2016). Do hearing aids improve affect perception? Advances in Experimental Medicine and Biology, 894, 47-55. doi:10.1007/978-3-319-25474-6_6.

    Abstract

    Normal-hearing listeners use acoustic cues in speech to interpret a speaker's emotional state. This study investigates the effect of hearing aids on the perception of the emotion dimensions arousal (aroused/calm) and valence (positive/negative attitude) in older adults with hearing loss. More specifically, we investigate whether wearing a hearing aid improves the correlation between affect ratings and affect-related acoustic parameters. To that end, affect ratings by 23 hearing-aid users were compared for aided and unaided listening. Moreover, these ratings were compared to the ratings by an age-matched group of 22 participants with age-normal hearing. For arousal, hearing-aid users rated utterances as generally more aroused in the aided than in the unaided condition. Intensity differences were the strongest indicator of degree of arousal. Among the hearing-aid users, those with poorer hearing used additional prosodic cues (i.e., tempo and pitch) for their arousal ratings, compared to those with relatively good hearing. For valence, pitch was the only acoustic cue that was associated with valence. Neither listening condition nor hearing loss severity (differences among the hearing-aid users) influenced affect ratings or the use of affect-related acoustic parameters. Compared to the normal-hearing reference group, ratings of hearing-aid users in the aided condition did not generally differ in both emotion dimensions. However, hearing-aid users were more sensitive to intensity differences in their arousal ratings than the normal-hearing participants. We conclude that the use of hearing aids is important for the rehabilitation of affect perception and particularly influences the interpretation of arousal.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2016). Perception of emotion in conversational speech by younger and older listeners. Frontiers in Psychology, 7: 781. doi:10.3389/fpsyg.2016.00781.

    Abstract

    This study investigated whether age and/or differences in hearing sensitivity influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech. To that end, this study specifically focused on the relationship between participants' ratings of short affective utterances and the utterances' acoustic parameters (pitch, intensity, and articulation rate) known to be associated with the emotion dimensions arousal and valence. Stimuli consisted of short utterances taken from a corpus of conversational speech. In two rating tasks, younger and older adults either rated arousal or valence using a 5-point scale. Mean intensity was found to be the main cue participants used in the arousal task (i.e., higher mean intensity cueing higher levels of arousal) while mean F0 was the main cue in the valence task (i.e., higher mean F0 being interpreted as more negative). Even though there were no overall age group differences in arousal or valence ratings, compared to younger adults, older adults responded less strongly to mean intensity differences cueing arousal and responded more strongly to differences in mean F0 cueing valence. Individual hearing sensitivity among the older adults did not modify the use of mean intensity as an arousal cue. However, individual hearing sensitivity generally affected valence ratings and modified the use of mean F0. We conclude that age differences in the interpretation of mean F0 as a cue for valence are likely due to age-related hearing loss, whereas age differences in rating arousal do not seem to be driven by hearing sensitivity differences between age groups (as measured by pure-tone audiometry).
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Does syntactic alignment effectively influence how speakers are perceived by their conversation partner? PLoS One, 11(4): e0153521. doi:10.1371/journal.pone.0153521.

    Abstract

    The way we talk can influence how we are perceived by others. Whereas previous studies have started to explore the influence of social goals on syntactic alignment, in the current study, we additionally investigated whether syntactic alignment effectively influences conversation partners’ perception of the speaker. To this end, we developed a novel paradigm in which we can measure the effect of social goals on the strength of syntactic alignment for one participant (primed participant), while simultaneously obtaining usable social opinions about them from their conversation partner (the evaluator). In Study 1, participants’ desire to be rated favorably by their partner was manipulated by assigning pairs to a Control (i.e., primed participants did not know they were being evaluated) or Evaluation context (i.e., primed participants knew they were being evaluated). Surprisingly, results showed no significant difference in the strength with which primed participants aligned their syntactic choices with their partners’ choices. In a follow-up study, we used a Directed Evaluation context (i.e., primed participants knew they were being evaluated and were explicitly instructed to make a positive impression). However, again, there was no evidence supporting the hypothesis that participants’ desire to impress their partner influences syntactic alignment. With respect to the influence of syntactic alignment on perceived likeability by the evaluator, a negative relationship was reported in Study 1: the more primed participants aligned their syntactic choices with their partner, the more that partner decreased their likeability rating after the experiment. However, this effect was not replicated in the Directed Evaluation context of Study 2. In other words, our results do not support the conclusion that speakers’ desire to be liked affects how much they align their syntactic choices with their partner, nor is there convincing evidence that there is a reliable relationship between syntactic alignment and perceived likeability.

  • Schoot, L., Hagoort, P., & Segaert, K. (2016). What can we learn from a two-brain approach to verbal interaction? Neuroscience and Biobehavioral Reviews, 68, 454-459. doi:10.1016/j.neubiorev.2016.06.009.

    Abstract

    Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding.
  • Segaert, K., Wheeldon, L., & Hagoort, P. (2016). Unifying structural priming effects on syntactic choices and timing of sentence generation. Journal of Memory and Language, 91, 59-80. doi:10.1016/j.jml.2016.03.011.

    Abstract

    We investigated whether structural priming of production latencies is sensitive to the same factors known to influence persistence of structural choices: structure preference, cumulativity, and verb repetition. In two experiments, we found structural persistence only for passives (inverse preference effect), while priming effects on latencies were stronger for actives (positive preference effect). We found structural persistence for passives to be influenced by immediate primes and long-lasting cumulativity (all preceding primes) (Experiment 1), and to be boosted by verb repetition (Experiment 2). In latencies, we found that effects for actives were sensitive to long-lasting cumulativity (Experiment 1). In Experiment 2, we found priming in latencies for actives overall, while for passives the priming effects emerged as the cumulative exposure increased, but only when also aided by verb repetition. These findings are consistent with the Two-stage Competition model, an integrated model of structural priming effects for sentence choice and latency.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Selten, M., Meyer, F., Ba, W., Valles, A., Maas, D., Negwer, M., Eijsink, V. D., van Vugt, R. W. M., van Hulten, J. A., van Bakel, N. H. M., Roosen, J., van der Linden, R., Schubert, D., Verheij, M. M. M., Kasri, N. N., & Martens, G. J. M. (2016). Increased GABAB receptor signaling in a rat model for schizophrenia. Scientific Reports, 6: 34240. doi:10.1038/srep34240.

    Abstract

    Schizophrenia is a complex disorder that affects cognitive function and has been linked, both in patients and animal models, to dysfunction of the GABAergic system. However, the pathophysiological consequences of this dysfunction are not well understood. Here, we examined the GABAergic system in an animal model displaying schizophrenia-relevant features, the apomorphine-susceptible (APO-SUS) rat, and its phenotypic counterpart, the apomorphine-unsusceptible (APO-UNSUS) rat, at postnatal day 20-22. We found changes in the expression of the GABA-synthesizing enzyme GAD67 specifically in the prelimbic, but not the infralimbic, region of the medial prefrontal cortex (mPFC), indicative of reduced inhibitory function in this region in APO-SUS rats. While we did not observe changes in basal synaptic transmission onto LII/III pyramidal cells in the mPFC of APO-SUS compared to APO-UNSUS rats, we report reduced paired-pulse ratios at longer inter-stimulus intervals. The GABA(B) receptor antagonist CGP 55845 abolished this reduction, indicating that the decreased paired-pulse ratio was caused by increased GABA(B) signaling. Consistently, we find an increased expression of the GABA(B1) receptor subunit in APO-SUS rats. Our data provide physiological evidence for increased presynaptic GABA(B) signaling in the mPFC of APO-SUS rats, further supporting an important role for the GABAergic system in the pathophysiology of schizophrenia.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.