Publications

  • Abbot-Smith, K., Chang, F., Rowland, C. F., Ferguson, H., & Pine, J. (2017). Do two and three year old children use an incremental first-NP-as-agent bias to process active transitive and passive sentences?: A permutation analysis. PLoS One, 12(10): e0186129. doi:10.1371/journal.pone.0186129.

    Abstract

    We used eye-tracking to investigate if and when children show an incremental bias to assume that the first noun phrase in a sentence is the agent (first-NP-as-agent bias) while processing the meaning of English active and passive transitive sentences. We also investigated whether children can override this bias to successfully distinguish active from passive sentences, after processing the remainder of the sentence frame. For this second question we used eye-tracking (Study 1) and forced-choice pointing (Study 2). For both studies, we used a paradigm in which participants simultaneously saw two novel actions with reversed agent-patient relations while listening to active and passive sentences. We compared English-speaking 25-month-olds and 41-month-olds in between-subjects sentence structure conditions (Active Transitive Condition vs. Passive Condition). A permutation analysis found that both age groups showed a bias to incrementally map the first noun in a sentence onto an agent role. Regarding the second question, 25-month-olds showed some evidence of distinguishing the two structures in the eye-tracking study. However, the 25-month-olds did not distinguish active from passive sentences in the forced choice pointing task. In contrast, the 41-month-old children did reanalyse their initial first-NP-as-agent bias to the extent that they clearly distinguished between active and passive sentences both in the eye-tracking data and in the pointing task. The results are discussed in relation to the development of syntactic (re)parsing.

    Additional information

    Data available from OSF
  • Abbott, M. J., Angele, B., Ahn, D., & Rayner, K. (2015). Skipping syntactically illegal the previews: The role of predictability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1703-1714. doi:10.1037/xlm0000142.

    Abstract

    Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions.
  • Acheson, D. J., Wells, J. B., & MacDonald, M. C. (2008). New and updated tests of print exposure and reading abilities in college students. Behavior Research Methods, 40(1), 278-289. doi:10.3758/BRM.40.1.278.

    Abstract

    The relationship between print exposure and measures of reading skill was examined in college students (N=99, 58 female; mean age=20.3 years). Print exposure was measured with several new self-reports of reading and writing habits, as well as updated versions of the Author Recognition Test and the Magazine Recognition Test (Stanovich & West, 1989). Participants completed a sentence comprehension task with syntactically complex sentences, and reading times and comprehension accuracy were measured. An additional measure of reading skill was provided by participants’ scores on the verbal portions of the ACT, a standardized achievement test. Higher levels of print exposure were associated with higher sentence processing abilities and superior verbal ACT performance. The relative merits of different print exposure assessments are discussed.
  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Acheson, D. J., Postle, B. R., & MacDonald, M. C. (2010). The interaction of concreteness and phonological similarity in verbal working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 17-36. doi:10.1037/a0017679.

    Abstract

    Although phonological representations have been a primary focus of verbal working memory research, lexical-semantic manipulations also influence performance. In the present study, the authors investigated whether a classic phenomenon in verbal working memory, the phonological similarity effect (PSE), is modulated by a lexical-semantic variable, word concreteness. Phonological overlap and concreteness were factorially manipulated in each of four experiments across which presentation modality (Experiments 1 and 2: visual presentation; Experiments 3 and 4: auditory presentation) and concurrent articulation (present in Experiments 2 and 4) were manipulated. In addition to main effects of each variable, results show a Phonological Overlap x Concreteness interaction whereby the magnitude of the PSE is greater for concrete word lists relative to abstract word lists. This effect is driven by superior item memory for nonoverlapping, concrete lists and is robust to the modality of presentation and concurrent articulation. These results demonstrate that in verbal working memory tasks, there are multiple routes to the phonological form of a word and that maintenance and retrieval occur over more than just a phonological level.
  • Acuna-Hidalgo, R., Deriziotis, P., Steehouwer, M., Gilissen, C., Graham, S. A., Van Dam, S., Hoover-Fong, J., Telegrafi, A. B., Destree, A., Smigiel, R., Lambie, L. A., Kayserili, H., Altunoglu, U., Lapi, E., Uzielli, M. L., Aracena, M., Nur, B. G., Mihci, E., Moreira, L. M. A., Ferreira, V. B., Horovitz, D. D. G., Da Rocha, K. M., Jezela-Stanek, A., Brooks, A. S., Reutter, H., Cohen, J. S., Fatemi, A., Smitka, M., Grebe, T. A., Di Donato, N., Deshpande, C., Vandersteen, A., Marques Lourenço, C., Dufke, A., Rossier, E., Andre, G., Baumer, A., Spencer, C., McGaughran, J., Franke, L., Veltman, J. A., De Vries, B. B. A., Schinzel, A., Fisher, S. E., Hoischen, A., & Van Bon, B. W. (2017). Overlapping SETBP1 gain-of-function mutations in Schinzel-Giedion syndrome and hematologic malignancies. PLoS Genetics, 13: e1006683. doi:10.1371/journal.pgen.1006683.

    Abstract

    Schinzel-Giedion syndrome (SGS) is a rare developmental disorder characterized by multiple malformations, severe neurological alterations and increased risk of malignancy. SGS is caused by de novo germline mutations clustering to a 12bp hotspot in exon 4 of SETBP1. Mutations in this hotspot disrupt a degron, a signal for the regulation of protein degradation, and lead to the accumulation of SETBP1 protein. Overlapping SETBP1 hotspot mutations have been observed recurrently as somatic events in leukemia. We collected clinical information of 47 SGS patients (including 26 novel cases) with germline SETBP1 mutations and of four individuals with a milder phenotype caused by de novo germline mutations adjacent to the SETBP1 hotspot. Different mutations within and around the SETBP1 hotspot have varying effects on SETBP1 stability and protein levels in vitro and in in silico modeling. Substitutions in SETBP1 residue I871 result in a weak increase in protein levels and mutations affecting this residue are significantly more frequent in SGS than in leukemia. On the other hand, substitutions in residue D868 lead to the largest increase in protein levels. Individuals with germline mutations affecting D868 have enhanced cell proliferation in vitro and higher incidence of cancer compared to patients with other germline SETBP1 mutations. Our findings substantiate that, despite their overlap, somatic SETBP1 mutations driving malignancy are more disruptive to the degron than germline SETBP1 mutations causing SGS. Additionally, this suggests that the functional threshold for the development of cancer driven by the disruption of the SETBP1 degron is higher than for the alteration in prenatal development in SGS. Drawing on previous studies of somatic SETBP1 mutations in leukemia, our results reveal a genotype-phenotype correlation in germline SETBP1 mutations spanning a molecular, cellular and clinical phenotype.
  • Adank, P., & Janse, E. (2010). Comprehension of a novel accent by young and older listeners. Psychology and Aging, 25(3), 736-740. doi:10.1037/a0020054.

    Abstract

    The authors investigated perceptual learning of a novel accent in young and older listeners through measuring speech reception thresholds (SRTs) using speech materials spoken in a novel—unfamiliar—accent. Younger and older listeners adapted to this accent, but older listeners showed poorer comprehension of the accent. Furthermore, perceptual learning differed across groups: The older listeners stopped learning after the first block, whereas younger listeners showed further improvement with longer exposure. Among the older participants, hearing acuity predicted the SRT as well as the effect of the novel accent on SRT. Finally, a measure of executive function predicted the impact of accent on SRT.
  • Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language comprehension. Psychological Science, 21, 1903-1909. doi:10.1177/0956797610389192.

    Abstract

    Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker’s accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.
  • Agrawal, P., Bhaya Nair, R., Narasimhan, B., Chaudhary, N., & Keller, H. (2008). The development of facial expressions of emotion in Indian culture [meeting abstract]. International Journal of Psychology, 43(3/4), 82.

    Abstract

    The development of emotions in the offspring of any species, especially humans, is one of the most important and complex processes necessary to ensure their survival. Although other nonverbal expressions of emotion such as body movements provide valuable clues, facial expressions in human infants are arguably the most crucial component in tracking emotional responses. Tracing the developmental path of facial expressions is thus the aim of this longitudinal research study which explores mother-child interactions from infancy to pre-school in Indian culture via video-taped datasets recorded as part of multiple projects spanning Indian universities (IITD, JNU, DU), Osnabruck University and MPI-Netherlands.
  • Ahlsson, F., Åkerud, H., Schijven, D., Olivier, J., & Sundström-Poromaa, I. (2015). Gene expression in placentas from nondiabetic women giving birth to large for gestational age infants. Reproductive Sciences, 22(10), 1281-1288. doi:10.1177/1933719115578928.

    Abstract

    Gestational diabetes, obesity, and excessive weight gain are known independent risk factors for the birth of a large for gestational age (LGA) infant. However, only 1 in 10 infants born LGA is born to a mother with diabetes or obesity. Thus, the aim of the present study was to compare placental gene expression between healthy, nondiabetic mothers (n = 22) giving birth to LGA infants and body mass index-matched mothers (n = 24) giving birth to appropriate for gestational age infants. In the whole gene expression analysis, only 29 genes were found to be differentially expressed in LGA placentas. Top upregulated genes included insulin-like growth factor binding protein 1, aminolevulinate δ synthase 2, and prolactin, whereas top downregulated genes comprised leptin, gametocyte-specific factor 1, and collagen type XVII α 1. Two enriched gene networks were identified, namely, (1) lipid metabolism, small molecule biochemistry, and organismal development and (2) cellular development, cellular growth, proliferation, and tumor morphology.
  • Ahrenholz, B., Bredel, U., Klein, W., Rost-Roth, M., & Skiba, R. (Eds.). (2008). Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar. Frankfurt am Main: Lang.
  • Alday, P. M. (2015). Be Careful When Assuming the Obvious: Commentary on “The Placement of the Head that Minimizes Online Memory: A Complex Systems Approach”. Language Dynamics and Change, 5(1), 138-146. doi:10.1163/22105832-00501008.

    Abstract

    Ferrer-i-Cancho (this volume) presents a mathematical model of both the synchronic and diachronic nature of word order based on the assumption that memory costs are a never decreasing function of distance and a few very general linguistic assumptions. However, even these minimal and seemingly obvious assumptions are not as safe as they appear in light of recent typological and psycholinguistic evidence. The interaction of word order and memory has further depths to be explored.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2015). Discovering prominence and its role in language processing: An individual (differences) approach. Linguistics Vanguard, 1(1), 201-213. doi:10.1515/lingvan-2014-1013.

    Abstract

    It has been suggested that, during real time language comprehension, the human language processing system attempts to identify the argument primarily responsible for the state of affairs (the “actor”) as quickly and unambiguously as possible. However, previous work on a prominence (e.g. animacy, definiteness, case marking) based heuristic for actor identification has suffered from underspecification of the relationship between different cue hierarchies. Qualitative work has yielded a partial ordering of many features (e.g. MacWhinney et al. 1984), but a precise quantification has remained elusive due to difficulties in exploring the full feature space in a particular language. Feature pairs tend to correlate strongly in individual languages for semantic-pragmatic reasons (e.g., animate arguments tend to be actors and actors tend to be morphosyntactically privileged), and it is thus difficult to create acceptable stimuli for a fully factorial design even for binary features. Moreover, the exponential function grows extremely rapidly and a fully crossed factorial design covering the entire feature space would be prohibitively long for a purely within-subjects design. Here, we demonstrate the feasibility of parameter estimation in a short experiment. We are able to estimate parameters at a single subject level for the parameters animacy, case and number. This opens the door for research into individual differences and population variation. Moreover, the framework we introduce here can be used in the field to measure more “exotic” languages and populations, even with small sample sizes. Finally, pooled single-subject results are used to reduce the number of free parameters in previous work based on the extended Argument Dependency Model (Bornkessel-Schlesewsky and Schlesewsky 2006, 2009, 2013, in press; Alday et al. 2014).
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2017). Commentary on Sanborn and Chater: Posterior Modes Are Attractor Basins. Trends in Cognitive Sciences, 21(7), 491-492. doi:10.1016/j.tics.2017.04.003.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2017). Electrophysiology reveals the neural dynamics of naturalistic auditory language processing: Event-related potentials reflect continuous model update. eNeuro, 4(6): e0311. doi:10.1523/ENEURO.0311-16.2017.

    Abstract

    The recent trend away from ANOVA-based analyses places experimental investigations into the neurobiology of cognition in more naturalistic and ecologically valid designs within reach. Using mixed-effects models for epoch-based regression, we demonstrate the feasibility of examining event-related potentials (ERPs), and in particular the N400, to study the neural dynamics of human auditory language processing in a naturalistic setting. Despite the large variability between trials during naturalistic stimulation, we replicated previous findings from the literature on the effects of frequency, animacy, and word order, and found previously unexplored interaction effects. This suggests a new perspective on ERPs, namely as a continuous modulation reflecting continuous stimulation instead of a series of discrete and essentially sequential processes locked to discrete events.

    Significance Statement: Laboratory experiments on language often lack ecological validity. In addition to the intrusive laboratory equipment, the language used is often highly constrained in an attempt to control possible confounds. More recent research with naturalistic stimuli has been largely confined to fMRI, where the low temporal resolution helps to smooth over the uneven finer structure of natural language use. Here, we demonstrate the feasibility of using naturalistic stimuli with temporally sensitive methods such as EEG and MEG using modern computational approaches and show how this provides new insights into the nature of ERP components and the temporal dynamics of language as a sensory and cognitive process. The full complexity of naturalistic language use cannot be captured by carefully controlled designs alone.
  • Alday, P. M. (2015). Quantity and Quality: Not a Zero-Sum Game: A computational and neurocognitive examination of human language processing. PhD Thesis, Philipps-Universität Marburg, Marburg.
  • Alferink, I. (2015). Dimensions of convergence in bilingual speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Alhama, R. G., Scha, R., & Zuidema, W. (2015). How should we evaluate models of segmentation in artificial language learning? In N. A. Taatgen, M. K. van Vugt, J. P. Borst, & K. Mehlhorn (Eds.), Proceedings of ICCM 2015 (pp. 172-173). Groningen: University of Groningen.

    Abstract

    One of the challenges that infants have to solve when learning their native language is to identify the words in a continuous speech stream. Some of the experiments in Artificial Grammar Learning (Saffran, Newport, and Aslin (1996); Saffran, Aslin, and Newport (1996); Aslin, Saffran, and Newport (1998) and many more) investigate this ability. In these experiments, subjects are exposed to an artificial speech stream that contains certain regularities. Adult participants are typically tested with 2-alternative Forced Choice Tests (2AFC) in which they have to choose between a word and another sequence (typically a partword, a sequence resulting from misplacing boundaries).
  • Alhama, R. G., & Zuidema, W. (2017). Segmentation as Retention and Recognition: the R&R model. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1531-1536). Austin, TX: Cognitive Science Society.

    Abstract

    We present the Retention and Recognition model (R&R), a probabilistic exemplar model that accounts for segmentation in Artificial Language Learning experiments. We show that R&R provides an excellent fit to human responses in three segmentation experiments with adults (Frank et al., 2010), outperforming existing models. Additionally, we analyze the results of the simulations and propose alternative explanations for the experimental findings.
  • Alibali, M. W., Flevares, L. M., & Goldin-Meadow, S. (1997). Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology, 89(1), 183-193. doi:10.1037/0022-0663.89.1.183.

    Abstract

    Children's gestures can reveal important information about their problem-solving strategies. This study investigated whether the information children express only in gesture is accessible to adults not trained in gesture coding. Twenty teachers and 20 undergraduates viewed videotaped vignettes of 12 children explaining their solutions to equations. Six children expressed the same strategy in speech and gesture, and 6 expressed different strategies. After each vignette, adults described the child's reasoning. For children who expressed different strategies in speech and gesture, both teachers and undergraduates frequently described strategies that children had not expressed in speech. These additional strategies could often be traced to the children's gestures. Sensitivity to gesture was comparable for teachers and undergraduates. Thus, even without training, adults glean information, not only from children's words but also from their hands.
  • Allen, S. E. M. (1997). Towards a discourse-pragmatic explanation for the subject-object asymmetry in early null arguments. In NET-Bulletin 1997 (pp. 1-16). Amsterdam, The Netherlands: Instituut voor Functioneel Onderzoek van Taal en Taalgebruik (IFOTT).
  • Allerhand, M., Butterfield, S., Cutler, A., & Patterson, R. (1992). Assessing syllable strength via an auditory model. In Proceedings of the Institute of Acoustics: Vol. 14 Part 6 (pp. 297-304). St. Albans, Herts: Institute of Acoustics.
  • Altvater-Mackensen, N. (2010). Do manners matter? Asymmetries in the acquisition of manner of articulation features. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). Authors' response [The ubiquity of frequency effects in first language acquisition]. Journal of Child Language, 42(2), 316-322. doi:10.1017/S0305000914000841.

    Abstract

    Our target paper argued for the ubiquity of frequency effects in acquisition, and that any comprehensive theory must take into account the multiplicity of ways that frequently occurring and co-occurring linguistic units affect the acquisition process. The commentaries on the paper provide a largely unanimous endorsement of this position, but raise additional issues likely to frame further discussion and theoretical development. Specifically, while most commentators did not deny the importance of frequency effects, all saw this as the tip of the theoretical iceberg. In this short response we discuss common themes raised in the commentaries, focusing on the broader issue of what frequency effects mean for language acquisition.

    Additional information

    Target paper
  • Ambridge, B., Rowland, C. F., & Pine, J. M. (2008). Is structure dependence an innate constraint? New experimental evidence from children's complex-question production. Cognitive Science, 32(1), 222-255. doi:10.1080/03640210701703766.

    Abstract

    According to Crain and Nakayama (1987), when forming complex yes/no questions, children do not make errors such as Is the boy who smoking is crazy? because they have innate knowledge of structure dependence and so will not move the auxiliary from the relative clause. However, simple recurrent networks are also able to avoid such errors, on the basis of surface distributional properties of the input (Lewis & Elman, 2001; Reali & Christiansen, 2005). Two new elicited production studies revealed that (a) children occasionally produce structure-dependence errors and (b) the pattern of children's auxiliary-doubling errors (Is the boy who is smoking is crazy?) suggests a sensitivity to surface co-occurrence patterns in the input. This article concludes that current data do not provide any support for the claim that structure dependence is an innate constraint, and that it is possible that children form a structure-dependent grammar on the basis of exposure to input that exhibits this property.
  • Ambridge, B., & Rowland, C. F. (2013). Experimental methods in studying child language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(2), 149-168. doi:10.1002/wcs.1215.

    Abstract

    This article reviews some of the most widely used methods for studying children's language acquisition, including (1) spontaneous/naturalistic, diary, parental report data, (2) production methods (elicited production, repetition/elicited imitation, syntactic priming/weird word order), (3) comprehension methods (act-out, pointing, intermodal preferential looking, looking while listening, conditioned head turn preference procedure, functional neuroimaging) and (4) judgment methods (grammaticality/acceptability judgments, yes-no/truth-value judgments). The review outlines the types of studies and age-groups to which each method is most suited, as well as the advantages and disadvantages of each. We conclude by summarising the particular methodological considerations that apply to each paradigm and to experimental design more generally. These include (1) choosing an age-appropriate task that makes communicative sense, (2) motivating children to co-operate, (3) choosing a between-/within-subjects design, (4) the use of novel items (e.g., novel verbs), (5) fillers, (6) blocked, counterbalanced and random presentation, (7) the appropriate number of trials and participants, (8) drop-out rates, (9) the importance of control conditions, (10) choosing a sensitive dependent measure, (11) classification of responses, and (12) using an appropriate statistical test.
  • Ambridge, B., Pine, J. M., Rowland, C. F., & Young, C. R. (2008). The effect of verb semantic class and verb frequency (entrenchment) on children’s and adults’ graded judgements of argument-structure overgeneralization errors. Cognition, 106(1), 87-129. doi:10.1016/j.cognition.2006.12.015.

    Abstract

    Participants (aged 5–6 yrs, 9–10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). Learnability and cognition: the acquisition of argument structure. Cambridge, MA: MIT Press]: “directed motion” (fall, tumble), “going out of existence” (disappear, vanish) and “semivoluntary expression of emotion” (laugh, giggle). In support of Pinker’s semantic verb class hypothesis, participants’ preference for grammatical over overgeneralized uses of novel (and English) verbs increased between 5–6 yrs and 9–10 yrs, and was greatest for the latter class, which is associated with the lowest degree of direct external causation (the prototypical meaning of the transitive causative construction). In support of Braine and Brooks’s [Braine, M.D.S., & Brooks, P.J. (1995). Verb argument structure and the problem of avoiding an overgeneral grammar. In M. Tomasello & W. E. Merriman (Eds.), Beyond names for things: Young children’s acquisition of verbs (pp. 352–376). Hillsdale, NJ: Erlbaum] entrenchment hypothesis, all participants showed the greatest preference for grammatical over ungrammatical uses of high frequency verbs, with this preference smaller for low frequency verbs, and smaller again for novel verbs. We conclude that both the formation of semantic verb classes and entrenchment play a role in children’s retreat from argument-structure overgeneralization errors.
  • Ambridge, B., Bidgood, A., Twomey, K. E., Pine, J. M., Rowland, C. F., & Freudenthal, D. (2015). Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization. PLoS One, 10(4): e0123723. doi:10.1371/journal.pone.0123723.

    Abstract

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Chang, F., & Bidgood, A. (2013). The retreat from overgeneralization in child language acquisition: Word learning, morphology, and verb argument structure. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 47-62. doi:10.1002/wcs.1207.

    Abstract

    This review investigates empirical evidence for different theoretical proposals regarding the retreat from overgeneralization errors in three domains: word learning (e.g., *doggie to refer to all animals), morphology [e.g., *spyer, *cooker (one who spies/cooks), *unhate, *unsqueeze, *sitted; *drawed], and verb argument structure [e.g., *Don't giggle me (c.f. Don't make me giggle); *Don't say me that (c.f. Don't say that to me)]. The evidence reviewed provides support for three proposals. First, in support of the pre-emption hypothesis, the acquisition of competing forms that express the desired meaning (e.g., spy for *spyer, sat for *sitted, and Don't make me giggle for *Don't giggle me) appears to block errors. Second, in support of the entrenchment hypothesis, repeated occurrence of particular items in particular constructions (e.g., giggle in the intransitive construction) appears to contribute to an ever strengthening probabilistic inference that non-attested uses (e.g., *Don't giggle me) are ungrammatical for adult speakers. That is, both the rated acceptability and production probability of particular errors decline with increasing frequency of pre-empting and entrenching forms in the input. Third, learners appear to acquire semantic and morphophonological constraints on particular constructions, conceptualized as properties of slots in constructions [e.g., the (VERB) slot in the morphological un-(VERB) construction or the transitive-causative (SUBJECT) (VERB) (OBJECT) argument-structure construction]. Errors occur as children acquire the fine-grained semantic and morphophonological properties of particular items and construction slots, and so become increasingly reluctant to use items in slots with which they are incompatible. Findings also suggest some role for adult feedback and conventionality; the principle that, for many given meanings, there is a conventional form that is used by all members of the speech community.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). The ubiquity of frequency effects in first language acquisition. Journal of Child Language, 42(2), 239-273. doi:10.1017/S030500091400049X.

    Abstract

    This review article presents evidence for the claim that frequency effects are pervasive in children's first language acquisition, and hence constitute a phenomenon that any successful account must explain. The article is organized around four key domains of research: children's acquisition of single words, inflectional morphology, simple syntactic constructions, and more advanced constructions. In presenting this evidence, we develop five theses. (i) There exist different types of frequency effect, from effects at the level of concrete lexical strings to effects at the level of abstract cues to thematic-role assignment, as well as effects of both token and type, and absolute and relative, frequency. High-frequency forms are (ii) early acquired and (iii) prevent errors in contexts where they are the target, but also (iv) cause errors in contexts in which a competing lower-frequency form is the target. (v) Frequency effects interact with other factors (e.g. serial position, utterance length), and the patterning of these interactions is generally informative with regard to the nature of the learning mechanism. We conclude by arguing that any successful account of language acquisition, from whatever theoretical standpoint, must be frequency sensitive to the extent that it can explain the effects documented in this review, and outline some types of account that do and do not meet this criterion.

    Additional information

    Author's response
  • Ameka, F. K. (2008). Aspect and modality in Ewe: A survey. In F. K. Ameka, & M. E. Kropp Dakubu (Eds.), Aspect and modality in Kwa languages (pp. 135-194). Amsterdam: Benjamins.
  • Ameka, F. K., & Kropp Dakubu, M. E. (Eds.). (2008). Aspect and modality in Kwa languages. Amsterdam: Benjamins.

    Abstract

    This book explores the thesis that in the Kwa languages of West Africa, aspect and modality are more central to the grammar of the verb than tense. Where tense marking has emerged it is invariably in the expression of the future, and therefore concerned with the impending actualization or potentiality of an event, hence with modality, rather than the purely temporal sequencing associated with tense. The primary grammatical contrasts are perfective versus imperfective. The main languages discussed are Akan, Dangme, Ewe, Ga and Tuwuli while Nzema-Ahanta, Likpe and Eastern Gbe are also mentioned. Knowledge about these languages has deepened considerably during the past decade or so and ideas about their structure have changed. The volume therefore presents novel analyses of grammatical forms like the so-called S-Aux-O-V-Other or “future” constructions, and provides empirical data for theorizing about aspect and modality. It should be of considerable interest to Africanist linguists, typologists, and creolists interested in substrate issues.
  • Ameka, F. K. (2008). He died old dying to be dead right: Transitivity and semantic shifts of 'die' in Ewe in crosslinguistic perspective. In M. Bowerman, & P. Brown (Eds.), Crosslinguistic perspectives on argument structure: Implications for learnability (pp. 231-254). Mahwah, NJ: Erlbaum.

    Abstract

    This paper examines some of the claims of the Unaccusativity hypothesis. It shows that the supposedly unaccusative ‘die’ verb in Ewe (Kwa), kú, can appear as both a one-place and a two-place predicate and has three senses which do not correlate with the number of surface arguments of the verb. For instance, the same sense is involved in both a one-place construction (e.g. she died) and a two-place cognate object construction (she died a wicked death). By contrast, different senses are expressed by formally identical two-place constructions, e.g. ‘the garment die dirt’ (= the garment is dead dirty; intensity) vs. ‘he died ear (to the matter)’ (= he does not want to hear; negative desiderative). The paper explores the learnability problems posed by the non-predictability of the different senses of Ewe ‘die’ from its syntactic frame and suggests that since the meanings are indirectly related to the properties of the event participants, such as animacy, a learner must pay close attention to the properties of the verb’s participants. The paper concludes by demonstrating that the meaning shifts observed in Ewe are also attested in other typologically and genetically unrelated languages such as Japanese, Arrernte (Australian), Oluta (Mixean), Dutch and English.
  • Ameka, F. K., & Kropp Dakubu, M. E. (2008). Imperfective constructions: Progressive and prospective in Ewe and Dangme. In F. K. Ameka, & M. E. Kropp Dakubu (Eds.), Aspect and modality in Kwa languages (pp. 215-289). Amsterdam: Benjamins.
  • Ameka, F. K. (2010). Information packaging constructions in Kwa: Micro-variation and typology. In E. O. Aboh, & J. Essegbey (Eds.), Topics in Kwa syntax (pp. 141-176). Dordrecht: Springer.

    Abstract

    Kwa languages such as Akye, Akan, Ewe, Ga, Likpe, Yoruba etc. are not prototypically “topic-prominent” like Chinese nor “focus-prominent” like Somali, yet they have dedicated structural positions in the clause, as well as morphological markers for signalling the information status of the component parts of information units. They could thus be seen as “discourse configurational languages” (Kiss 1995). In this chapter, I first argue for distinct positions in the left periphery of the clause in these languages for scene-setting topics, contrastive topics and focus. I then describe the morpho-syntactic properties of various information packaging constructions and the variations that we find across the languages in this domain.
  • Ameka, F. K. (1992). Interjections: The universal yet neglected part of speech. Journal of Pragmatics, 18(2/3), 101-118. doi:10.1016/0378-2166(92)90048-G.
  • Ameka, F. K., & Kropp Dakubu, M. E. (2008). Introduction. In F. K. Ameka, & M. E. Kropp Dakubu (Eds.), Aspect and modality in Kwa languages (pp. 1-7). Amsterdam: Benjamins.
  • Ameka, F. K., & Essegbey, J. (2013). Serialising languages: Satellite-framed, verb-framed or neither. Ghana Journal of Linguistics, 2(1), 19-38.

    Abstract

    The diversity in the coding of the core schema of motion, i.e., Path, has led to a traditional typology of languages into verb-framed and satellite-framed languages. In the former, Path is encoded in verbs, and in the latter it is encoded in non-verb elements that function as sisters to co-event expressing verbs such as manner verbs. Verb serializing languages pose a challenge to this typology as they express Path as well as the Co-event of manner in finite verbs that together function as a single predicate in a translational motion clause. We argue that these languages do not fit in the typology and constitute a type of their own. We draw on data from Akan and Frog story narrations in Ewe, a Kwa language, and Sranan, a Caribbean Creole with Gbe substrate, to show that in terms of discourse properties verb serializing languages behave like verb-framed languages with respect to some properties and like satellite-framed languages in terms of others. This study fed into the revision of the typology and such languages are now said to be equipollently-framed languages.
  • Ameka, F. K. (2013). Possessive constructions in Likpe (Sɛkpɛlé). In A. Aikhenvald, & R. Dixon (Eds.), Possession and ownership: A crosslinguistic typology (pp. 224-242). Oxford: Oxford University Press.
  • Ameka, F. K. (1992). The meaning of phatic and conative interjections. Journal of Pragmatics, 18(2/3), 245-271. doi:10.1016/0378-2166(92)90054-F.

    Abstract

    The purpose of this paper is to investigate the meanings of the members of two subclasses of interjections in Ewe: the conative/volitive which are directed at an auditor, and the phatic which are used in the maintenance of social and communicative contact. It is demonstrated that interjections like other linguistic signs have meanings which can be rigorously stated. In addition, the paper explores the differences and similarities between the semantic structures of interjections on one hand and formulaic words on the other. This is done through a comparison of the semantics and pragmatics of an interjection and a formulaic word which are used for welcoming people in Ewe. It is contended that formulaic words are speech acts qua speech acts while interjections are not fully fledged speech acts because they lack illocutionary dictum in their semantic structure.
  • Ameka, F. K. (2017). The Uselessness of the Useful: Language Standardisation and Variation in Multilingual Context. In I. Tieken-Boon van Ostade, & C. Percy (Eds.), Prescription and tradition in language: Establishing standards across time and space (pp. 71-87). Bristol: Multilingual Matters.
  • Anderson, P., Harandi, N. M., Moisik, S. R., Stavness, I., & Fels, S. (2015). A comprehensive 3D biomechanically-driven vocal tract model including inverse dynamics for speech research. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 2395-2399).

    Abstract

    We introduce a biomechanical model of oropharyngeal structures that adds the soft-palate, pharynx, and larynx to our previous models of jaw, skull, hyoid, tongue, and face in a unified model. The model includes a comprehensive description of the upper airway musculature, using point-to-point muscles that may either be embedded within the deformable structures or operate externally. The airway is described by an air-tight mesh that fits and deforms with the surrounding articulators, which enables dynamic coupling to our articulatory speech synthesizer. We demonstrate that the biomechanics, in conjunction with the skinning, supports a range from physically realistic to simplified vocal tract geometries to investigate different approaches to aeroacoustic modeling of the vocal tract. Furthermore, our model supports inverse modeling to support investigation of plausible muscle activation patterns to generate speech.
  • Andics, A. (2013). Who is talking? Behavioural and neural evidence for norm-based coding in voice identity learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Andics, A., Gál, V., Vicsi, K., Rudas, G., & Vidnyánszky, Z. (2013). FMRI repetition suppression for voices is modulated by stimulus expectations. NeuroImage, 69, 277-283. doi:10.1016/j.neuroimage.2012.12.033.

    Abstract

    According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This was supported by experimental results, showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing (‘alt’) and identical (‘rep’) voice stimulus pairs were presented to the listeners in blocks, with a varying probability of alt and rep trials across blocks. We found auditory fMRI RS in the nonprimary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or not present in blocks with low repetition probability. Our results revealed that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probabilities and thus suggest that in audition, similarly to the visual modality, processing of sensory information is shaped by stimulus expectation processes.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Aparicio, X., Heidlmayr, K., & Isel, F. (2017). Inhibition efficiency in highly proficient bilinguals and simultaneous interpreters: Evidence from language switching and stroop tasks. Journal of Psycholinguistic Research, 46, 1427-1451. doi:10.1007/s10936-017-9501-3.

    Abstract

    The present behavioral study aimed to examine the impact of language control expertise on two domain-general control processes, i.e. active inhibition of competing representations and overcoming of inhibition. We compared how Simultaneous Interpreters (SI) and Highly Proficient Bilinguals—two groups assumed to differ in language control capacity—performed executive tasks involving specific inhibition processes. In Experiment 1 (language decision task), both active and overcoming of inhibition processes are involved, while in Experiment 2 (bilingual Stroop task) only interference suppression is supposed to be required. The results of Experiment 1 showed a language switching effect only for the highly proficient bilinguals, potentially because overcoming of inhibition requires more cognitive resources than in SI. Nevertheless, both groups performed similarly on the Stroop task in Experiment 2, which suggests that active inhibition may work similarly in both groups. These contrasting results suggest that overcoming of inhibition may be harder to master than active inhibition. Taken together, these data indicate that some executive control processes may be less sensitive to the degree of expertise in bilingual language control than others. Our findings lend support to psycholinguistic models of bilingualism postulating a higher-order mechanism regulating language activation.
  • Araújo, S., Faísca, L., Bramão, I., Reis, A., & Petersson, K. M. (2015). Lexical and sublexical orthographic processing: An ERP study with skilled and dyslexic adult readers. Brain and Language, 141, 16-27. doi:10.1016/j.bandl.2014.11.007.

    Abstract

    This ERP study investigated the cognitive nature of the P1–N1 components during orthographic processing. We used an implicit reading task with various types of stimuli involving different amounts of sublexical or lexical orthographic processing (words, pseudohomophones, pseudowords, nonwords, and symbols), and tested average and dyslexic readers. An orthographic regularity effect (pseudowords–nonwords contrast) was observed in the average but not in the dyslexic group. This suggests an early sensitivity to the dependencies among letters in word-forms that reflect orthographic structure, while the dyslexic brain apparently fails to be appropriately sensitive to these complex features. Moreover, in the adults the N1-response may already reflect lexical access: (i) the N1 was sensitive to the familiar vs. less familiar orthographic sequence contrast; and (ii) early effects of the phonological form (words–pseudohomophones contrast) were also found. Finally, the later N320 component was attenuated in the dyslexics, suggesting suboptimal processing in later stages of phonological analysis.
  • Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta-analysis. Journal of Educational Psychology, 107(3), 868-883. doi:10.1037/edu0000006.

    Abstract

    Evidence that rapid naming skill is associated with reading ability has become increasingly prevalent in recent years. However, there is considerable variation in the literature concerning the magnitude of this relationship. The objective of the present study was to provide a comprehensive analysis of the evidence on the relationship between rapid automatized naming (RAN) and reading performance. To this end, we conducted a meta-analysis of the correlational relationship between these 2 constructs to (a) determine the overall strength of the RAN–reading association and (b) identify variables that systematically moderate this relationship. A random-effects model analysis of data from 137 studies (857 effect sizes; 28,826 participants) indicated a moderate-to-strong relationship between RAN and reading performance (r = .43, I2 = 68.40). Further analyses revealed that RAN contributes to the 4 measures of reading (word reading, text reading, non-word reading, and reading comprehension), but higher coefficients emerged in favor of real word reading and text reading. RAN stimulus type and type of reading score were the factors with the greatest moderator effect on the magnitude of the RAN–reading relationship. The consistency of orthography and the subjects’ grade level were also found to impact this relationship, although the effect was contingent on reading outcome. It was less evident whether the subjects’ reading proficiency played a role in the relationship. Implications for future studies are discussed.
  • Araújo, S., Pacheco, A., Faísca, L., Petersson, K. M., & Reis, A. (2010). Visual rapid naming and phonological abilities: Different subtypes in dyslexic children. International Journal of Psychology, 45, 443-452. doi:10.1080/00207594.2010.499949.

    Abstract

    One implication of the double-deficit hypothesis for dyslexia is that there should be subtypes of dyslexic readers that exhibit rapid naming deficits with or without concomitant phonological processing problems. In the current study, we investigated the validity of this hypothesis for Portuguese orthography, which is more consistent than English orthography, by exploring different cognitive profiles in a sample of dyslexic children. In particular, we were interested in identifying readers characterized by a pure rapid automatized naming deficit. We also examined whether rapid naming and phonological awareness independently account for individual differences in reading performance. We characterized the performance of dyslexic readers and a control group of normal readers matched for age on reading, visual rapid naming and phonological processing tasks. Our results suggest that there is a subgroup of dyslexic readers with intact phonological processing capacity (in terms of both accuracy and speed measures) but poor rapid naming skills. We also provide evidence for an independent association between rapid naming and reading competence in the dyslexic sample, when the effect of phonological skills was controlled. Altogether, the results are more consistent with the view that rapid naming problems in dyslexia represent a second core deficit rather than an exclusive phonological explanation for the rapid naming deficits. Furthermore, additional non-phonological processes, which subserve rapid naming performance, contribute independently to reading development.
  • Armeni, K., Willems, R. M., & Frank, S. (2017). Probabilistic language models in cognitive neuroscience: Promises and pitfalls. Neuroscience and Biobehavioral Reviews, 83, 579-588. doi:10.1016/j.neubiorev.2017.09.001.

    Abstract

    Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far predominantly been evaluated against behavioral data, only recently have the models been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
  • Arnhold, A., Vainio, M., Suni, A., & Järvikivi, J. (2010). Intonation of Finnish verbs. Speech Prosody 2010, 100054, 1-4. Retrieved from http://speechprosody2010.illinois.edu/papers/100054.pdf.

    Abstract

    A production experiment investigated the tonal shape of Finnish finite verbs in transitive sentences without narrow focus. Traditional descriptions of Finnish stating that non-focused finite verbs do not receive accents were only partly supported. Verbs were found to have a consistently smaller pitch range than words in other word classes, but their pitch contours were neither flat nor explainable by pure interpolation.
  • Asaridou, S. S., Hagoort, P., & McQueen, J. M. (2015). Effects of early bilingual experience with a tone and a non-tone language on speech-music integration. PLoS One, 10(12): e0144225. doi:10.1371/journal.pone.0144225.

    Abstract

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

    Additional information

    Data Availability
  • Asaridou, S. S. (2015). An ear for pitch: On the effects of experience and aptitude in processing pitch in language and music. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Ashby, J., & Martin, A. E. (2008). Prosodic phonological representations early in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 34(1), 224-236. doi:10.1037/0096-1523.34.1.224.

    Abstract

    Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable information before lexical access. In a modified lexical decision task (Experiment 1), words were preceded by parafoveal previews that were congruent with a target's initial syllable as well as previews that contained 1 letter more or less than the initial syllable. Lexical decision times were faster in the syllable congruent conditions than in the incongruent conditions. In Experiment 2, we recorded brain electrical potentials (electroencephalograms) during single word reading in a masked priming paradigm. The event-related potential waveform elicited in the syllable congruent condition was more positive 250-350 ms posttarget compared with the waveform elicited in the syllable incongruent condition. In combination, these experiments demonstrate that readers process prosodic syllable information early in visual word recognition in English. They offer further evidence that skilled readers routinely activate elaborated, speechlike phonological representations during silent reading. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
  • Aslin, R., Clayards, M., & Bardhan, N. P. (2008). Mechanisms of auditory reorganization during development: From sounds to words. In C. Nelson, & M. Luciana (Eds.), Handbook of developmental cognitive neuroscience (2nd, pp. 97-116). Cambridge, MA: MIT Press.
  • Athanasopoulos, P., Bylund, E., Montero-Melis, G., Damjanovic, L., Schartner, A., Kibbe, A., Riches, N., & Thierry, G. (2015). Two languages, two minds: Flexible cognitive processing driven by language of operation. Psychological Science, 26(4), 518-526. doi:10.1177/0956797614567509.

    Abstract

    People make sense of objects and events around them by classifying them into identifiable categories. The extent to which language affects this process has been the focus of a long-standing debate: Do different languages cause their speakers to behave differently? Here, we show that fluent German-English bilinguals categorize motion events according to the grammatical constraints of the language in which they operate. First, as predicted from cross-linguistic differences in motion encoding, bilingual participants functioning in a German testing context prefer to match events on the basis of motion completion to a greater extent than do bilingual participants in an English context. Second, when bilingual participants experience verbal interference in English, their categorization behavior is congruent with that predicted for German; when bilingual participants experience verbal interference in German, their categorization becomes congruent with that predicted for English. These findings show that language effects on cognition are context-bound and transient, revealing unprecedented levels of malleability in human cognition.

  • Auer, E., Wittenburg, P., Sloetjes, H., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2010). Automatic annotation of media field recordings. In C. Sporleder, & K. Zervanou (Eds.), Proceedings of the ECAI 2010 Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH 2010) (pp. 31-34). Lisbon: University de Lisbon. Retrieved from http://ilk.uvt.nl/LaTeCH2010/.

    Abstract

    In this paper we describe a new attempt to develop automatic detectors for processing real-scene audio-video streams, which researchers world-wide can use to speed up their annotation and analysis work. Typically these recordings are made in field and experimental situations, often of poor quality and with only small corpora available, which prevents the use of standard stochastic pattern recognition techniques. Audio and video processing components are taken out of the expert lab and integrated into easy-to-use interactive frameworks, so that researchers can easily run them with modified parameters and check the usefulness of the resulting annotations. Finally, a variety of detectors may be applied, yielding a lattice of annotations. A flexible search engine allows combinations of patterns to be found, opening up completely new analysis and theorization possibilities for researchers, who until now were required to do all annotations manually and had no help in pre-segmenting lengthy media recordings.
  • Auer, E., Russel, A., Sloetjes, H., Wittenburg, P., Schreer, O., Masnieri, S., Schneider, D., & Tschöpel, S. (2010). ELAN as flexible annotation framework for sound and image processing detectors. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 890-893). European Language Resources Association (ELRA).

    Abstract

    Annotation of digital recordings in humanities research is still, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen; Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin; Fraunhofer Heinrich-Hertz-Institute, Berlin), and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia-based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
  • Ayub, Q., Yngvadottir, B., Chen, Y., Xue, Y., Hu, M., Vernes, S. C., Fisher, S. E., & Tyler-Smith, C. (2013). FOXP2 targets show evidence of positive selection in European populations. American Journal of Human Genetics, 92, 696-706. doi:10.1016/j.ajhg.2013.03.019.

    Abstract

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction.
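    The matched-controls logic described in this abstract can be sketched schematically: draw control genes matched on a covariate, compute the same summary statistic for each control set, and compare the target set against that empirical null. The snippet below is a toy version with simulated values (the real analysis matched on Ensembl annotations and used three frequency-spectrum-based neutrality tests), not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy per-gene data: a matching covariate (log gene length) and one
    # neutrality-test statistic (e.g. Tajima's D). All values are simulated.
    n_genes = 5000
    log_len = rng.normal(1.0, 0.6, n_genes)
    stat = rng.normal(0.0, 1.0, n_genes)

    target_idx = rng.choice(n_genes, 60, replace=False)  # stand-in for the target gene list
    is_target = np.zeros(n_genes, dtype=bool)
    is_target[target_idx] = True

    def matched_control_mean(tol=0.2):
        """Mean statistic over one control set matched on log gene length."""
        picks = []
        for i in target_idx:
            candidates = np.where(~is_target & (np.abs(log_len - log_len[i]) < tol))[0]
            picks.append(rng.choice(candidates))
        return stat[picks].mean()

    observed = stat[target_idx].mean()
    null = np.array([matched_control_mean() for _ in range(1000)])
    p = (np.sum(null <= observed) + 1) / (len(null) + 1)  # one-sided empirical p-value
    print(f"target mean = {observed:.2f}, empirical p = {p:.3f}")
    ```

    In such a scheme, selection on the target genes would show up as a target-set statistic lying in the tail of the matched null distribution.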
  • Azar, Z., & Ozyurek, A. (2015). Discourse Management: Reference tracking in speech and gesture in Turkish narratives. Dutch Journal of Applied Linguistics, 4(2), 222-240. doi:10.1075/dujal.4.2.06aza.

    Abstract

    Speakers achieve coherence in discourse by alternating between differential lexical forms, e.g. noun phrase, pronoun, and null form, in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they already mentioned before. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity, while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German and French). The present study investigates whether the same pattern holds for pro-drop languages. It draws data from adult native speakers of Turkish using elicited narratives. We find that Turkish speakers mostly use fuller forms to code subject referents in the re-introduction context and the null form in the maintenance context, and that they point to gesture space for referents more in the re-introduction context than in the maintenance context. Hence we provide supportive evidence for the reverse correlation between the accessibility of a discourse referent and its coding in speech and gesture. As a novel contribution, we also find that the third person pronoun is used in the re-introduction context only when the referent was previously mentioned as the object argument of the immediately preceding clause.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Highly proficient bilinguals maintain language-specific pragmatic constraints on pronouns: Evidence from speech and gesture. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 81-86). Austin, TX: Cognitive Science Society.

    Abstract

    The use of subject pronouns by bilingual speakers using both a pro-drop and a non-pro-drop language (e.g. Spanish heritage speakers in the USA) is a well-studied topic in research on cross-linguistic influence in language contact situations. Previous studies looking at bilinguals with different proficiency levels have yielded conflicting results on whether there is transfer from the non-pro-drop patterns to the pro-drop language. Additionally, previous research has focused on speech patterns only. In this paper, we study the two modalities of language, speech and gesture, and ask whether and how they reveal cross-linguistic influence on the use of subject pronouns in discourse. We focus on elicited narratives from heritage speakers of Turkish in the Netherlands, in both Turkish (pro-drop) and Dutch (non-pro-drop), as well as from monolingual control groups. The use of pronouns was not very common in monolingual Turkish narratives and was constrained by the pragmatic contexts, unlike in Dutch. Furthermore, Turkish pronouns were more likely to be accompanied by localized gestures than Dutch pronouns, presumably because pronouns in Turkish are pragmatically marked forms. We did not find any cross-linguistic influence in bilingual speech or gesture patterns, in line with studies (speech only) of highly proficient bilinguals. We therefore suggest that speech and gesture parallel each other not only in monolingual but also in bilingual production. Highly proficient heritage speakers who have been exposed to diverse linguistic and gestural patterns of each language from early on maintain monolingual patterns of pragmatic constraints on the use of pronouns multimodally.
  • Aziz-Zadeh, L., Casasanto, D., Feldman, J., Saxe, R., & Talmy, L. (2008). Discovering the conceptual primitives. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 27-28). Austin, TX: Cognitive Science Society.
  • Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390-412. doi:10.1016/j.jml.2007.12.005.

    Abstract

    This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to traditional analyses based on quasi-F tests, by-subjects analyses, combined by-subjects and by-items analyses, and random regression. Applications and possibilities across a range of domains of inquiry are discussed.
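    The worked example in the paper is given in R; purely as an illustration of the crossed subjects-and-items structure it describes, here is a hedged Python sketch that simulates such data and fits crossed random intercepts via statsmodels' variance-components formulation. Variable names and parameter values are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Simulate a fully crossed design: every subject responds to every item.
    n_subj, n_item = 20, 30
    subj = np.repeat(np.arange(n_subj), n_item)
    item = np.tile(np.arange(n_item), n_subj)
    cond = (item % 2).astype(float)                   # a two-level manipulation
    rt = (600.0 + 30.0 * cond                         # fixed effects
          + rng.normal(0, 40, n_subj)[subj]           # by-subject random intercepts
          + rng.normal(0, 25, n_item)[item]           # by-item random intercepts
          + rng.normal(0, 50, subj.size))             # trial-level noise
    data = pd.DataFrame({"rt": rt, "cond": cond, "subj": subj, "item": item})

    # Crossed random intercepts for subjects and items, expressed as variance
    # components within a single dummy group.
    model = smf.mixedlm(
        "rt ~ cond", data,
        groups=np.ones(len(data)),
        vc_formula={"subj": "0 + C(subj)", "item": "0 + C(item)"},
    )
    print(model.fit().summary())
    ```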
  • Baayen, R. H., Dijkstra, T., & Schreuder, R. (1997). Singulars and Plurals in Dutch: Evidence for a Parallel Dual-Route Model. Journal of Memory and Language, 37(1), 94-117. doi:10.1006/jmla.1997.2509.

    Abstract

    Are regular morphologically complex words stored in the mental lexicon? Answers to this question have ranged from full listing to parsing for every regular complex word. We investigated the roles of storage and parsing in the visual domain for the productive Dutch plural suffix -en. Two experiments are reported that show that storage occurs for high-frequency noun plurals. A mathematical formalization of a parallel dual-route race model is presented that accounts for the patterns in the observed reaction time data with essentially one free parameter, the speed of the parsing route. Parsing for noun plurals appears to be a time-costly process, which we attribute to the ambiguity of -en, a suffix that is predominantly used as a verbal ending. A third experiment contrasted nouns and verbs. This experiment revealed no effect of surface frequency for verbs, but again a solid effect for nouns. Together, our results suggest that many noun plurals are stored in order to avoid the time-costly resolution of the subcategorization conflict that arises when the -en suffix is attached to nouns.
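    To make the idea of a parallel dual-route race concrete, the toy sketch below races a frequency-sensitive storage route against a parsing route with one fixed mean speed and takes the faster route on each trial. The specific distributions and parameter values are illustrative assumptions, not the mathematical formalization presented in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def race_rt(surface_freq, parse_time=650.0, n_trials=5000):
        """Toy dual-route race: mean RT (ms) = mean of min(storage, parsing).

        The storage route speeds up with the plural's surface frequency; the
        parsing route has a single fixed mean speed (the free parameter).
        """
        storage_mean = 900.0 / np.log(surface_freq + 2.0)        # faster when frequent
        storage = rng.lognormal(np.log(storage_mean), 0.25, n_trials)
        parsing = rng.lognormal(np.log(parse_time), 0.25, n_trials)
        return np.minimum(storage, parsing).mean()               # whichever route wins

    for freq in (1, 10, 100, 1000):
        print(f"surface frequency {freq:>4}: predicted RT ~ {race_rt(freq):.0f} ms")
    ```

    In such a race, a surface-frequency effect emerges only where the storage route can win, which is the qualitative pattern the abstract describes for high-frequency noun plurals.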

  • Baayen, R. H. (1997). The pragmatics of the 'tenses' in biblical Hebrew. Studies in Language, 21(2), 245-285. doi:10.1075/sl.21.2.02baa.

    Abstract

    In this paper, I present an analysis of the so-called tense forms of Biblical Hebrew. While there is fairly broad consensus on the interpretation of the yiqtol tense form, the interpretation of the qātal tense form has led to considerable controversy. I will argue that the qātal form has no intrinsic semantic value and that it serves a pragmatic function only, namely, signaling to the hearer that the event or state expressed by the verb cannot be tightly integrated into the discourse representation of the hearer, given the speaker's estimate of their common ground.
  • Baayen, R. H., Lieber, R., & Schreuder, R. (1997). The morphological complexity of simplex nouns. Linguistics, 35, 861-877. doi:10.1515/ling.1997.35.5.861.
  • Baayen, R. H., & Lieber, R. (1997). Word frequency distributions and lexical semantics. Computers and the Humanities, 30, 281-291.

    Abstract

    This paper addresses the relation between meaning, lexical productivity, and frequency of use. Using density estimation as a visualization tool, we show that differences in semantic structure can be reflected in probability density functions estimated for word frequency distributions. We call attention to an example of a bimodal density, and suggest that bimodality arises when distributions of well-entrenched lexical items, which appear to be lognormal, are mixed with distributions of productively created nonce formations.
  • Baggio, G., Choma, T., Van Lambalgen, M., & Hagoort, P. (2010). Coercion and compositionality. Journal of Cognitive Neuroscience, 22, 2131-2140. doi:10.1162/jocn.2009.21303.

    Abstract

    Research in psycholinguistics and in the cognitive neuroscience of language has suggested that semantic and syntactic integration are associated with different neurophysiologic correlates, such as the N400 and the P600 in the ERPs. However, only a handful of studies have investigated the neural basis of the syntax–semantics interface, and even fewer experiments have dealt with the cases in which semantic composition can proceed independently of the syntax. Here we looked into one such case—complement coercion—using ERPs. We compared sentences such as, “The journalist wrote the article” with “The journalist began the article.” The second sentence seems to involve a silent semantic element, which is expressed in the first sentence by the head of the VP “wrote the article.” The second type of construction may therefore require the reader to infer or recover from memory a richer event sense of the VP “began the article,” such as began writing the article, and to integrate that into a semantic representation of the sentence. This operation is referred to as “complement coercion.” Consistently with earlier reading time, eye tracking, and MEG studies, we found traces of such additional computations in the ERPs: Coercion gives rise to a long-lasting negative shift, which differs at least in duration from a standard N400 effect. Issues regarding the nature of the computation involved are discussed in the light of a neurocognitive model of language processing and a formal semantic analysis of coercion.
  • Baggio, G., Van Lambalgen, M., & Hagoort, P. (2008). Computing and recomputing discourse models: An ERP study. Journal of Memory and Language, 59, 36-53. doi:10.1016/j.jml.2008.02.005.

    Abstract

    While syntactic reanalysis has been extensively investigated in psycholinguistics, comparatively little is known about reanalysis in the semantic domain. We used event-related brain potentials (ERPs) to keep track of semantic processes involved in understanding short narratives such as ‘The girl was writing a letter when her friend spilled coffee on the paper’. We hypothesize that these sentences are interpreted in two steps: (1) when the progressive clause is processed, a discourse model is computed in which the goal state (a complete letter) is predicted to hold; (2) when the subordinate clause is processed, the initial representation is recomputed to the effect that, in the final discourse structure, the goal state is not satisfied. Critical sentences evoked larger sustained anterior negativities (SANs) compared to controls, starting around 400 ms following the onset of the sentence-final word, and lasting for about 400 ms. The amplitude of the SAN was correlated with the frequency with which participants, in an offline probe-selection task, responded that the goal state was not attained. Our results raise the possibility that the brain supports some form of non-monotonic recomputation to integrate information which invalidates previously held assumptions.
  • Baggio, G., van Lambalgen, M., & Hagoort, P. (2015). Logic as Marr's computational level: Four case studies. Topics in Cognitive Science, 7, 287-298. doi:10.1111/tops.12125.

    Abstract

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
  • Bai, C., Bornkessel-Schlesewsky, I., Wang, L., Hung, Y.-C., Schlesewsky, M., & Burkhardt, P. (2008). Semantic composition engenders an N400: Evidence from Chinese compounds. NeuroReport, 19(6), 695-699. doi:10.1097/WNR.0b013e3282fc1eb7.

    Abstract

    This study provides evidence for the role of semantic composition in compound word processing. We examined the online processing of isolated two-meaning-unit compounds in Chinese, a language that uses compounding to ‘disambiguate’ meaning. Using auditory presentation, we manipulated the semantic meaning and syntactic category of the two meaning units forming a compound. Event-related brain potential recordings revealed a significant influence of semantic information, which was reflected in an N400 signature for compounds whose meaning differed from the constituent meanings. This finding suggests that the combination of distinct constituent meanings to form an overall compound meaning consumes processing resources. By contrast, no comparable difference was observed based on syntactic category information. Our findings indicate that combinatory semantic processing at the word level correlates with N400 effects.
  • Bakker, I., Takashima, A., Van Hall, J. G., & McQueen, J. M. (2015). Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of cognitive neuroscience, 27(7), 1286-1297. doi:10.1162/jocn_a_00801.

    Abstract

    The complementary learning systems account of word learning states that novel words, like other types of memories, undergo an offline consolidation process during which they are gradually integrated into the neocortical memory network. A fundamental change in the neural representation of a novel word should therefore occur in the hours after learning. The present EEG study tested this hypothesis by investigating whether novel words learned before a 24-hr consolidation period elicited more word-like oscillatory responses than novel words learned immediately before testing. In line with previous studies indicating that theta synchronization reflects lexical access, unfamiliar novel words elicited lower power in the theta band (4–8 Hz) than existing words. Recently learned words still showed a marginally lower theta increase than existing words, but theta responses to novel words that had been acquired 24 hr earlier were indistinguishable from responses to existing words. Consistent with evidence that beta desynchronization (16–21 Hz) is related to lexical-semantic processing, we found that both unfamiliar and recently learned novel words elicited less beta desynchronization than existing words. In contrast, no difference was found between novel words learned 24 hr earlier and existing words. These data therefore suggest that an offline consolidation period enables novel words to acquire lexically integrated, word-like neural representations.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Tracking lexical consolidation with ERPs: Lexical and semantic-priming effects on N400 and LPC responses to newly-learned words. Neuropsychologia, 79, 33-41. doi:10.1016/j.neuropsychologia.2015.10.020.
  • Banissy, M., Sauter, D., Ward, J., Warren, J. E., Walsh, V., & Scott, S. K. (2010). Suppressing sensorimotor activity modulates the discrimination of auditory emotions but not speaker identity. Journal of Neuroscience, 30(41), 13552-13557. doi:10.1523/JNEUROSCI.0786-10.2010.

    Abstract

    Our ability to recognise the emotions of others is a crucial feature of human social cognition. Functional neuroimaging studies indicate that activity in sensorimotor cortices is evoked during the perception of emotion. In the visual domain, right somatosensory cortex activity has been shown to be critical for facial emotion recognition. However, the importance of sensorimotor representations in modalities outside of vision remains unknown. Here we use continuous theta-burst transcranial magnetic stimulation (cTBS) to investigate whether neural activity in the right postcentral gyrus (rPoG) and right lateral premotor cortex (rPM) is involved in non-verbal auditory emotion recognition. Three groups of participants completed same-different tasks on auditory stimuli, discriminating between either the emotion expressed or the speakers' identities, prior to and following cTBS targeted at rPoG, rPM or the vertex (control site). A task-selective deficit in auditory emotion discrimination was observed. Stimulation to rPoG and rPM resulted in a disruption of participants' abilities to discriminate emotion, but not identity, from vocal signals. These findings suggest that sensorimotor activity may be a modality independent mechanism which aids emotion discrimination.

    Additional information

    S1_Banissy.pdf
  • Bank, R., Crasborn, O., & Van Hout, R. (2015). Alignment of two languages: The spreading of mouthings in Sign Language of the Netherlands. International Journal of Bilingualism, 19, 40-55. doi:10.1177/1367006913484991.

    Abstract

    Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT are mouth actions that have their origin in spoken Dutch, and are usually time aligned with the signs they co-occur with. Frequently, however, they spread over one or more adjacent signs, so that one mouthing co-occurs with multiple manual signs. We conducted a corpus study to explore how frequently this occurs in NGT and whether there is any sociolinguistic variation in the use of spreading. Further, we looked at the circumstances under which spreading occurs. Answers to these questions may give us insight into the prosodic structure of sign languages. We investigated a sample of the Corpus NGT containing 5929 mouthings by 46 participants. We found that spreading over an adjacent sign is independent of social factors. Further, mouthings that spread are longer than non-spreading mouthings, whether measured in syllables or in milliseconds. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilised in sign languages.
  • Bank, R. (2015). The ubiquity of mouthings in NGT: A corpus study. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Baranova, J. (2015). Other-initiated repair in Russian. Open linguistics, 1(1), 555-577. doi:10.1515/opli-2015-0019.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversations in Russian. In the discussion of various repair cases, special attention is given to the modifications that the trouble source turn undergoes in response to an open versus a restricted repair initiation. Speakers often modify their problematic turn in multiple ways at once when responding to an open repair initiation. They can alter the word order of the problematic turn, change the prosodic contour of the utterance, omit redundant elements and add more specific ones. By contrast, restricted repair initiations usually receive specific repair solutions that target only one problem at a time.
  • Barbiers, S., & Van Dooren, A. (2017). Modal Auxiliaries. In M. Everaert, & H. C. Van Riemsdijk (Eds.), The Wiley Blackwell Companion to Syntax (2nd ed.). Hoboken, NJ, USA: Wiley.

    Abstract

    In many languages modal auxiliaries such as English can, must, may, need, will, ought, want are ambiguous between two types of interpretations: epistemic and root interpretations. In the epistemic interpretation the modal expresses how likely it is that a proposition is true (for example, necessarily, possibly, probably true) while in the root interpretations the modal expresses the obligatoriness, permissibility, desirability, or possibility of a state or event. A central question in much syntactic research on modal auxiliaries has been whether this systematic semantic ambiguity corresponds to a syntactic distinction. A commonly accepted answer has been that in epistemic interpretations the modal verb is a monadic predicate while in root interpretations it is a dyadic predicate, typically a relation between a subject and an infinitival verb. This distinction between monadic and dyadic modal predicates has been modeled syntactically in various ways: (i) in terms of lexical argument structure, that is, as the distinction between raising and control verbs; (ii) in terms of different base positions in the array of functional heads making up the clausal spine, with epistemic modals being higher than root modals; (iii) in terms of a higher syntactic position for epistemically interpreted modals after raising at the level of semantic interpretation (LF raising); (iv) in terms of the nature of the complement of the modal. This chapter evaluates these proposals, drawing on data from, among others, English, Dutch, Icelandic, German, and Catalan and taking into account cross-linguistic differences in the modal systems. One important conclusion is that the alleged correspondence between the epistemic/root distinction and the raising/control distinction is too simple, as there are sentences with root interpretations but a raising syntax. The chapter ends with a list of questions for future research.
  • Bardhan, N. P. (2010). Adults’ self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. PhD Thesis, University of Rochester, Rochester, New York.

    Abstract

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three experiments, we asked whether adult learners choose to listen to novel words in a particular order based on their acoustic similarity. We use a new paradigm for learning an artificial lexicon in which the learner, rather than the experimenter, determines the order and frequency of exposure to items. We analyze both the proportions of selections and the temporal clustering of subjects' sampling of lexical neighborhoods during training as well as their performance during repeated testing phases (accuracy and reaction time) to determine the time course of learning these neighborhoods. In the first experiment, subjects sampled the high and low density neighborhoods randomly in early learning, and then over-sampled the high density neighborhood until test performance on both neighborhoods reached asymptote. A second experiment involved items similar to the first, but also neighborhoods that are not fully revealed at the start of the experiment. Subjects adjusted their training patterns to focus their selections on neighborhoods of increasing density as these were revealed; evidence of learning in the test phase was slower to emerge than in the first experiment, impaired by the presence of additional sets of items of varying density. Crucially, in both the first and second experiments there was no effect of dense vs. sparse neighborhood in the accuracy results, which is accounted for by subjects’ over-sampling of items from the dense neighborhood. The third experiment was identical in design to the second except for a second day of further training and testing on the same items. Testing at the beginning of the second day showed impaired, not improved, accuracy, except for the consistently dense items. Further training, however, improved accuracy for some items to above Day 1 levels. Overall, these results provide a new window on the time-course of learning an artificial lexicon and the role that learners’ implicit preferences, stemming from their self-selected experience with the entire lexicon, play in learning highly confusable words.
  • Bardhan, N. P., Aslin, R., & Tanenhaus, M. (2010). Adults' self-directed learning of an artificial lexicon: The dynamics of neighborhood reorganization. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (pp. 364-368). Austin, TX: Cognitive Science Society.
  • Barendse, M. T. (2015). Dimensionality assessment with factor analysis methods. PhD Thesis, University of Groningen, Groningen.
  • Barendse, M. T., Oort, F. J., Jak, S., & Timmerman, M. E. (2013). Multilevel exploratory factor analysis of discrete data. Netherlands Journal of Psychology, 67(4), 114-121.
  • Barendse, M. T., Oort, F. J., & Timmerman, M. E. (2015). Using exploratory factor analysis to determine the dimensionality of discrete responses. Structural Equation Modeling: A Multidisciplinary Journal, 22(1), 87-101. doi:10.1080/10705511.2014.934850.

    Abstract

    Exploratory factor analysis (EFA) is commonly used to determine the dimensionality of continuous data. In a simulation study we investigate its usefulness with discrete data. We vary response scales (continuous, dichotomous, polytomous), factor loadings (medium, high), sample size (small, large), and factor structure (simple, complex). For each condition, we generate 1,000 data sets and apply EFA with 5 estimation methods (maximum likelihood [ML] of covariances, ML of polychoric correlations, robust ML, weighted least squares [WLS], and robust WLS) and 3 fit criteria (chi-square test, root mean square error of approximation, and root mean square residual). The various EFA procedures recover more factors when sample size is large, factor loadings are high, factor structure is simple, and response scales have more options. Robust WLS of polychoric correlations is the preferred method, as it is theoretically justified and shows fewer convergence problems than the other estimation methods.
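    The simulation logic can be sketched in a few lines: generate responses from a known factor structure, dichotomize them as with binary response scales, and see what an ordinary EFA recovers. The sketch below uses sklearn's FactorAnalysis as a stand-in estimator; it does not implement the polychoric-correlation or weighted-least-squares methods evaluated in the article.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(42)

    # Continuous responses from a simple 2-factor model with simple structure.
    n_persons, n_items = 500, 10
    loadings = np.zeros((n_items, 2))
    loadings[:5, 0] = 0.7            # items 1-5 load on factor 1
    loadings[5:, 1] = 0.7            # items 6-10 load on factor 2
    factors = rng.normal(size=(n_persons, 2))
    continuous = factors @ loadings.T + rng.normal(scale=0.6, size=(n_persons, n_items))

    # Dichotomize, as with two-option response scales.
    dichotomous = (continuous > 0).astype(float)

    # Fit a 2-factor EFA to both versions and compare the recovered loadings.
    for label, X in [("continuous", continuous), ("dichotomous", dichotomous)]:
        fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
        print(label)
        print(np.round(fa.components_.T, 2))
    ```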
  • Barendse, M. T., Oort, F. J., & Garst, G. J. A. (2010). Using restricted factor analysis with latent moderated structures to detect uniform and nonuniform measurement bias: A simulation study. AStA Advances in Statistical Analysis, 94, 117-127. doi:10.1007/s10182-010-0126-1.

    Abstract

    Factor analysis is an established technique for the detection of measurement bias. Multigroup factor analysis (MGFA) can detect both uniform and nonuniform bias. Restricted factor analysis (RFA) can also be used to detect measurement bias, albeit only uniform measurement bias. Latent moderated structural equations (LMS) enable the estimation of nonlinear interaction effects in structural equation modelling. By extending the RFA method with LMS, the RFA method should be suited to detect nonuniform bias as well as uniform bias. In a simulation study, the RFA/LMS method and the MGFA method are compared in detecting uniform and nonuniform measurement bias under various conditions, varying the size of uniform bias, the size of nonuniform bias, the sample size, and the ability distribution. For each condition, 100 sets of data were generated and analysed through both detection methods. The RFA/LMS and MGFA methods turned out to perform equally well. Percentages of correctly identified items as biased (true positives) generally varied between 92% and 100%, except in small sample size conditions in which the bias was nonuniform and small. For both methods, the percentages of false positives were generally higher than the nominal levels of significance.
  • Baron-Cohen, S., Johnson, D., Asher, J. E., Wheelwright, S., Fisher, S. E., Gregersen, P. K., & Allison, C. (2013). Is synaesthesia more common in autism? Molecular Autism, 4(1): 40. doi:10.1186/2040-2392-4-40.

    Abstract

    BACKGROUND:
    Synaesthesia is a neurodevelopmental condition in which a sensation in one modality triggers a perception in a second modality. Autism (shorthand for Autism Spectrum Conditions) is a neurodevelopmental condition involving social-communication disability alongside resistance to change and unusually narrow interests or activities. Whilst on the surface they appear distinct, they have been suggested to share common atypical neural connectivity.

    METHODS:
    In the present study, we carried out the first prevalence study of synaesthesia in autism to formally test whether these conditions are independent. After exclusions, 164 adults with autism and 97 controls completed a synaesthesia questionnaire, autism spectrum quotient, and test of genuineness-revised (ToG-R) online.

    RESULTS:
    The rate of synaesthesia in adults with autism was 18.9% (31 out of 164), almost three times greater than in controls (7.22%, 7 out of 97, P <0.05). ToG-R proved unsuitable for synaesthetes with autism.

    CONCLUSIONS:
    The significant increase in synaesthesia prevalence in autism suggests that the two conditions may share some common underlying mechanisms. Future research is needed to develop more feasible validation methods of synaesthesia in autism.
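    The prevalence comparison in the Results (31/164 vs. 7/97) amounts to a 2 x 2 contingency-table test; the sketch below simply re-runs that arithmetic with scipy. The choice of test here is illustrative and need not be the exact procedure used in the paper.

    ```python
    from scipy.stats import chi2_contingency, fisher_exact

    # Synaesthetes vs. non-synaesthetes in the autism and control groups.
    table = [[31, 164 - 31],
             [7, 97 - 7]]

    odds_ratio, p_fisher = fisher_exact(table)
    chi2, p_chi2, dof, _ = chi2_contingency(table)

    print(f"autism: {31 / 164:.1%}, controls: {7 / 97:.1%}")
    print(f"Fisher exact p = {p_fisher:.3f}; chi-square p = {p_chi2:.3f} (OR = {odds_ratio:.2f})")
    ```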

  • Barr, D. J., & Seyfeddinipur, M. (2010). The role of fillers in listener attributions for speaker disfluency. Language and Cognitive Processes, 25, 441-455. doi:10.1080/01690960903047122.

    Abstract

    When listeners hear a speaker become disfluent, they expect the speaker to refer to something new. What is the mechanism underlying this expectation? In a mouse-tracking experiment, listeners sought to identify images that a speaker was describing. Listeners more strongly expected new referents when they heard a speaker say um than when they heard a matched utterance where the um was replaced by noise. This expectation was speaker-specific: it depended on what was new and old for the current speaker, not just on what was new or old for the listener. This finding suggests that listeners treat fillers as collateral signals.
  • Barthel, M., Meyer, A. S., & Levinson, S. C. (2017). Next speakers plan their turn early and speak after turn-final ‘go-signals’. Frontiers in Psychology, 8: 393. doi:10.3389/fpsyg.2017.00393.

    Abstract

    In conversation, turn-taking is usually fluid, with next speakers taking their turn right after the end of the previous turn. Most, but not all, previous studies show that next speakers start to plan their turn early, if possible already during the incoming turn. The present study makes use of the list-completion paradigm (Barthel et al., 2016), analyzing speech onset latencies and eye-movements of participants in a task-oriented dialogue with a confederate. The measures are used to disentangle the contributions to the timing of turn-taking of early planning of content on the one hand and initiation of articulation as a reaction to the upcoming turn-end on the other hand. Participants named objects visible on their computer screen in response to utterances that did, or did not, contain lexical and prosodic cues to the end of the incoming turn. In the presence of an early lexical cue, participants showed earlier gaze shifts toward the target objects and responded faster than in its absence, whereas the presence of a late intonational cue only led to faster response times and did not affect the timing of participants' eye movements. The results show that with a combination of eye-movement and turn-transition time measures it is possible to tease apart the effects of early planning and response initiation on turn timing. They are consistent with models of turn-taking that assume that next speakers (a) start planning their response as soon as the incoming turn's message can be understood and (b) monitor the incoming turn for cues to turn-completion so as to initiate their response when turn-transition becomes relevant.
  • Bašnákova, J., Van Berkum, J. J. A., Weber, K., & Hagoort, P. (2015). A job interview in the MRI scanner: How does indirectness affect addressees and overhearers? Neuropsychologia, 76, 79-91. doi:10.1016/j.neuropsychologia.2015.03.030.

    Abstract

    In using language, people not only exchange information, but also navigate their social world – for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one’s shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Bašnáková et al., 2013). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.
  • Bastiaansen, M. C. M., Oostenveld, R., Jensen, O., & Hagoort, P. (2008). I see what you mean: Theta power increases are involved in the retrieval of lexical semantic information. Brain and Language, 106(1), 15-28. doi:10.1016/j.bandl.2007.10.006.

    Abstract

    An influential hypothesis regarding the neural basis of the mental lexicon is that semantic representations are neurally implemented as distributed networks carrying sensory, motor and/or more abstract functional information. This work investigates whether the semantic properties of words partly determine the topography of such networks. Subjects performed a visual lexical decision task while their EEG was recorded. We compared the EEG responses to nouns with either visual semantic properties (VIS, referring to colors and shapes) or with auditory semantic properties (AUD, referring to sounds). A time–frequency analysis of the EEG revealed power increases in the theta (4–7 Hz) and lower-beta (13–18 Hz) frequency bands, and an early power increase and subsequent decrease for the alpha (8–12 Hz) band. In the theta band we observed a double dissociation: temporal electrodes showed larger theta power increases in the AUD condition, while occipital leads showed larger theta responses in the VIS condition. The results support the notion that semantic representations are stored in functional networks with a topography that reflects the semantic properties of the stored items, and provide further evidence that oscillatory brain dynamics in the theta frequency range are functionally related to the retrieval of lexical semantic information.
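    Band-limited power of the kind reported here (e.g. theta, 4–7 Hz) is often estimated by band-pass filtering a channel and squaring its Hilbert envelope. The snippet below is a generic sketch of that step on a simulated signal; it is not the study's time-frequency pipeline, and the sampling rate and filter settings are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0                                    # assumed sampling rate (Hz)
    t = np.arange(0, 2.0, 1.0 / fs)

    # Simulated single-channel EEG: noise plus a 6 Hz (theta) burst in the second half.
    rng = np.random.default_rng(0)
    eeg = rng.normal(0.0, 1.0, t.size)
    eeg[t > 1.0] += 2.0 * np.sin(2 * np.pi * 6.0 * t[t > 1.0])

    def band_power(signal, fs, low, high, order=4):
        """Instantaneous power in a frequency band via band-pass + Hilbert envelope."""
        b, a = butter(order, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
        envelope = np.abs(hilbert(filtfilt(b, a, signal)))
        return envelope ** 2

    theta = band_power(eeg, fs, 4.0, 7.0)
    print(f"mean theta power, first vs. second half: "
          f"{theta[t <= 1.0].mean():.2f} vs. {theta[t > 1.0].mean():.2f}")
    ```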
  • Bastiaansen, M. C. M., & Hagoort, P. (2015). Frequency-based segregation of syntactic and semantic unification during online sentence level language comprehension. Journal of Cognitive Neuroscience, 27(11), 2095-2107. doi:10.1162/jocn_a_00829.

    Abstract

    During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
  • Bastiaansen, M. C. M., Magyari, L., & Hagoort, P. (2010). Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. Journal of Cognitive Neuroscience, 22, 1333-1347. doi:10.1162/jocn.2009.21283.

    Abstract

    There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word category violation, or were constituted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: A linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Thirdly, we observed a linear increase in theta power across the sentence for all syntactically structured sentences. The effects are tentatively related to the building of a working memory trace of the linguistic input. In conclusion, the data seem to suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.
  • Bastos, A. M., Vezoli, J., Bosman, C. A., Schoffelen, J.-M., Oostenveld, R., Dowdall, J. R., De Weerd, P., Kennedy, H., & Fries, P. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2), 390-401. doi:10.1016/j.neuron.2014.12.018.

    Abstract

    Visual cortical areas subserve cognitive functions by interacting in both feedforward and feedback directions. While feedforward influences convey sensory signals, feedback influences modulate feedforward signaling according to the current behavioral context. We investigated whether these interareal influences are subserved differentially by rhythmic synchronization. We correlated frequency-specific directed influences among 28 pairs of visual areas with anatomical metrics of the feedforward or feedback character of the respective interareal projections. This revealed that in the primate visual system, feedforward influences are carried by theta-band ( approximately 4 Hz) and gamma-band ( approximately 60-80 Hz) synchronization, and feedback influences by beta-band ( approximately 14-18 Hz) synchronization. The functional directed influences constrain a functional hierarchy similar to the anatomical hierarchy, but exhibiting task-dependent dynamic changes in particular with regard to the hierarchical positions of frontal areas. Our results demonstrate that feedforward and feedback signaling use distinct frequency channels, suggesting that they subserve differential communication requirements.
  • Bauer, B. L. M. (1997). The adjective in Italic and Romance: Genetic or areal factors affecting word order patterns? In B. Palek (Ed.), Proceedings of LP'96: Typology: Prototypes, item orderings and universals (pp. 295-306). Prague: Charles University Press.
  • Bauer, B. L. M. (1992). Du latin au français: Le passage d'une langue SOV à une langue SVO. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bauer, B. L. M. (2010). Fore-runners of Romance -mente adverbs in Latin prose and poetry. In E. Dickey, & A. Chahoud (Eds.), Colloquial and literary Latin (pp. 339-353). Cambridge: Cambridge University Press.
  • Bauer, B. L. M. (2013). Impersonal verbs. In G. K. Giannakis (Ed.), Encyclopedia of Ancient Greek Language and Linguistics Online (pp. 197-198). Leiden: Brill. doi:10.1163/2214-448X_eagll_SIM_00000481.

    Abstract

    Impersonal verbs in Greek ‒ as in the other Indo-European languages ‒ exclusively feature 3rd person singular finite forms and convey one of three types of meaning: (a) meteorological conditions; (b) emotional and physical state/experience; (c) modality. In Greek, impersonal verbs predominantly convey meteorological conditions and modality.

  • Bauer, B. L. M. (1992). Evolution in language: Evidence from the Romance auxiliary. In B. Chiarelli, J. Wind, A. Nocentini, & B. Bichakjian (Eds.), Language origin: A multidisciplinary approach (pp. 517-528). Dordrecht: Kluwer.
