Andrea E. Martin

Presentations

  • Alday, P. M., & Martin, A. E. (2017). Decoding linguistic structure building in the time-frequency domain. Poster presented at the 24th Annual Meeting of the Cognitive Neuroscience Society (CNS 2017), San Francisco, CA, USA.
  • Alday, P. M., & Martin, A. E. (2017). Decoding linguistic structure building in the time-frequency domain. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
  • Alday, P. M., & Martin, A. E. (2017). Stress-timing via oscillatory phase-locking in naturalistic language. Poster presented at the Ninth Annual Meeting of the Society for the Neurobiology of Language (SNL 2017), Baltimore, MD, USA.
  • Martin, A. E. (2017). Brain Rhythms and Cortical Computation (BryCoCo) [Keynote lecture]. Talk presented at the University of Geneva. Geneva, Switzerland. 2017-10.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. Poster presented at the 30th Annual CUNY Conference on Human Sentence Processing, Cambridge, MA, USA.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. Poster presented at the 24th Annual Meeting of the Cognitive Neuroscience Society (CNS 2017), San Francisco, CA, USA.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. Talk presented at the 24th Annual Meeting of the Cognitive Neuroscience Society (CNS 2017) Data Blitz Session. San Francisco, CA, USA. 2017-03-25 - 2017-03-28.
  • Martin, A. E. (2017). Learning, representing, and inferring a symbolic system from neural representations distributed across time and frequency. Talk presented at the Workshop Key Questions and New Methods in the Language Sciences. Berg en Dal, The Netherlands. 2017-06-14 - 2017-06-17.
  • Martin, A. E. (2017). Linking linguistic and cortical computation via hierarchy and time. Talk presented at the Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences. Leipzig, Germany. 2017-11-15.
  • Martin, A. E. (2017). Linking linguistic and cortical computation via hierarchy and time. Talk presented at the Psychology Department, University of Amsterdam. Amsterdam, The Netherlands. 2017-10-26.
  • Martin, A. E. (2017). Linking linguistic and cortical computation via hierarchy and time [Keynote lecture]. Talk presented at the Workshop "The Neural Oscillations of Speech and Language Processing". Berlin, Germany. 2017-05-29.
  • Doumas, L. A. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. Talk presented at the 38th Annual Conference of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.
  • Martin, A. E., & Doumas, L. A. A. (2016). A mechanism for the cortical computation of syntax. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
  • Martin, A. E., & Doumas, L. A. A. (2016). A mechanism for the cortical computation of syntax. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
  • Martin, A. E. (2016). Retrieval cues in language comprehension: Interference effects in monologue but not dialogue. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Language production and comprehension require us to integrate incoming linguistic representations with past input, often across intervening words and phrases (Miller & Chomsky, 1963). In recent years, the cue-based retrieval framework has amassed evidence that interference is the main determinant of processing difficulty during long-distance dependency resolution (Lewis et al., 2006; Lewis & Vasishth, 2005; McElree et al., 2003; McElree, 2006; Van Dyke & McElree, 2006, 2011). Yet, little is known about the representations that function as cues in language processing. Furthermore, most of the aforementioned data come from experiments on silent reading, a form of monologue. But the computational challenge of dependency resolution is actually most potent in dialogue, where representations are often omitted, compressed, reduced, or elided, and where production and comprehension must occur dynamically between two brains. Previous event-related brain potential (ERP) studies of ellipsis in silent reading have shown interference effects from different types of relevant linguistic representations (Martin et al., 2012, 2014). The current study presents ERP data from a dialogue-overhearing paradigm in which the distance between antecedent and ellipsis site was manipulated. Thirty-six native speakers of British English listened to 120 spoken discourses, produced either by one speaker (Monologue) or split over two speakers (Dialogue). The second factor, the recency of the antecedent relative to the ellipsis site (Antecedent: Recent, Distant), yielded a 2x2 Dialogue x Recency design:

    Dialogue, Recent antecedent:
      A: After reading the exposé on the MP, Jane filed a complaint.
      B: I don’t remember about what_. Perhaps about job shortages.
    Dialogue, Distant antecedent:
      A: Jane filed a complaint after reading the exposé on the MP.
      B: I don’t remember about what_. Perhaps about job shortages.
    Monologue, Recent antecedent:
      A: After reading the exposé on the MP, Jane filed a complaint.
      A: I don’t remember about what_. Perhaps about job shortages.
    Monologue, Distant antecedent:
      A: Jane filed a complaint after reading the exposé on the MP.
      A: I don’t remember about what_. Perhaps about job shortages.

    All stimuli were grammatical, and listeners answered a comprehension question on 25% of trials. A Dialogue x Recency interaction was observed on a late, frontal, positive-going component that was maximal between 800 and 1000 ms, starting ~400 ms post-critical-word onset. The interaction was driven by a reliable difference between the Monologue Distant and Monologue Recent conditions, with the former more positive than the latter. This pattern suggests that interference effects can be ameliorated by speaker-related cues in dialogue listening, and, minimally, that interference patterns in ellipsis resolution differ as a function of speaker information. If speaker-related cues are relevant during linguistic dependency resolution, then retrieval cues must be composite in nature, containing both speaker and grammatical information. Such an architecture would mean that a wider range of information types might interact incrementally during online language comprehension ‘in the wild.’ (An illustrative sketch of a windowed mean-amplitude interaction analysis appears after the presentation list below.)
  • Martin, A. E. (2016). Retrieval cues in language comprehension: Interference effects in monologue but not dialogue. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.
  • Nieuwland, M. S., & Martin, A. E. (2016). A neural oscillatory signature of reference. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The ability to use linguistic representations to refer to the world is a vital mechanism that gives human language its communicative power. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) is what allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating an episodic memory representation of the antecedent [1-2]. Whereas this implies the involvement of memory structures, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments [3-6], revealing the increased coupling of functional neural systems associated with coherent referring expressions compared to referentially ambiguous expressions. We performed time-frequency analysis on data from four experiments in which referentially ambiguous expressions elicited a sustained negativity in the ERP waveform compared to coherent expressions. In Experiment 1, 32 participants read 120 correct Dutch sentences with coherent or ambiguous pronouns. In Experiment 2, 31 participants listened to 90 naturally spoken Dutch mini-stories containing coherent or ambiguous NP anaphora. In Experiment 3, 22 participants each read 60 Spanish sentences with a coherent or ambiguous ellipsis determiner. In Experiment 4, 19 participants each read 180 grammatically correct English sentences containing coherent or ambiguous pronouns. Analysis was performed with FieldTrip [7], separately for low-frequency (2-30 Hz) and high-frequency (25-90 Hz) activity. Power changes per trial were computed as a relative change from a pre-critical-word baseline interval, and average power changes were computed per subject for the coherent and ambiguous conditions separately. Statistical tests used cluster-based random permutation [8]. Despite varying in modality, language, and type of expression, all experiments showed larger gamma-band power around 80 Hz for coherence compared to ambiguity, within a similar time range. No differences were observed in low frequencies. In high-density EEG Experiment 4, an additional short-duration gamma increase was observed around 40 Hz, around 300-500 ms after pronoun onset, which was localised using beamformer analysis [9] to left posterior parietal cortex (PPC). The 80 Hz power increase around 600-1200 ms after word onset was localised to left inferior frontal-temporal cortex. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, linking incoming information to previously encountered concepts and integrating that information into the unfolding discourse representation. Specifically, we argue that this involves antecedent reactivation in the PPC episodic memory network [10-11], interacting with unification processes in the frontal-temporal language network [12]. Based on these results, and on results of patient [13] and fMRI [14] research on pronoun comprehension, we propose an initial neurobiological account of reference, by bridging the psycholinguistics of anaphora with the neurobiology of language and of episodic memory. (An illustrative code sketch of the time-frequency pipeline described here appears after the presentation list below.)

    References: [1] Dell et al., 1983; [2] Gerrig & McKoon, 1998; [3] Nieuwland & Van Berkum, 2006; [4] Nieuwland et al., 2007a; [5] Martin et al., 2012; [6] Nieuwland, 2014; [7] Oostenveld et al., 2011; [8] Maris & Oostenveld, 2007; [9] Gross et al., 2001; [10] Shannon & Buckner, 2004; [11] Wager et al., 2005; [12] Hagoort & Indefrey, 2014; [13] Kurczek et al., 2013; [14] Nieuwland et al., 2007b
  • Nieuwland, M. S., & Martin, A. E. (2016). A neural oscillatory signature of reference. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.

    Abstract

    The ability to use words to refer to the world is a vital mechanism that gives human language its communicative power. In particular, the use of words to refer to previously mentioned concepts (anaphora) is what allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of episodic memory, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analyses of four EEG experiments that reveal the increased coupling of functional neural systems associated with referring expressions that can be straightforwardly understood compared to those that cannot (referential coherence vs. ambiguity). Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for coherence compared to ambiguity. In high-density EEG Experiment 4, beamformer analysis localised this increase to the posterior parietal cortex around 300-500 ms after onset of the anaphor and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to previously encountered concepts through an interaction between the episodic memory network and the frontal-temporal language network.
  • Sturt, P., & Martin, A. E. (2016). Using grammatical features to forecast incoming structure: The processing of Across-the-board extraction. Poster presented at The 29th CUNY Human Sentence Processing Conference, Gainesville, Florida, USA.
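
The "Retrieval cues in language comprehension" abstract above reports a Dialogue x Recency interaction on a late frontal positivity that was maximal between 800 and 1000 ms. The sketch below shows one conventional way such an interaction can be quantified: average each subject's ERP within the reported window per condition, then test the difference of differences across subjects. The synthetic data, the one-sample t-test on the interaction contrast, and all parameter values are illustrative assumptions, not the analysis actually reported in the abstract.

```python
# Quantifying a 2x2 (Dialogue x Recency) interaction as windowed mean ERP amplitude.
# Synthetic data; the 800-1000 ms window is taken from the abstract, everything else is assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 36                                   # as in the abstract
sfreq = 500.0                                     # assumed sampling rate (Hz)
times = np.arange(-0.2, 1.2, 1.0 / sfreq)         # epoch time axis in seconds
window = (times >= 0.8) & (times <= 1.0)          # 800-1000 ms analysis window

def erp(shift=0.0):
    """Synthetic subject-average ERPs (n_subjects x n_times); an optional
    positivity of size `shift` is injected inside the analysis window."""
    return rng.standard_normal((n_subjects, times.size)) + shift * window

# Four cells of the design; a late positivity only for Monologue / Distant.
mono_recent, mono_distant = erp(), erp(shift=1.0)
dia_recent, dia_distant = erp(), erp()

# Mean amplitude in the window, per subject and condition.
amp = {name: cond[:, window].mean(axis=1)
       for name, cond in [('mono_recent', mono_recent), ('mono_distant', mono_distant),
                          ('dia_recent', dia_recent), ('dia_distant', dia_distant)]}

# Interaction contrast: (Monologue: Distant - Recent) - (Dialogue: Distant - Recent),
# tested with a one-sample t-test across subjects.
interaction = ((amp['mono_distant'] - amp['mono_recent'])
               - (amp['dia_distant'] - amp['dia_recent']))
t, p = stats.ttest_1samp(interaction, 0.0)
print(f'Dialogue x Recency interaction: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}')
```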
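
The "A neural oscillatory signature of reference" abstracts above describe per-trial time-frequency power computed with FieldTrip, baselined as a relative change from a pre-critical-word interval, and compared between coherent and ambiguous conditions with cluster-based random permutation tests. The sketch below transposes that pipeline to MNE-Python on synthetic single-channel data; the library choice, the data, and every parameter value are assumptions made for illustration rather than the settings used in the experiments.

```python
# Time-frequency power (Morlet wavelets), relative-change baseline, and a
# cluster-based permutation test between two conditions. Synthetic data;
# MNE-Python is used here as a stand-in for the FieldTrip pipeline named above.
import numpy as np
from mne.time_frequency import tfr_array_morlet
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(0)
sfreq = 500.0                                   # assumed sampling rate (Hz)
n_trials, n_channels, n_times = 60, 1, 750      # 1.5 s epochs, one channel for brevity
times = np.arange(n_times) / sfreq - 0.5        # epoch runs from -0.5 s to +1.0 s

# Synthetic single-trial EEG for the two conditions (coherent vs. ambiguous referents).
coherent = rng.standard_normal((n_trials, n_channels, n_times))
ambiguous = rng.standard_normal((n_trials, n_channels, n_times))

# Per-trial power in the high-frequency range analysed in the abstract (25-90 Hz).
freqs = np.arange(25.0, 91.0, 5.0)
n_cycles = freqs / 2.0

def trial_power(epochs):
    # Returns power with shape (n_trials, n_channels, n_freqs, n_times).
    return tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                            n_cycles=n_cycles, output='power')

def baseline_relative(power):
    # Relative change from the pre-stimulus interval, per trial, channel and frequency
    # (analogous to a relative-change baseline in FieldTrip).
    base = power[..., times < 0].mean(axis=-1, keepdims=True)
    return (power - base) / base

pow_coherent = baseline_relative(trial_power(coherent))
pow_ambiguous = baseline_relative(trial_power(ambiguous))

# Cluster-based permutation test across trials on the freq x time power maps
# (the singleton channel dimension is dropped for this toy example).
T_obs, clusters, cluster_p, _ = permutation_cluster_test(
    [pow_coherent[:, 0], pow_ambiguous[:, 0]],
    n_permutations=250, tail=0, seed=0)
print('smallest cluster p-value:',
      cluster_p.min() if len(cluster_p) else 'no clusters found')
```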
