Andrea E. Martin

Publications

  • Coopmans, C. W., Mai, A., & Martin, A. E. (2024). “Not” in the brain and behavior. PLOS Biology, 22: e3002656. doi:10.1371/journal.pbio.3002656.
  • Ding, R., Ten Oever, S., & Martin, A. E. (2024). Delta-band activity underlies referential meaning representation during pronoun resolution. Journal of Cognitive Neuroscience, 36(7), 1472-1492. doi:10.1162/jocn_a_02163.

    Abstract

    Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as the phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta-band activity (1–3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those that occurred during memory encoding. Integrating these two lines of research, we tested the hypothesis that the neural dynamic patterns underlying referential meaning representation, especially in the delta frequency range, would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalography (MEG) dataset acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
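
As a rough illustration of the representational similarity analysis mentioned in this abstract, the sketch below correlates the representational geometry of delta-band patterns at encoding and at retrieval. The data, dimensions, and filter settings are placeholders (not the authors' pipeline); only standard NumPy/SciPy calls are used.

```python
# Sketch of RSA-style reinstatement: compare representational geometry at encoding
# and retrieval. All data below are random placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

fs = 200
rng = np.random.default_rng(0)

# Hypothetical data: n_items referents x n_sensors x n_times, once at encoding
# (antecedent) and once at retrieval (pronoun).
n_items, n_sensors, n_times = 20, 100, 2 * fs
encoding = rng.standard_normal((n_items, n_sensors, n_times))
retrieval = rng.standard_normal((n_items, n_sensors, n_times))

def delta_band(x, lo=1.0, hi=3.0):
    """Band-pass filter the last (time) axis to the delta range (1-3 Hz)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def rdm(patterns):
    """Representational dissimilarity matrix: correlation distance between item patterns."""
    return pdist(patterns.reshape(patterns.shape[0], -1), metric="correlation")

rdm_encoding = rdm(delta_band(encoding))
rdm_retrieval = rdm(delta_band(retrieval))

# Reinstatement signature: similar representational geometry at encoding and retrieval
rho, p = spearmanr(rdm_encoding, rdm_retrieval)
print(f"encoding-retrieval RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```

A reliably positive correlation between the two dissimilarity matrices would be the kind of reinstatement signature the abstract describes.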
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.

    Abstract

    When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data from 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high and low surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high surprisal values were associated with a response to the syntactic feature that was delayed by 150–190 ms. The delay was not affected by word duration and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and they highlight the role of time in this process.

    Additional information

    supplementary data
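
A minimal sketch of the temporal response function (TRF) and cumulative model-comparison logic described in the abstract above: lagged copies of each predictor are regressed onto the neural signal with ridge regression, and features such as lexical surprisal and bottom-up node count are evaluated by the gain in cross-validated prediction. All signals and feature names here are simulated placeholders, not the study's materials.

```python
# Sketch of a TRF-style encoding model with cumulative model comparison.
# All signals and feature values below are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

fs = 100
n_samples = 6000
rng = np.random.default_rng(1)

neural = rng.standard_normal(n_samples)  # delta-band signal at one sensor (placeholder)
features = {
    "acoustic_envelope": rng.standard_normal(n_samples),
    "word_onset": (rng.random(n_samples) < 0.02).astype(float),
    "lexical_surprisal": rng.standard_normal(n_samples),
    "bottom_up_node_count": rng.standard_normal(n_samples),
}

def lag_matrix(x, lags):
    """Stack time-shifted copies of a regressor, one column per lag (wrap-around ignored)."""
    return np.column_stack([np.roll(x, lag) for lag in lags])

lags = range(0, int(0.6 * fs))  # 0-600 ms of lags

def cv_score(names):
    """Cross-validated correlation between predicted and observed neural signal."""
    X = np.hstack([lag_matrix(features[name], lags) for name in names])
    scores = []
    for train, test in KFold(n_splits=5).split(X):
        pred = Ridge(alpha=1e3).fit(X[train], neural[train]).predict(X[test])
        scores.append(np.corrcoef(pred, neural[test])[0, 1])
    return np.mean(scores)

# Cumulative comparison: does each added feature improve reconstruction of the signal?
base = ["acoustic_envelope", "word_onset"]
for added in ["lexical_surprisal", "bottom_up_node_count"]:
    print(f"adding {added}: delta r = {cv_score(base + [added]) - cv_score(base):+.3f}")
    base.append(added)
```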
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
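
The following toy simulation makes the article's point about unmodeled temporal variability concrete: identical single-trial responses whose latencies jitter from trial to trial yield a smeared and attenuated average, which could be misread as a genuinely weaker or slower response. Parameters are illustrative only.

```python
# Toy simulation: latency jitter across trials smears and attenuates the average.
import numpy as np

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

def response(latency, width=0.05):
    """A Gaussian-shaped single-trial response peaking at the given latency (s)."""
    return np.exp(-0.5 * ((t - latency) / width) ** 2)

n_trials = 200
aligned = np.array([response(0.30) for _ in range(n_trials)])
jittered = np.array([response(0.30 + rng.normal(0, 0.05)) for _ in range(n_trials)])

print("peak of average, no jitter:   ", round(aligned.mean(axis=0).max(), 2))   # ~1.0
print("peak of average, 50 ms jitter:", round(jittered.mean(axis=0).max(), 2))  # clearly smaller
```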
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modeling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and middle temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
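
A toy rendering of the hypothesized mechanism: two word representations receive the same ambiguous input, but the more frequent word has a lower activation threshold, so it is the only one to reach threshold during the less excitable part of the oscillatory cycle. Thresholds and input strength are arbitrary illustration values, not parameters from the paper's model.

```python
# Toy model: oscillatory excitability plus ambiguous input, with a lower activation
# threshold for the more frequent word. All numbers are arbitrary illustration values.
import numpy as np

phases = np.linspace(0, 2 * np.pi, 360, endpoint=False)
excitability = np.sin(phases)        # ongoing oscillation (arbitrary units)
ambiguous_input = 0.6                # identical bottom-up evidence for both candidate words

threshold_frequent = 1.2             # frequent/predictable word: lower threshold
threshold_rare = 1.5                 # rare word: higher threshold

drive = ambiguous_input + excitability
frequent_only = (drive >= threshold_frequent) & (drive < threshold_rare)
both_active = drive >= threshold_rare

print(f"fraction of cycle where only the frequent word reaches threshold: {frequent_only.mean():.2f}")
print(f"fraction of cycle where both words reach threshold:               {both_active.mean():.2f}")
```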
  • Weissbart, H., & Martin, A. E. (2024). The structure and statistics of language jointly shape cross-frequency neural dynamics during spoken language comprehension. Nature Communications, 15: 8850. doi:10.1038/s41467-024-53128-1.

    Abstract

    Humans excel at extracting structurally-determined meaning from speech despite inherent physical variability. This study explores the brain’s ability to predict and understand spoken language robustly. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors of forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, with both aiding neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structured and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism.
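
Phase-amplitude coupling can be quantified in several ways; the sketch below uses the mean-vector-length measure on a synthetic signal in which a higher-frequency amplitude is modulated by delta phase. This is a generic illustration under assumed frequency bands, not the encoding-model pipeline used in the paper.

```python
# Sketch: phase-amplitude coupling via the mean vector length, on a synthetic signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic signal: a 20 Hz component whose amplitude is modulated by 2 Hz (delta) phase
delta = np.sin(2 * np.pi * 2 * t)
signal = delta + (1 + delta) * 0.5 * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(signal, 1, 3)))       # delta phase
amplitude = np.abs(hilbert(bandpass(signal, 15, 25)))   # higher-frequency amplitude envelope

pac = np.abs(np.mean(amplitude * np.exp(1j * phase)))   # mean vector length
print(f"delta-phase to 15-25 Hz amplitude coupling (mean vector length): {pac:.3f}")
```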
  • Zhao, J., Martin, A. E., & Coopmans, C. W. (2024). Structural and sequential regularities modulate phrase-rate neural tracking. Scientific Reports, 14: 16603. doi:10.1038/s41598-024-67153-z.

    Abstract

    Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech—so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
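
For readers unfamiliar with the two dependent measures named above, the sketch below computes evoked power (power of the trial-averaged response) and inter-trial phase coherence from synthetic epoched EEG containing a 2 Hz component. Epoch counts, durations, and the target frequency are placeholders.

```python
# Sketch: evoked power and inter-trial phase coherence (ITPC) from epoched EEG.
# Epochs are synthetic and contain a 2 Hz component plus noise.
import numpy as np

fs = 250
n_trials, n_samples = 60, fs * 4  # 60 four-second trials
rng = np.random.default_rng(4)
t = np.arange(n_samples) / fs

epochs = np.sin(2 * np.pi * 2 * t) + rng.standard_normal((n_trials, n_samples))

spectra = np.fft.rfft(epochs, axis=1)
freqs = np.fft.rfftfreq(n_samples, 1 / fs)

evoked_power = np.abs(spectra.mean(axis=0)) ** 2            # power of the trial-averaged response
itpc = np.abs(np.mean(spectra / np.abs(spectra), axis=0))   # consistency of phase across trials

idx = np.argmin(np.abs(freqs - 2.0))
print(f"at 2 Hz: evoked power = {evoked_power[idx]:.1f}, ITPC = {itpc[idx]:.2f}")
```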
  • Zioga, I., Zhou, Y. J., Weissbart, H., Martin, A. E., & Haegens, S. (2024). Alpha and beta oscillations differentially support word production in a rule-switching task. eNeuro, 11(4): ENEURO.0312-23.2024. doi:10.1523/ENEURO.0312-23.2024.

    Abstract

    Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word “tuna”, an exemplar from the same category—“seafood”—would be “shrimp”, and a feature would be “pink”). A cue indicated the task rule—exemplar or feature—either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cues than for pre-cues in left-hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex, linguistic processes, and offers a novel task to investigate links between rule-switching, working memory, and word production.
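
A minimal sketch of the kind of power-behavior relationship reported above: trial-wise alpha power during a delay period, estimated with a band-pass filter and Hilbert envelope, is rank-correlated with reaction times. The single-channel data and RTs are simulated placeholders, not the study's recordings.

```python
# Sketch: trial-wise alpha power during a delay period correlated with reaction times.
# Single-channel epochs and RTs are simulated placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.stats import spearmanr

fs = 250
n_trials, n_samples = 80, fs * 2
rng = np.random.default_rng(5)

epochs = rng.standard_normal((n_trials, n_samples))  # delay-period data, one sensor
rts = rng.gamma(shape=5, scale=150, size=n_trials)   # reaction times in ms (placeholder)

sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
alpha_envelope = np.abs(hilbert(sosfiltfilt(sos, epochs, axis=1), axis=1))
alpha_power = (alpha_envelope ** 2).mean(axis=1)     # mean alpha power per trial

rho, p = spearmanr(alpha_power, rts)
print(f"alpha power vs RT: rho = {rho:.2f}, p = {p:.3f}")
```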
  • Cutter, M. G., Martin, A. E., & Sturt, P. (2020). Capitalization interacts with syntactic complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(6), 1146-1164. doi:10.1037/xlm0000780.

    Abstract

    We investigated whether readers use the low-level cue of proper noun capitalization in the parafovea to infer syntactic category, and whether this results in an early update of the representation of a sentence’s syntactic structure. Across three experiments, participants read sentences containing either a subject relative or an object relative clause, in which the relative clause’s overt argument was a proper noun (e.g., The tall lanky guard who alerted Charlie/Charlie alerted to the danger was young). In Experiment 1 these sentences were presented in normal sentence casing or entirely in upper case. In Experiment 2 participants received either valid or invalid parafoveal previews of the relative clause. In Experiment 3 participants viewed relative clauses in normal conditions only. We hypothesized that we would observe relative clause effects (i.e., inflated fixation times for object relative clauses) while readers were still fixated on the word who, if readers use capitalization to infer a parafoveal word’s syntactic class. This would constitute a syntactic parafoveal-on-foveal effect. Furthermore, we hypothesized that this effect should be influenced by sentence casing in Experiment 1 (with no cue for syntactic category being available in upper case sentences) but not by parafoveal preview validity of the target words. We observed syntactic parafoveal-on-foveal effects in Experiments 1 and 3, and in a Bayesian analysis of the combined data from all three experiments. These effects seemed to be influenced more by noun capitalization than by lexical processing. We discuss our findings in relation to models of eye movement control and sentence processing theories.
  • Cutter, M. G., Martin, A. E., & Sturt, P. (2020). Readers detect a low-level phonological violation between two parafoveal words. Cognition, 204: 104395. doi:10.1016/j.cognition.2020.104395.

    Abstract

    In two eye-tracking studies we investigated whether readers can detect a violation of the phonological-grammatical convention for the indefinite article an to be followed by a word beginning with a vowel when these two words appear in the parafovea. Across two experiments participants read sentences in which the word an was followed by a parafoveal preview that was either correct (e.g. Icelandic), incorrect and represented a phonological violation (e.g. Mongolian), or incorrect without representing a phonological violation (e.g. Ethiopian), with this parafoveal preview changing to the target word as participants made a saccade into the space preceding an. Our data suggests that participants detected the phonological violation while the target word was still two words to the right of fixation, with participants making more regressions from the previewed word and having longer go-past times on this word when they received a violation preview as opposed to a non-violation preview. We argue that participants were attempting to perform aspects of sentence integration on the basis of low-level orthographic information from the previewed word.

    Additional information

    Data files and R Scripts
  • Cutter, M. G., Martin, A. E., & Sturt, P. (2020). The activation of contextually predictable words in syntactically illegal positions. Quarterly Journal of Experimental Psychology, 73(9), 1423-1430. doi:10.1177/1747021820911021.

    Abstract

    We present an eye-tracking study testing a hypothesis emerging from several theories of prediction during language processing, whereby predictable words should be skipped more than unpredictable words even in syntactically illegal positions. Participants read sentences in which a target word became predictable by a certain point (e.g., “bone” is 92% predictable given “The dog buried his…”), with the next word actually being an intensifier (e.g., “really”), which a noun cannot follow. The target noun remained predictable to appear later in the sentence. We used the boundary paradigm to present the predictable noun or an alternative unpredictable noun (e.g., “food”) directly after the intensifier, until participants moved beyond the intensifier, at which point the noun changed to a syntactically legal word. Participants also read sentences in which predictable or unpredictable nouns appeared in syntactically legal positions. A Bayesian linear-mixed model suggested a 5.7% predictability effect on skipping of nouns in syntactically legal positions, and a 3.1% predictability effect on skipping of nouns in illegal positions. We discuss our findings in relation to theories of lexical prediction during reading.

    Additional information

    OSF data
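
A simplified, non-Bayesian stand-in for the mixed-model analysis described in the abstract above: skipping (coded 0/1) is modeled from predictability and syntactic legality with by-participant random intercepts. Variable names and the simulated effect sizes are placeholders; the published analysis was a Bayesian linear mixed model.

```python
# Simplified stand-in for the skipping analysis: a (non-Bayesian) linear mixed model
# of simulated skipping data with by-participant random intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subjects, n_items = 40, 60

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_items),
    "predictable": np.tile(rng.integers(0, 2, n_items), n_subjects),  # predictable vs unpredictable noun
    "legal": np.tile(rng.integers(0, 2, n_items), n_subjects),        # syntactically legal vs illegal position
})
# Simulate a small skipping advantage for predictable nouns, larger in legal positions
p_skip = 0.15 + 0.03 * df["predictable"] + 0.03 * df["predictable"] * df["legal"]
df["skipped"] = (rng.random(len(df)) < p_skip).astype(float)

model = smf.mixedlm("skipped ~ predictable * legal", df, groups=df["subject"])
print(model.fit().summary())
```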
  • Doumas, L. A. A., Martin, A. E., & Hummel, J. E. (2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 932-937). Montreal, QC: Cognitive Science Society.

    Abstract

    Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.
  • Hashemzadeh, M., Kaufeld, G., White, M., Martin, A. E., & Fyshe, A. (2020). From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli? In T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 645-655). Association for Computational Linguistics.

    Abstract

    The representations generated by many models of language (word embeddings, recurrent neural networks and transformers) correlate to brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
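
A generic sketch of relating language-model representations to brain recordings, in the spirit of the correlational analyses described above: a cross-validated ridge mapping from per-word LSTM hidden states to per-word brain features, scored by held-out prediction correlation. The arrays are random placeholders, and the paper's actual decoding test and statistics may differ.

```python
# Sketch: cross-validated ridge mapping from LSTM hidden states to brain features.
# All arrays are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
n_words, n_lstm_dims, n_brain_features = 500, 256, 102
lstm_states = rng.standard_normal((n_words, n_lstm_dims))  # per-word LSTM hidden states
brain = rng.standard_normal((n_words, n_brain_features))   # per-word brain features

fold_scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(lstm_states):
    model = Ridge(alpha=10.0).fit(lstm_states[train], brain[train])
    pred = model.predict(lstm_states[test])
    # Correlation between predicted and observed activity, averaged over brain features
    rs = [np.corrcoef(pred[:, j], brain[test, j])[0, 1] for j in range(n_brain_features)]
    fold_scores.append(np.mean(rs))

print(f"mean held-out prediction correlation: {np.mean(fold_scores):.3f}")
```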
  • Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.

    Abstract

    Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing.
  • Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.

    Abstract

    During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
  • Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.

    Abstract

    Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
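
A minimal sketch of quantifying stimulus-brain dependence with mutual information at a specific timescale, assuming one EEG channel and one speech-derived signal band-passed to the phrasal range reported above. The published analysis used a Gaussian-copula MI estimator on phase features; the kNN-based estimator here is a generic stand-in.

```python
# Sketch: mutual information between band-limited stimulus and EEG signals.
# Data are synthetic; the published analysis used a different (Gaussian-copula) estimator.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.feature_selection import mutual_info_regression

fs = 100
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(8)

stimulus = np.sin(2 * np.pi * 1.0 * t)                                        # phrase-timescale stimulus signal
eeg = 0.5 * np.sin(2 * np.pi * 1.0 * t + 0.8) + rng.standard_normal(t.size)   # one EEG channel

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

stim_band = bandpass(stimulus, 0.8, 1.1)  # phrasal timescale reported in the abstract
eeg_band = bandpass(eeg, 0.8, 1.1)

mi = mutual_info_regression(stim_band.reshape(-1, 1), eeg_band, random_state=0)[0]
print(f"MI between stimulus and EEG at 0.8-1.1 Hz: {mi:.3f} nats")
```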
  • Martin, A. E. (2020). A compositional neural architecture for language. Journal of Cognitive Neuroscience, 32(8), 1407-1427. doi:10.1162/jocn_a_01552.

    Abstract

    Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
  • Meyer, L., Sun, Y., & Martin, A. E. (2020). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language, Cognition and Neuroscience, 35(9), 1089-1099. doi:10.1080/23273798.2019.1693050.

    Abstract

    Research on speech processing is often focused on a phenomenon termed “entrainment”, whereby the cortex shadows rhythmic acoustic information with oscillatory activity. Entrainment has been observed to a range of rhythms present in speech; in addition, synchronicity with abstract information (e.g. syntactic structures) has been observed. Entrainment accounts face two challenges: First, speech is not exactly rhythmic; second, synchronicity with representations that lack a clear acoustic counterpart has been described. We propose that apparent entrainment does not always result from acoustic information. Rather, internal rhythms may have functionalities in the generation of abstract representations and predictions. While acoustics may often provide punctate opportunities for entrainment, internal rhythms may also live a life of their own to infer and predict information, leading to intrinsic synchronicity – not to be counted as entrainment. This possibility may open up new research avenues in the psycho– and neurolinguistic study of language processing and language development.
  • Meyer, L., Sun, Y., & Martin, A. E. (2020). “Entraining” to speech, generating language? Language, Cognition and Neuroscience, 35(9), 1138-1148. doi:10.1080/23273798.2020.1827155.

    Abstract

    Could meaning be read from acoustics, or from the refraction rate of pyramidal cells innervated by the cochlea, everyone would be an omniglot. Speech does not contain sufficient acoustic cues to identify linguistic units such as morphemes, words, and phrases without prior knowledge. Our target article (Meyer, L., Sun, Y., & Martin, A. E. (2019). Synchronous, but not entrained: Exogenous and endogenous cortical rhythms of speech and language processing. Language, Cognition and Neuroscience, 1–11. https://doi.org/10.1080/23273798.2019.1693050) thus questioned the concept of “entrainment” of neural oscillations to such units. We suggested that synchronicity with these points to the existence of endogenous functional “oscillators”—or population rhythmic activity in Giraud’s (2020) terms—that underlie the inference, generation, and prediction of linguistic units. Here, we address a series of inspirational commentaries by our colleagues. As apparent from these, some issues raised by our target article have already been raised in the literature. Psycho– and neurolinguists might still benefit from our reply, as “oscillations are an old concept in vision and motor functions, but a new one in linguistics” (Giraud, A.-L. 2020. Oscillations for all A commentary on Meyer, Sun & Martin (2020). Language, Cognition and Neuroscience, 1–8).
  • Martin, A. E., & McElree, B. (2009). Memory operations that support language comprehension: Evidence from verb-phrase ellipsis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1231-1239. doi:10.1037/a0016271.

    Abstract

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation.
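
For context on how speed and accuracy are separated in this paradigm, speed-accuracy tradeoff data in this literature are typically fit with an exponential approach to a limit; a sketch of the standard form (not necessarily the exact parameterization used in this paper) is:

```latex
% Standard exponential approach to a limit fit to SAT data:
%   lambda = asymptotic accuracy, beta = rate of rise, delta = intercept (in s)
\[
  d'(t) =
  \begin{cases}
    \lambda\bigl(1 - e^{-\beta (t - \delta)}\bigr), & t > \delta,\\
    0, & \text{otherwise.}
  \end{cases}
\]
```

Retrieval speed is indexed by the rate and intercept parameters and accuracy by the asymptote; the finding that interference location affected the asymptote but not the rate or intercept is what supports the direct-access interpretation over a search mechanism.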
  • Pylkkänen, L., Martin, A. E., McElree, B., & Smart, A. (2009). The Anterior Midline Field: Coercion or decision making? Brain and Language, 108(3), 184-190. doi:10.1016/j.bandl.2008.06.006.

    Abstract

    To study the neural bases of semantic composition in language processing without confounds from syntactic composition, recent magnetoencephalography (MEG) studies have investigated the processing of constructions that exhibit some type of syntax-semantics mismatch. The most studied case of such a mismatch is complement coercion; expressions such as the author began the book, where an entity-denoting noun phrase is coerced into an eventive meaning in order to match the semantic properties of the event-selecting verb (e.g., ‘the author began reading/writing the book’). These expressions have been found to elicit increased activity in the Anterior Midline Field (AMF), an MEG component elicited at frontomedial sensors at ∼400 ms after the onset of the coercing noun [Pylkkänen, L., & McElree, B. (2007). An MEG study of silent meaning. Journal of Cognitive Neuroscience, 19, 11]. Thus, the AMF constitutes a potential neural correlate of coercion. However, the AMF was generated in ventromedial prefrontal regions, which are heavily associated with decision-making. This raises the possibility that, instead of semantic processing, the AMF effect may have been related to the experimental task, which was a sensicality judgment. We tested this hypothesis by assessing the effect of coercion when subjects were simply reading for comprehension, without a decision-task. Additionally, we investigated coercion in an adjectival rather than a verbal environment to further generalize the findings. Our results show that an AMF effect of coercion is elicited without a decision-task and that the effect also extends to this novel syntactic environment. We conclude that in addition to its role in non-linguistic higher cognition, ventromedial prefrontal regions contribute to the resolution of syntax-semantics mismatches in language processing.
