Publications

  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104. doi:10.1016/0010-0277(83)90026-4.

    Abstract

Making a self-repair in speech typically proceeds in three phases. The first phase involves the monitoring of one’s own speech and the interruption of the flow of speech when trouble is detected. From an analysis of 959 spontaneous self-repairs it appears that interrupting follows detection promptly, with the exception that correct words tend to be completed. Another finding is that detection of trouble improves towards the end of constituents. The second phase is characterized by hesitation, pausing, but especially the use of so-called editing terms. Which editing term is used depends on the nature of the speech trouble in a rather regular fashion: Speech errors induce other editing terms than words that are merely inappropriate, and trouble which is detected quickly by the speaker is preferably signalled by the use of ‘uh’. The third phase consists of making the repair proper. The linguistic well-formedness of a repair is not dependent on the speaker’s respecting the integrity of constituents, but on the structural relation between original utterance and repair. A bi-conditional well-formedness rule links this relation to a corresponding relation between the conjuncts of a coordination. It is suggested that a similar relation holds also between question and answer. In all three cases the speaker respects certain structural commitments derived from an original utterance. It was finally shown that the editing term plus the first word of the repair proper almost always contain sufficient information for the listener to decide how the repair should be related to the original utterance. Speakers almost never produce misleading information in this respect. It is argued that speakers have little or no access to their speech production process; self-monitoring is probably based on parsing one’s own inner or overt speech.
  • Levelt, W. J. M. (1989). Hochleistung in Millisekunden: Sprechen und Sprache verstehen. Universitas, 44(511), 56-68.
  • Levelt, W. J. M. (1995). Hoezo 'neuro'? Hoezo 'linguïstisch'? Intermediair, 31(46), 32-37.
  • Levelt, W. J. M. (1994). Hoofdstukken uit de psychologie. Nederlands tijdschrift voor de psychologie, 49, 1-14.
  • Levelt, W. J. M. (Ed.). (1993). Lexical access in speech production. Cambridge, Mass.: Blackwell.

    Abstract

    Formerly published in: Cognition : international journal of cognitive science, vol. 42, nos. 1-3, 1992
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M., & Cutler, A. (1983). Prosodic marking in speech repair. Journal of semantics, 2, 205-217. doi:10.1093/semant/2.2.205.

    Abstract

Spontaneous self-corrections in speech pose a communication problem; the speaker must make clear to the listener not only that the original utterance was faulty, but where it was faulty and how the fault is to be corrected. Prosodic marking of corrections - making the prosody of the repair noticeably different from that of the original utterance - offers a resource which the speaker can exploit to provide the listener with such information. A corpus of more than 400 spontaneous speech repairs was analysed, and the prosodic characteristics compared with the syntactic and semantic characteristics of each repair. Prosodic marking showed no relationship at all with the syntactic characteristics of repairs. Instead, marking was associated with certain semantic factors: repairs were marked when the original utterance had been actually erroneous, rather than simply less appropriate than the repair; and repairs tended to be marked more often when the set of items encompassing the error and the repair was small rather than when it was large. These findings lend further weight to the characterization of accent as essentially semantic in function.
  • Levelt, W. J. M. (1995). The ability to speak: From intentions to spoken words. European Review, 3(1), 13-23. doi:10.1017/S1062798700001290.

    Abstract

    In recent decades, psychologists have become increasingly interested in our ability to speak. This paper sketches the present theoretical perspective on this most complex skill of homo sapiens. The generation of fluent speech is based on the interaction of various processing components. These mechanisms are highly specialized, dedicated to performing specific subroutines, such as retrieving appropriate words, generating morpho-syntactic structure, computing the phonological target shape of syllables, words, phrases and whole utterances, and creating and executing articulatory programmes. As in any complex skill, there is a self-monitoring mechanism that checks the output. These component processes are targets of increasingly sophisticated experimental research, of which this paper presents a few salient examples.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (1983). Wetenschapsbeleid: Drie actuele idolen en een godin. Grafiet, 1(4), 178-184.
  • Levelt, W. J. M. (1993). Timing in speech production with special reference to word form encoding. Annals of the New York Academy of Sciences, 682, 283-295. doi:10.1111/j.1749-6632.1993.tb22976.x.
  • Levinson, S. C. (2003). Space in language and cognition: Explorations in cognitive diversity. Cambridge: Cambridge University Press.
  • Levinson, S. C., & Brown, P. (1993). Background to "Immanuel Kant among the Tenejapans". Anthropology Newsletter, 34(3), 22-23. doi:10.1111/an.1993.34.3.22.
  • Levinson, S. C. (1989). A review of Relevance [book review of Dan Sperber & Deirdre Wilson, Relevance: communication and cognition]. Journal of Linguistics, 25, 455-472.
  • Levinson, S. C., & Brown, P. (2003). Emmanuel Kant chez les Tenejapans: L'Anthropologie comme philosophie empirique [Translated by Claude Vandeloise for 'Langues et Cognition']. Langues et Cognition, 239-278.

    Abstract

    This is a translation of Levinson and Brown (1994).
  • Levinson, S. C., & Meira, S. (2003). 'Natural concepts' in the spatial topological domain - adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language, 79(3), 485-516.

    Abstract

Most approaches to spatial language have assumed that the simplest spatial notions are (after Piaget) topological and universal (containment, contiguity, proximity, support, represented as semantic primitives such as IN, ON, UNDER, etc.). These concepts would be coded directly in language, above all in small closed classes such as adpositions—thus providing a striking example of semantic categories as language-specific projections of universal conceptual notions. This idea, if correct, should have as a consequence that the semantic categories instantiated in spatial adpositions should be essentially uniform crosslinguistically. This article attempts to verify this possibility by comparing the semantics of spatial adpositions in nine unrelated languages, with the help of a standard elicitation procedure, thus producing a preliminary semantic typology of spatial adpositional systems. The differences between the languages turn out to be so significant as to be incompatible with stronger versions of the UNIVERSAL CONCEPTUAL CATEGORIES hypothesis. Rather, the language-specific spatial adposition meanings seem to emerge as compact subsets of an underlying semantic space, with certain areas being statistical ATTRACTORS or FOCI. Moreover, a comparison of systems with different degrees of complexity suggests the possibility of positing implicational hierarchies for spatial adpositions. But such hierarchies need to be treated as successive divisions of semantic space, as in recent treatments of basic color terms. This type of analysis appears to be a promising approach for future work in semantic typology.
  • Levinson, S. C., & Brown, P. (1994). Immanuel Kant among the Tenejapans: Anthropology as empirical philosophy. Ethos, 22(1), 3-41. Retrieved from http://www.jstor.org/stable/640467.

    Abstract

    This paper confronts Kant’s (1768) view of human conceptions of space as fundamentally divided along the three planes of the human body with an empirical case study in the Mayan community of Tenejapa in southern Mexico, whose inhabitants do not use left/right distinctions to project regions in space. Tenejapans have names for the left hand and the right hand, and also a term for hand/arm in general, but they do not generalize the distinction to spatial regions -- there is no linguistic expression glossing as 'to the left' or 'on the left-hand side', for example. Tenejapans also show a remarkable indifference to incongruous counterparts. Nor is there any system of value associations with the left and the right. The Tenejapan evidence that speaks to these Kantian themes points in two directions: (a) Kant was wrong to think that the structure of spatial regions founded on the human frame, and in particular the distinctions based on left and right, are in some sense essential human intuitions; (b) Kant may have been right to think that the left/right opposition, the perception of enantiomorphs, clockwiseness, East-West dichotomies, etc., are intimately connected to an overall system of spatial conception.
  • Levinson, S. C. (1993). La Pragmatica [Italian translation of Pragmatics]. Bologna: Il Mulino.
  • Levinson, S. C., & Haviland, J. B. (1994). Introduction: Spatial conceptualization in Mayan languages. Linguistics, 32(4/5), 613-622.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (1989). Pragmática [Spanish translation]. Barcelona: Teide.
  • Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press.
  • Levinson, S. C. (1994). Vision, shape and linguistic description: Tzeltal body-part terminology and object description. Linguistics, 32(4/5), 791-856.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
  • Little, H., Eryilmaz, K., & de Boer, B. (2017). Conventionalisation and Discrimination as Competing Pressures on Continuous Speech-like Signals. Interaction studies, 18(3), 355-378. doi:10.1075/is.18.3.04lit.

    Abstract

    Arbitrary communication systems can emerge from iconic beginnings through processes of conventionalisation via interaction. Here, we explore whether this process of conventionalisation occurs with continuous, auditory signals. We conducted an artificial signalling experiment. Participants either created signals for themselves, or for a partner in a communication game. We found no evidence that the speech-like signals in our experiment became less iconic or simpler through interaction. We hypothesise that the reason for our results is that when it is difficult to be iconic initially because of the constraints of the modality, then iconicity needs to emerge to enable grounding before conventionalisation can occur. Further, pressures for discrimination, caused by the expanding meaning space in our study, may cause more complexity to emerge, again as a result of the restrictive signalling modality. Our findings have possible implications for the processes of conventionalisation possible in signed and spoken languages, as the spoken modality is more restrictive than the manual modality.
  • Little, H., Rasilo, H., van der Ham, S., & Eryılmaz, K. (2017). Empirical approaches for investigating the origins of structure in speech. Interaction studies, 18(3), 332-354. doi:10.1075/is.18.3.03lit.

    Abstract

    In language evolution research, the use of computational and experimental methods to investigate the emergence of structure in language is exploding. In this review, we look exclusively at work exploring the emergence of structure in speech, on both a categorical level (what drives the emergence of an inventory of individual speech sounds), and a combinatorial level (how these individual speech sounds emerge and are reused as part of larger structures). We show that computational and experimental methods for investigating population-level processes can be effectively used to explore and measure the effects of learning, communication and transmission on the emergence of structure in speech. We also look at work on child language acquisition as a tool for generating and validating hypotheses for the emergence of speech categories. Further, we review the effects of noise, iconicity and production effects.
  • Little, H. (2017). Introduction to the Special Issue on the Emergence of Sound Systems. Journal of Language Evolution, 2(1), 1-3. doi:10.1093/jole/lzx014.

    Abstract

    How did human sound systems get to be the way they are? Collecting contributions implementing a wealth of methods to address this question, this special issue treats language and speech as being the result of a complex adaptive system. The work throughout provides evidence and theory at the levels of phylogeny, glossogeny and ontogeny. In taking a multi-disciplinary approach that considers interactions within and between these levels of selection, the papers collectively provide a valuable, integrated contribution to existing work on the evolution of speech and sound systems.
  • Little, H., Eryılmaz, K., & de Boer, B. (2017). Signal dimensionality and the emergence of combinatorial structure. Cognition, 168, 1-15. doi:10.1016/j.cognition.2017.06.011.

    Abstract

    In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space having similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants' hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we use trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced and found that when topologies do not match, then there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign).

  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words form the basis of probabilistic measures such as surprisal and perplexity, which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

  • Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. Neuroimage, 20, 1934-1943. doi:10.1016/j.neuroimage.2003.07.017.

    Abstract

    The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.
  • Mace, R., Jordan, F., & Holden, C. (2003). Testing evolutionary hypotheses about human biological adaptation using cross-cultural comparison. Comparative Biochemistry and Physiology A-Molecular & Integrative Physiology, 136(1), 85-94. doi:10.1016/S1095-6433(03)00019-9.

    Abstract

    Physiological data from a range of human populations living in different environments can provide valuable information for testing evolutionary hypotheses about human adaptation. By taking into account the effects of population history, phylogenetic comparative methods can help us determine whether variation results from selection due to particular environmental variables. These selective forces could even be due to cultural traits, which means that gene-culture co-evolution may be occurring. In this paper, we outline two examples of the use of these approaches to test adaptive hypotheses that explain global variation in two physiological traits: the first is lactose digestion capacity in adults, and the second is population sex-ratio at birth. We show that lower than average sex ratio at birth is associated with high fertility, and argue that global variation in sex ratio at birth has evolved as a response to the high physiological costs of producing boys in high fertility populations.
  • Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recognition: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202-227. doi:10.1037/0096-3445.132.2.202.

    Abstract

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
  • Magyari, L. (2003). Mit ne gondoljunk az állatokról? [What not to think about animals?] [Review of the book Wild Minds: What animals really think by M. Hauser]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 58(3), 417-424. doi:10.1556/MPSzle.58.2003.3.5.
  • Magyari, L., De Ruiter, J. P., & Levinson, S. C. (2017). Temporal preparation for speaking in question-answer sequences. Frontiers in Psychology, 8: 211. doi:10.3389/fpsyg.2017.00211.

    Abstract

    In everyday conversations, the gap between turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In the experiment, it was possible to guess the answer at the beginning of the questions in half of the experimental trials. We also manipulated whether it was possible to predict the length of the last word of the questions. The results suggest that when listeners know the answer early, they start speech production during the questions. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2017). Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Frontiers in Psychology, 8: 1164. doi:10.3389/fpsyg.2017.01164.

    Abstract

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

  • Majid, A. (2003). Towards behavioural genomics. The Psychologist, 16(6), 298-298.
  • Majid, A. (2003). Into the deep. The Psychologist, 16(6), 300-300.
  • Majid, A., Speed, L., Croijmans, I., & Arshamian, A. (2017). What makes a better smeller? Perception, 46, 406-430. doi:10.1177/0301006616688224.

    Abstract

    Olfaction is often viewed as difficult, yet the empirical evidence suggests a different picture. A closer look shows people around the world differ in their ability to detect, discriminate, and name odors. This gives rise to the question of what influences our ability to smell. Instead of focusing on olfactory deficiencies, this review presents a positive perspective by focusing on factors that make someone a better smeller. We consider three driving forces in improving olfactory ability: one’s biological makeup, one’s experience, and the environment. For each factor, we consider aspects proposed to improve odor perception and critically examine the evidence; as well as introducing lesser discussed areas. In terms of biology, there are cases of neurodiversity, such as olfactory synesthesia, that serve to enhance olfactory ability. Our lifetime experience, be it typical development or unique training experience, can also modify the trajectory of olfaction. Finally, our odor environment, in terms of ambient odor or culinary traditions, can influence odor perception too. Rather than highlighting the weaknesses of olfaction, we emphasize routes to harnessing our olfactory potential.
  • Mangione-Smith, R., Stivers, T., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Online commentary during the physical examination: A communication tool for avoiding inappropriate antibiotic prescribing? Social Science and Medicine, 56(2), 313-320.
  • Mansbridge, M. P., Tamaoka, K., Xiong, K., & Verdonschot, R. G. (2017). Ambiguity in the processing of Mandarin Chinese relative clauses: One factor cannot explain it all. PLoS One, 12(6): e0178369. doi:10.1371/journal.pone.0178369.

    Abstract

    This study addresses the question of whether native Mandarin Chinese speakers process and comprehend subject-extracted relative clauses (SRC) more readily than object-extracted relative clauses (ORC) in Mandarin Chinese. Presently, this has been a hotly debated issue, with various studies producing contrasting results. Using two eye-tracking experiments with ambiguous and unambiguous RCs, this study shows that both ORCs and SRCs have different processing requirements depending on the locus and time course during reading. The results reveal that ORC reading was possibly facilitated by linear/temporal integration and canonicity. On the other hand, similarity-based interference made ORCs more difficult, and expectation-based processing was more prominent for unambiguous ORCs. Overall, RC processing in Mandarin should not be reduced to a single ORC (dis)advantage, but understood as multiple interdependent factors influencing whether ORCs are either more difficult or easier to parse depending on the task and context at hand.
  • Marcus, G. F., & Fisher, S. E. (2003). FOXP2 in focus: What can genes tell us about speech and language? Trends in Cognitive Sciences, 7, 257-262. doi:10.1016/S1364-6613(03)00104-9.

    Abstract

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. In 2001, a study described the first case of a gene, FOXP2, which is thought to be implicated in our ability to acquire spoken language. In the present article, we discuss how this gene was discovered, what it might do, how it relates to other genes, and what it could tell us about the nature of speech and language development. We explain how FOXP2 could, without being specific to the brain or to our own species, still provide an invaluable entry-point into understanding the genetic cascades and neural pathways that contribute to our capacity for speech and language.
  • Marlow, A. J., Fisher, S. E., Francks, C., MacPhie, I. L., Cherny, S. S., Richardson, A. J., Talcott, J. B., Stein, J. F., Monaco, A. P., & Cardon, L. R. (2003). Use of multivariate linkage analysis for dissection of a complex cognitive trait. American Journal of Human Genetics, 72(3), 561-570. doi:10.1086/368201.

    Abstract

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. PLoS Biology, 15(3): e2000663. doi:10.1371/journal.pbio.2000663.

    Abstract

    Biological systems often detect species-specific signals in the environment. In humans, speech and language are species-specific signals of fundamental biological importance. To detect the linguistic signal, human brains must form hierarchical representations from a sequence of perceptual inputs distributed in time. What mechanism underlies this ability? One hypothesis is that the brain repurposed an available neurobiological mechanism when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect the linguistic signal and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a computational model of analogy, built for an entirely different purpose—learning relational reasoning—processes sentences, represents their meaning, and, crucially, exhibits oscillatory activation patterns resembling cortical signals elicited by the same stimuli. Such redundancy in the cortical and machine signals is indicative of formal and mechanistic alignment between representational structure building and “cortical” oscillations. By inductive inference, this synergy suggests that the cortical signal reflects structure generation, just as the machine signal does. A single mechanism—using time to encode information across a layered network—generates the kind of (de)compositional representational hierarchy that is crucial for human language and offers a mechanistic linking hypothesis between linguistic representation and cortical computation.
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Martin, A. E., Monahan, P. J., & Samuel, A. G. (2017). Prediction of agreement and phonetic overlap shape sublexical identification. Language and Speech, 60(3), 356-376. doi:10.1177/0023830916650714.

    Abstract

    The mapping between the physical speech signal and our internal representations is rarely straightforward. When faced with uncertainty, higher-order information is used to parse the signal and because of this, the lexicon and some aspects of sentential context have been shown to modulate the identification of ambiguous phonetic segments. Here, using a phoneme identification task (i.e., participants judged whether they heard [o] or [a] at the end of an adjective in a noun–adjective sequence), we asked whether grammatical gender cues influence phonetic identification and if this influence is shaped by the phonetic properties of the agreeing elements. In three experiments, we show that phrase-level gender agreement in Spanish affects the identification of ambiguous adjective-final vowels. Moreover, this effect is strongest when the phonetic characteristics of the element triggering agreement and the phonetic form of the agreeing element are identical. Our data are consistent with models wherein listeners generate specific predictions based on the interplay of underlying morphosyntactic knowledge and surface phonetic cues.
  • Massaro, D. W., & Perlman, M. (2017). Quantifying iconicity’s contribution during language acquisition: Implications for vocabulary learning. Frontiers in Communication, 2: 4. doi:10.3389/fcomm.2017.00004.

    Abstract

    Previous research found that iconicity—the motivated correspondence between word form and meaning—contributes to expressive vocabulary acquisition. We present two new experiments with two different databases and with novel analyses to give a detailed quantification of how iconicity contributes to vocabulary acquisition across development, including both receptive understanding and production. The results demonstrate that iconicity is more prevalent early in acquisition and diminishes with increasing age and with increasing vocabulary. In the first experiment, we found that the influence of iconicity on children’s production vocabulary decreased gradually with increasing age. These effects were independent of the observed influence of concreteness, difficulty of articulation, and parental input frequency. Importantly, we substantiated the independence of iconicity, concreteness, and systematicity—a statistical regularity between sounds and meanings. In the second experiment, we found that the average iconicity of both a child’s receptive vocabulary and expressive vocabulary diminished dramatically with increases in vocabulary size. These results indicate that iconic words tend to be learned early in the acquisition of both receptive vocabulary and expressive vocabulary. We recommend that iconicity be included as one of the many different influences on a child’s early vocabulary acquisition.

    Facing the logically insurmountable challenge to link the form of a novel word (e.g., “gavagai”) with its particular meaning (e.g., “rabbit”; Quine, 1960, 1990/1992), children manage to learn words with incredible ease. Interest in this process has permeated empirical and theoretical research in developmental psychology, psycholinguistics, and language studies more generally.
Investigators have studied which words are learned and when they are learned (Fenson et al., 1994), biases in word learning (Markman, 1990, 1991); the perceptual, social, and linguistic properties of the words (Gentner, 1982; Waxman, 1999; Maguire et al., 2006; Vosoughi et al., 2010), the structure of the language being learned (Gentner and Boroditsky, 2001), and the influence of the child’s milieu on word learning (Hart and Risley, 1995; Roy et al., 2015). A growing number of studies also show that the iconicity of words might be a significant factor in word learning (Imai and Kita, 2014; Perniss and Vigliocco, 2014; Perry et al., 2015). Iconicity refers generally to a correspondence between the form of a signal (e.g., spoken word, sign, and written character) and its meaning. For example, the sign for tree is iconic in many signed languages: it resembles a branching tree waving above the ground in American Sign Language, outlines the shape of a tree in Danish Sign Language and forms a tree trunk in Chinese Sign Language. In contrast to signed languages, the words of spoken languages have traditionally been treated as arbitrary, with the assumption that the forms of most words bear no resemblance to their meaning (e.g., Hockett, 1960; Pinker and Bloom, 1990). However, there is now a large body of research showing that iconicity is prevalent in the lexicons of many spoken languages (Nuckolls, 1999; Dingemanse et al., 2015). Most languages have an inventory of iconic words for sounds—onomatopoeic words such as splash, slurp, and moo, which sound somewhat like the sound of the real-world event to which they refer. Rhodes (1994), for example, counts more than 100 of these words in English. Many languages also contain large inventories of ideophones—a distinctively iconic class of words that is used to express a variety of sensorimotor-rich meanings (Nuckolls, 1999; Voeltz and Kilian-Hatz, 2001; Dingemanse, 2012). 
For example, in Japanese, the word “koron”—with a voiceless [k] refers to a light object rolling once, the reduplicated “korokoro” to a light object rolling repeatedly, and “gorogoro”—with a voiced [g]—to a heavy object rolling repeatedly (Imai and Kita, 2014). And in Siwu, spoken in Ghana, ideophones include words like fwεfwε “springy, elastic” and saaa “cool sensation” (Dingemanse et al., 2015). Outside of onomatopoeia and ideophones, there is also evidence that adjectives and verbs—which also tend to convey sensorimotor imagery—are also relatively iconic (Nygaard et al., 2009; Perry et al., 2015). Another domain of iconic words involves some correspondence between the point of articulation of a word and its meaning. For example, there appears to be some prevalence across languages of nasal consonants in words for nose and bilabial consonants in words for lip (Urban, 2011). Spoken words can also have a correspondence between a word’s meaning and other aspects of its pronunciation. The word teeny, meaning small, is pronounced with a relatively small vocal tract, with high front vowels characterized by retracted lips and a high-frequency second formant (Ohala, 1994). Thus, teeny can be recognized as iconic of “small” (compared to the larger vocal tract configuration of the back, rounded vowel in huge), a pattern that is documented in the lexicons of a diversity of languages (Ultan, 1978; Blasi et al., 2016). Lewis and Frank (2016) have studied a more abstract form of iconicity that more meaningfully complex words tend to be longer. An evaluation of many diverse languages revealed that conceptually more complex meanings tend to have longer spoken forms. In their study, participants tended to assign a relatively long novel word to a conceptually more complex referent. Understanding that more complex meaning is usually represented by a longer word could aid a child’s parsing of a stream of spoken language and thus facilitate word learning. 
Some developmental psychologists have theorized that iconicity helps young children learn words by “bootstrapping” or “bridging” the association between a symbol and its referent (Imai and Kita, 2014; Perniss and Vigliocco, 2014). According to this idea, children begin to master word learning with the aid of iconic cues, which help to profile the connection between the form of a word and its meaning out in the world. The learning of verbs in particular may benefit from iconicity, as the referents of verbs are more abstract and challenging for young children to identify (Gentner, 1982; Snedeker and Gleitman, 2004). By helping children gain a firmer grasp of the concept of a symbol, iconicity might set the stage for the ensuing word-learning spurt of non-iconic words. The hypothesis that iconicity plays a role in word learning is supported by experimental studies showing that young children are better at learning words—especially verbs—when they are iconic (Imai et al., 2008; Kantartzis et al., 2011; Yoshida, 2012). In one study, for example, 3-year-old Japanese children were taught a set of novel verbs for actions. Some of the words the children learned were iconic (“sound-symbolic”), created on the basis of iconic patterns found in Japanese mimetics (e.g., the novel word nosunosu for a slow manner of walking; Imai et al., 2008). The results showed that children were better able to generalize action words across agents when the verb was iconic of the action compared to when it was not. A subsequent study also using novel verbs based on Japanese mimetics replicated the finding with 3-year-old English-speaking children (Kantartzis et al., 2011). However, it remains to be determined whether children trained in an iconic condition can generalize their learning to a non-iconic condition that would not otherwise be learned. Children as young as 14 months of age have been shown to benefit from iconicity in word learning (Imai et al., 2015). 
These children were better at learning novel words for spikey and rounded shapes when the words were iconic, corresponding to kiki and bouba sound symbolism (e.g., Köhler, 1947; Ramachandran and Hubbard, 2001). If iconic words are indeed easier to learn, there should be a preponderance of iconic words early in the learning of natural languages. There is evidence that this is the case in signed languages, which are widely recognized to contain a prevalence of iconic signs [Klima and Bellugi, 1979; e.g., as evident in Signing Savvy (2016)]. Although the role of iconicity in sign acquisition has been disputed [e.g., Orlansky and Bonvillian, 1984; see Thompson (2011) for discussion], the most thorough study to date found that signs of British Sign Language (BSL) that were learned earlier by children tended to be more iconic (Thompson et al., 2012). Thompson et al.’s measure of the age of acquisition of signs came from parental reports from a version of the MacArthur-Bates Communicative Development Inventory (MCDI; Fenson et al., 1994) adapted for BSL (Woolfe et al., 2010). The iconicity of signs was taken from norms based on BSL signers’ judgments using a scale of 1 (not at all iconic) to 7 [highly iconic; see Vinson et al. (2008), for norming details and BSL videos]. Thompson et al. (2012) found a positive correlation between iconicity judgments and words understood and produced. This relationship held up even after controlling for the contribution of imageability and familiarity. Surprisingly, however, there was a significantly stronger correlation for older children (21- to 30-month olds) than for younger children (age 11- to 20-month olds). Thompson et al. suggested that the larger role for iconicity for the older children may result from their increasing cognitive abilities or their greater experience in understanding meaningful form-meaning mappings. 
However, this suggestion does not fit with the expectation that iconicity should play a larger role earlier in language use. Thus, although supporting a role for iconicity in word learning, the larger influence for older children is inconsistent with the bootstrapping hypothesis, in which iconicity should play a larger role earlier in vocabulary learning (Imai and Kita, 2014; Perniss and Vigliocco, 2014). There is also evidence in spoken languages that earlier learned words tend to be more iconic. Perry et al. (2015) collected iconicity ratings on the roughly 600 English and Spanish words that are learned earliest by children, selected from their respective MCDIs. Native speakers on Amazon Mechanical Turk rated the iconicity of the words on a scale from −5 to 5, where 5 indicated that a word was highly iconic, −5 that it sounded like the opposite of its meaning, and 0 that it was completely arbitrary. Their instructions to raters are given in the Appendix because the same instructions were used for acquiring our iconicity ratings. The Perry et al. (2015) results showed that the likelihood of a word in children’s production vocabulary in both English and Spanish at 30 months was positively correlated with the iconicity ratings, even when several other possible contributing factors were partialed out, including log word frequency, concreteness, and word length. The pattern in Spanish held for two collections of iconicity ratings, one with the verbs of the 600-word set presented in infinitive form, and one with the verbs conjugated in the third person singular form. In English, the correlation between age of acquisition and iconicity held when the ratings were collected for words presented in written form only and in written form plus a spoken recording. It also held for ratings based on a more implicit measure of iconicity in which participants rated how accurately a space alien could guess the meaning of the word based on its sound alone. 
The pattern in English also held when Perry et al. (2015) factored out the systematicity of words [taken from Monaghan et al. (2014)]. Systematicity is measured as a correlation between form similarity and meaning similarity—that is, the degree to which words with similar meanings have similar forms. Monaghan et al. computed systematicity for a large number of English words and found a negative correlation with the age of acquisition of the word from 2 to 13+ years of age—more systematic words are learned earlier. Monaghan et al. (2014) and Christiansen and Chater (2016) observe that consistent sound-meaning patterns may facilitate early vocabulary acquisition, but the child would soon have to master arbitrary relationships necessitated by increases in vocabulary size. In theory, systematicity, sometimes called “relative iconicity,” is independent of iconicity. For example, the English cluster gl– occurs systematically in several words related to “vision” and “light,” such as glitter, glimmer, and glisten (Bergen, 2004), but the segments bear no obvious resemblance to this meaning. Monaghan et al. (2014) question whether spoken languages afford sufficient degrees of articulatory freedom for words to be iconic but not systematic. As evidence, they give the example of onomatopoeic words for the calls of small animals (e.g., peep and cheep) versus calls of big animals (roar and grrr), which would systematically reflect the size of the animal. Although Perry et al. (2015) found a positive effect of iconicity at 30 months, they did not evaluate its influence across the first years of a child’s life. To address this question, we conduct a more detailed examination of the time course of iconicity in word learning across the first 4 years of expressive vocabulary acquisition. In addition, we examine the role of iconicity in the acquisition of receptive vocabulary as well as productive vocabulary. 
There is some evidence that although receptive vocabulary and productive vocabulary are correlated with one another, a variable might not have equivalent influences on these two expressions of vocabulary. Massaro and Rowe (2015), for example, showed that difficulty of articulation had a strong effect on word production but not word comprehension. Thus, it is possible that the influence of iconicity on vocabulary development differs between production and comprehension. In particular, a larger influence on comprehension might follow from the emphasis of the bootstrapping hypothesis on iconicity serving to perceptually cue children to the connection between the sound of a word and its meaning.
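The systematicity measure mentioned in the entry above (Monaghan et al., 2014) is a correlation between word-form similarity and meaning similarity across pairs of words. The following is a minimal sketch of that idea only; the toy lexicon, the meaning-similarity scores, and the choice of normalized Levenshtein distance as the form-similarity measure are illustrative assumptions, not the procedure of Monaghan et al.:

```python
# Sketch: systematicity as the correlation between pairwise form
# similarity and pairwise meaning similarity over a (toy) lexicon.
from itertools import combinations

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def form_similarity(a, b):
    # 1.0 = identical forms, 0.0 = maximally different.
    return 1 - edit_distance(a, b) / max(len(a), len(b))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def systematicity(words, meaning_sim):
    # meaning_sim maps frozenset({w1, w2}) -> similarity in [0, 1].
    pairs = list(combinations(words, 2))
    form = [form_similarity(a, b) for a, b in pairs]
    meaning = [meaning_sim[frozenset(p)] for p in pairs]
    return pearson(form, meaning)

# Toy lexicon: the gl- "light" cluster (Bergen, 2004) plus unrelated
# words, with invented meaning-similarity scores.
words = ["glitter", "glimmer", "glisten", "dog", "table"]
light = {"glitter", "glimmer", "glisten"}
sim = {frozenset((a, b)): (0.9 if a in light and b in light else 0.1)
       for a, b in combinations(words, 2)}

print(round(systematicity(words, sim), 2))
```

Because the gl- words share most of their segments and were assigned high meaning similarity, the correlation comes out strongly positive: similar forms go with similar meanings, which is exactly the "relative iconicity" the entry contrasts with word-level iconicity.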
  • McLaughlin, R. L., Schijven, D., Van Rheenen, W., Van Eijk, K. R., O’Brien, M., Project MinE GWAS Consortium, Schizophrenia Working Group of the Psychiatric Genomics Consortium, Kahn, R. S., Ophoff, R. A., Goris, A., Bradley, D. G., Al-Chalabi, A., van den Berg, L. H., Luykx, J. J., Hardiman, O., & Veldink, J. H. (2017). Genetic correlation between amyotrophic lateral sclerosis and schizophrenia. Nature Communications, 8: 14774. doi:10.1038/ncomms14774.

    Abstract

    We have previously shown higher-than-expected rates of schizophrenia in relatives of patients with amyotrophic lateral sclerosis (ALS), suggesting an aetiological relationship between the diseases. Here, we investigate the genetic relationship between ALS and schizophrenia using genome-wide association study data from over 100,000 unique individuals. Using linkage disequilibrium score regression, we estimate the genetic correlation between ALS and schizophrenia to be 14.3% (7.05–21.6; P = 1 × 10⁻⁴) with schizophrenia polygenic risk scores explaining up to 0.12% of the variance in ALS (P = 8.4 × 10⁻⁷). A modest increase in comorbidity of ALS and schizophrenia is expected given these findings (odds ratio 1.08–1.26) but this would require very large studies to observe epidemiologically. We identify five potential novel ALS-associated loci using conditional false discovery rate analysis. It is likely that shared neurobiological mechanisms between these two disorders will engender novel hypotheses in future preclinical and clinical studies.
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., Norris, D., & Cutler, A. (1994). Competition in spoken word recognition: Spotting words in other words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 621-638.

    Abstract

    Although word boundaries are rarely clearly marked, listeners can rapidly recognize the individual words of spoken sentences. Some theories explain this in terms of competition between multiply activated lexical hypotheses; others invoke sensitivity to prosodic structure. We describe a connectionist model, SHORTLIST, in which recognition by activation and competition is successful with a realistically sized lexicon. Three experiments are then reported in which listeners detected real words embedded in nonsense strings, some of which were themselves the onsets of longer words. Effects both of competition between words and of prosodic structure were observed, suggesting that activation and competition alone are not sufficient to explain word recognition in continuous speech. However, the results can be accounted for by a version of SHORTLIST that is sensitive to prosodic structure.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Menks, W. M., Furger, R., Lenz, C., Fehlbaum, L. V., Stadler, C., & Raschle, N. M. (2017). Microstructural white matter alterations in the corpus callosum of girls with conduct disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 56, 258-265. doi:10.1016/j.jaac.2016.12.006.

    Abstract

    Objective

    Diffusion tensor imaging (DTI) studies in adolescent conduct disorder (CD) have demonstrated white matter alterations of tracts connecting functionally distinct fronto-limbic regions, but only in boys or mixed-gender samples. So far, no study has investigated white matter integrity in girls with CD on a whole-brain level. Therefore, our aim was to investigate white matter alterations in adolescent girls with CD.
    Method

    We collected high-resolution DTI data from 24 girls with CD and 20 typically developing control girls using a 3T magnetic resonance imaging system. Fractional anisotropy (FA) and mean diffusivity (MD) were analyzed for whole-brain as well as a priori-defined regions of interest, while controlling for age and intelligence, using a voxel-based analysis and an age-appropriate customized template.
    Results

    Whole-brain findings revealed white matter alterations (i.e., increased FA) in girls with CD bilaterally within the body of the corpus callosum, expanding toward the right cingulum and left corona radiata. The FA and MD results in a priori-defined regions of interest were more widespread and included changes in the cingulum, corona radiata, fornix, and uncinate fasciculus. These results were not driven by age, intelligence, or attention-deficit/hyperactivity disorder comorbidity.
    Conclusion

    This report provides the first evidence of white matter alterations in female adolescents with CD as indicated through white matter reductions in callosal tracts. This finding enhances current knowledge about the neuropathological basis of female CD. An increased understanding of gender-specific neuronal characteristics in CD may influence diagnosis, early detection, and successful intervention strategies.
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Meyer, A. S., & Gerakaki, S. (2017). The art of conversation: Why it’s harder than you might think. Contact Magazine, 43(2), 11-15. Retrieved from http://contact.teslontario.org/the-art-of-conversation-why-its-harder-than-you-might-think/.
  • Meyer, A. S. (2017). Structural priming is not a Royal Road to representations. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e305. doi:10.1017/S0140525X1700053X.

    Abstract

    Branigan & Pickering (B&P) propose that the structural priming paradigm is a Royal Road to linguistic representations of any kind, unobstructed by influences of psychological processes. In my view, however, they are too optimistic about the versatility of the paradigm and, more importantly, its ability to provide direct evidence about the nature of stored linguistic representations.
  • Meyer, A. S. (1994). Timing in sentence production. Journal of Memory and Language, 33, 471-492. doi:10.1006/jmla.1994.1022.

    Abstract

    Recently, a new theory of timing in sentence production has been proposed by Ferreira (1993). This theory assumes that at the phonological level, each syllable of an utterance is assigned one or more abstract timing units depending on its position in the prosodic structure. The number of timing units associated with a syllable determines the time interval between its onset and the onset of the next syllable. An interesting prediction from the theory, which was confirmed in Ferreira's experiments with speakers of American English, is that the time intervals between syllable onsets should only depend on the syllables' positions in the prosodic structure, but not on their segmental content. However, in the present experiments, which were carried out in Dutch, the intervals between syllable onsets were consistently longer for phonetically long syllables than for short syllables. The implications of this result for models of timing in sentence production are discussed.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Moers, C., Meyer, A. S., & Janse, E. (2017). Effects of word frequency and transitional probability on word reading durations of younger and older speakers. Language and Speech, 60(2), 289-317. doi:10.1177/0023830916649215.

    Abstract

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups of Dutch speakers (younger children, 8–12 years; adolescents, 12–18 years; and older adults, 62–95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
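    The forward and backward TPs operationalized in this study can be illustrated with a minimal sketch from raw bigram counts (this is not the authors' code; the toy corpus, function name, and use of simple unigram counts in the denominator are illustrative assumptions):

    ```python
    from collections import Counter

    def transitional_probabilities(tokens):
        """Estimate forward TP, P(w2 | w1) = count(w1 w2) / count(w1),
        and backward TP, P(w1 | w2) = count(w1 w2) / count(w2),
        from a flat list of word tokens."""
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        forward = {pair: c / unigrams[pair[0]] for pair, c in bigrams.items()}
        backward = {pair: c / unigrams[pair[1]] for pair, c in bigrams.items()}
        return forward, backward

    # Toy example: in "the cat sat on the mat", "the" occurs twice,
    # so the forward TP of "cat" given "the" is 1/2.
    forward, backward = transitional_probabilities("the cat sat on the mat".split())
    ```

    On a real corpus one would of course smooth the counts and condition on much larger samples; the sketch only shows the conditioning itself.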
  • Moisik, S. R., & Dediu, D. (2017). Anatomical biasing and clicks: Evidence from biomechanical modeling. Journal of Language Evolution, 2(1), 37-51. doi:10.1093/jole/lzx004.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modeling and experimental research is required to solidify the claim.

    Additional information

    lzx004_Supp.zip
  • Moisik, S. R., & Gick, B. (2017). The quantal larynx: The stable regions of laryngeal biomechanics and implications for speech production. Journal of Speech, Language, and Hearing Research, 60, 540-560. doi:10.1044/2016_JSLHR-S-16-0019.

    Abstract

    Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal–ventricular stop, and aryepiglotto–epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with “quantal” or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory–biomechanical grounding of numerous phonetic and phonological phenomena.
  • Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21-34. doi:10.1111/tops.12239.

    Abstract

    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning from cross‐situational, multimodal information was constructed and tested. Key to the model's robustness was the presence of multiple, individually unreliable information sources to support learning. This “degeneracy” in the language system has a detrimental effect on learning, compared to a noise‐free environment, but has a critically important effect on acquisition of a canalized system that is resistant to environmental noise in communication.
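    The cross-situational mechanism at the heart of such word-learning models can be sketched minimally (this is not Monaghan's implementation, which uses a connectionist network over multiple cue types; the scene format, function name, and simple count-and-argmax rule are illustrative assumptions):

    ```python
    from collections import defaultdict

    def cross_situational_learner(scenes):
        """Accumulate word-referent co-occurrence counts across scenes,
        where each scene pairs the words heard with the referents in view,
        then map each word to its most frequent co-occurring referent."""
        counts = defaultdict(lambda: defaultdict(int))
        for words, referents in scenes:
            for w in words:
                for r in referents:
                    counts[w][r] += 1
        return {w: max(rs, key=rs.get) for w, rs in counts.items()}

    # Each scene is individually ambiguous, but across scenes the
    # word-referent mapping becomes recoverable.
    scenes = [(["ball", "dog"], ["BALL", "DOG"]),
              (["ball"], ["BALL", "CAT"]),
              (["dog"], ["DOG", "CAT"])]
    lexicon = cross_situational_learner(scenes)
    ```

    The sketch makes the key point of the paper's setup concrete: no single scene identifies a referent, yet aggregation over individually unreliable situations does.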
  • Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(S1), 14-39. doi:10.1111/lang.12221.

    Abstract

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language exposure has revolutionized the field. New techniques enable more precise measurements of children's actual language input, and these corpora constrain computational and cognitive theories of language development, which can then generate predictions about learning behavior. We describe several instances where corpus, computational, and experimental work have been productively combined to uncover the first language acquisition process and the richness of multimodal properties of the environment, highlighting how these methods can be extended to address related issues in second language research. Finally, we outline some of the difficulties that can be encountered when applying multimethod approaches and show how these difficulties can be obviated.
  • Monaghan, P., Chang, Y.-N., Welbourne, S., & Brysbaert, M. (2017). Exploring the relations between word frequency, language exposure, and bilingualism in a computational model of reading. Journal of Memory and Language, 93, 1-27. doi:10.1016/j.jml.2016.08.003.

    Abstract

    Individuals show differences in the extent to which psycholinguistic variables predict their responses for lexical processing tasks. A key variable accounting for much variance in lexical processing is frequency, but the size of the frequency effect has been demonstrated to reduce as a consequence of the individual’s vocabulary size. Using a connectionist computational implementation of the triangle model on a large set of English words, where orthographic, phonological, and semantic representations interact during processing, we show that the model demonstrates a reduced frequency effect as a consequence of amount of exposure to the language, a variable that was also a cause of greater vocabulary size in the model. The model was also trained to learn a second language, Dutch, and replicated behavioural observations that increased proficiency in a second language resulted in reduced frequency effects for that language but increased frequency effects in the first language. The model provides a first step to demonstrating causal relations between psycholinguistic variables in a model of individual differences in lexical processing, and the effect of bilingualism on interacting variables within the language processing system.
  • Mongelli, V., Dehaene, S., Vinckier, F., Peretz, I., Bartolomeo, P., & Cohen, L. (2017). Music and words in the visual cortex: The impact of musical expertise. Cortex, 86, 260-274. doi:10.1016/j.cortex.2016.05.016.

    Abstract

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
  • Montero-Melis, G., & Bylund, E. (2017). Getting the ball rolling: the cross-linguistic conceptualization of caused motion. Language and Cognition, 9(3), 446–472. doi:10.1017/langcog.2016.22.

    Abstract

    Does the way we talk about events correspond to how we conceptualize them? Three experiments (N = 135) examined how Spanish and Swedish native speakers judge event similarity in the domain of caused motion (‘He rolled the tyre into the barn’). Spanish and Swedish motion descriptions regularly encode path (‘into’), but differ in how systematically they include manner information (‘roll’). We designed a similarity arrangement task which allowed participants to give varying weights to different dimensions when gauging event similarity. The three experiments progressively reduced the likelihood that speakers were using language to solve the task. We found that, as long as the use of language was possible (Experiments 1 and 2), Swedish speakers were more likely than Spanish speakers to base their similarity arrangements on object manner (rolling/sliding). However, when recruitment of language was hindered through verbal interference, cross-linguistic differences disappeared (Experiment 3). A compound analysis of all experiments further showed that (i) cross-linguistic differences were played out against a backdrop of commonly represented event components, and (ii) describing vs. not describing the events did not augment cross-linguistic differences, but instead had similar effects across languages. We interpret these findings as suggesting a dynamic role of language in event conceptualization.
  • Montero-Melis, G., Eisenbeiss, S., Narasimhan, B., Ibarretxe-Antuñano, I., Kita, S., Kopecka, A., Lüpke, F., Nikitina, T., Tragel, I., Jaeger, T. F., & Bohnemeyer, J. (2017). Satellite- vs. Verb-Framing Underpredicts Nonverbal Motion Categorization: Insights from a Large Language Sample and Simulations. Cognitive Semantics, 3(1), 36-61. doi:10.1163/23526416-00301002.

    Abstract

    Is motion cognition influenced by the large-scale typological patterns proposed in Talmy’s (2000) two-way distinction between verb-framed (V) and satellite-framed (S) languages? Previous studies investigating this question have been limited to comparing two or three languages at a time and have come to conflicting results. We present the largest cross-linguistic study on this question to date, drawing on data from nineteen genealogically diverse languages, all investigated in the same behavioral paradigm and using the same stimuli. After controlling for the different dependencies in the data by means of multilevel regression models, we find no evidence that S- vs. V-framing affects nonverbal categorization of motion events. At the same time, statistical simulations suggest that our study and previous work within the same behavioral paradigm suffer from insufficient statistical power. We discuss these findings in the light of the great variability between participants, which suggests flexibility in motion representation. Furthermore, we discuss the importance of accounting for language variability, something which can only be achieved with large cross-linguistic samples.
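    The power argument in this abstract can be illustrated with a generic Monte Carlo sketch (this is not the authors' simulation code, which models the full multilevel design; the effect size, group sizes, and normal approximation to the t-test are all illustrative assumptions):

    ```python
    import random
    import statistics

    def estimated_power(effect, n_per_group, n_sims=2000, seed=1):
        """Monte Carlo power estimate for a two-group comparison:
        simulate many experiments with a true standardized effect and
        count how often it is detected at two-sided alpha = .05
        (using a normal approximation to the test statistic)."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_sims):
            a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
            b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
            se = (statistics.variance(a) / n_per_group
                  + statistics.variance(b) / n_per_group) ** 0.5
            z = abs(statistics.mean(b) - statistics.mean(a)) / se
            hits += z > 1.96  # normal critical value for alpha = .05
        return hits / n_sims

    # A small true effect with modest samples yields low power: most
    # simulated experiments fail to detect it, which is the sense in
    # which a design can be underpowered.
    low = estimated_power(0.2, 20)
    high = estimated_power(0.8, 30)
    ```

    Simulations of this kind, scaled up to the actual design, are how one checks whether a null result is informative or merely underpowered.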
  • Murakami, S., Verdonschot, R. G., Kreiborg, S., Kakimoto, N., & Kawaguchi, A. (2017). Stereoscopy in dental education: An investigation. Journal of Dental Education, 81(4), 450-457. doi:10.21815/JDE.016.002.

    Abstract

    The aim of this study was to investigate whether stereoscopy can play a meaningful role in dental education. The study used an anaglyph technique in which two images were presented separately to the left and right eyes (using red/cyan filters), which, combined in the brain, give enhanced depth perception. A positional judgment task was performed to assess whether the use of stereoscopy would enhance depth perception among dental students at Osaka University in Japan. Subsequently, the optimum angle was evaluated to obtain maximum ability to discriminate among complex anatomical structures. Finally, students completed a questionnaire on a range of matters concerning their experience with stereoscopic images including their views on using stereoscopy in their future careers. The results showed that the students who used stereoscopy were better able than students who did not to appreciate spatial relationships between structures when judging relative positions. The maximum ability to discriminate among complex anatomical structures was between 2 and 6 degrees. The students' overall experience with the technique was positive, and although most did not have a clear vision for stereoscopy in their own practice, they did recognize its merits for education. These results suggest that using stereoscopic images in dental education can be quite valuable as stereoscopy greatly helped these students' understanding of the spatial relationships in complex anatomical structures.
  • Narasimhan, B. (2003). Motion events and the lexicon: The case of Hindi. Lingua, 113(2), 123-160. doi:10.1016/S0024-3841(02)00068-2.

    Abstract

    English, and a variety of Germanic languages, allow constructions such as the bottle floated into the cave, whereas languages such as Spanish, French, and Hindi are highly restricted in allowing manner of motion verbs to occur with path phrases. This typological observation has been accounted for in terms of the conflation of complex meaning in basic or derived verbs [Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Levin, B., Rappaport-Hovav, M., 1995. Unaccusativity: At the Syntax–Lexical Semantics Interface. MIT Press, Cambridge, MA], or the presence of path “satellites” with special grammatical properties in the lexicon of languages such as English, which allow such phrasal combinations [cf. Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Talmy, L., 1991. Path to realisation: via aspect and result. In: Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 480–520]. I use data from Hindi to show that there is little empirical support for the claim that the constraint on the phrasal combination is correlated with differences in verb meaning or the presence of satellites in the lexicon of a language. However, proposals which eschew lexicalization accounts for more general aspectual constraints on the manner verb + path phrase combination in Spanish-type languages (Aske, J., 1989. Path Predicates in English and Spanish: A Closer look. In: Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 1–14) cannot account for the full range of data in Hindi either. On the basis of these facts, I argue that an empirically adequate account can be formulated in terms of a general mapping constraint, formulated in terms of whether the lexical requirements of the verb strictly or weakly constrain its syntactic privileges of occurrence. In Hindi, path phrases can combine with manner of motion verbs only to the degree that they are compatible with the semantic profile of the verb. Path phrases in English, on the other hand, can extend the verb's “semantic profile” subject to certain constraints. I suggest that path phrases are licensed in English by the semantic requirements of the “construction” in which they appear rather than by the selectional requirements of the verb (Fillmore, C., Kay, P., O'Connor, M.C., 1988, Regularity and idiomaticity in grammatical constructions. Language 64, 501–538; Jackendoff, 1990, Semantic Structures. MIT Press, Cambridge, MA; Goldberg, 1995, Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London).
  • Nederstigt, U. (2003). Auch and noch in child and adult German. Berlin: Mouton de Gruyter.
  • Negwer, M., & Schubert, D. (2017). Talking convergence: Growing evidence links FOXP2 and retinoic acid in shaping speech-related motor circuitry. Frontiers in Neuroscience, 11: 19. doi:10.3389/fnins.2017.00019.

    Abstract

    A commentary on
    FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways

    by Devanna, P., Middelbeek, J., and Vernes, S. C. (2014). Front. Cell. Neurosci. 8:305. doi: 10.3389/fncel.2014.00305
  • Niccolai, V., Klepp, A., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Semantic discrimination impacts tDCS modulation of verb processing. Scientific Reports, 7: 17162. doi:10.1038/s41598-017-17326-w.

    Abstract

    Motor cortex activation observed during body-related verb processing hints at simulation accompanying linguistic understanding. By exploiting the up- and down-regulation that anodal and cathodal transcranial direct current stimulation (tDCS) exert on motor cortical excitability, we aimed at further characterizing the functional contribution of the motor system to linguistic processing. In a double-blind sham-controlled within-subjects design, online stimulation was applied to the left hemispheric hand-related motor cortex of 20 healthy subjects. A dual, double-dissociation task required participants to semantically discriminate concrete (hand/foot) from abstract verb primes as well as to respond with the hand or with the foot to verb-unrelated geometric targets. Analyses were conducted with linear mixed models. Semantic priming was confirmed by faster and more accurate reactions when the response effector was congruent with the verb’s body part. Cathodal stimulation induced faster responses for hand verb primes thus indicating a somatotopical distribution of cortical activation as induced by body-related verbs. Importantly, this effect depended on performance in semantic discrimination. The current results point to verb processing being selectively modifiable by neuromodulation and at the same time to a dependence of tDCS effects on enhanced simulation. We discuss putative mechanisms operating in this reciprocal dependence of neuromodulation and motor resonance.

    Additional information

    41598_2017_17326_MOESM1_ESM.pdf
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor-onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Nivard, M. G., Gage, S. H., Hottenga, J. J., van Beijsterveldt, C. E. M., Abdellaoui, A., Bartels, M., Baselmans, B. M. L., Ligthart, L., St Pourcain, B., Boomsma, D. I., Munafò, M. R., & Middeldorp, C. M. (2017). Genetic overlap between schizophrenia and developmental psychopathology: Longitudinal and multivariate polygenic risk prediction of common psychiatric traits during development. Schizophrenia Bulletin, 43(6), 1197-1207. doi:10.1093/schbul/sbx031.

    Abstract

    Background: Several nonpsychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the etiology of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by correlated genetic risk factors. Methods: Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for 2588 children from the Netherlands Twin Register (NTR) and 6127 from the Avon Longitudinal Study of Parents and Children (ALSPAC). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at age 7, 10, 12/13, and 15 years in the 2 cohorts. Results were then meta-analyzed, and a meta-regression analysis was performed to test differences in effect sizes over age and disorders. Results: Schizophrenia PRS were associated with childhood and adolescent psychopathology. Meta-regression analysis showed differences in the associations over disorders, with the strongest association with childhood and adolescent depression and a weaker association for ODD/CD at age 7. The associations increased with age and this increase was steepest for ADHD and ODD/CD. Genetic correlations varied between 0.10 and 0.25. Conclusion: By optimally using longitudinal data across diagnoses in a multivariate meta-analysis this study sheds light on the development of childhood disorders into severe adult psychiatric disorders. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent onset psychopathology.
  • Nivard, M. G., Lubke, G. H., Dolan, C. V., Evans, D. M., St Pourcain, B., Munafo, M. R., & Middeldorp, C. M. (2017). Joint developmental trajectories of internalizing and externalizing disorders between childhood and adolescence. Development and Psychopathology, 29(3), 919-928. doi:10.1017/S0954579416000572.

    Abstract

    This study sought to identify trajectories of DSM-IV based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into the comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start at adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
  • Nix, A. J., Mehta, G., Dye, J., & Cutler, A. (1993). Phoneme detection as a tool for comparing perception of natural and synthetic speech. Computer Speech and Language, 7, 211-228. doi:10.1006/csla.1993.1011.

    Abstract

    On simple intelligibility measures, high-quality synthesiser output now scores almost as well as natural speech. Nevertheless, it is widely agreed that perception of synthetic speech is a harder task for listeners than perception of natural speech; in particular, it has been hypothesized that listeners have difficulty identifying phonemes in synthetic speech. If so, a simple measure of the speed with which a phoneme can be identified should prove a useful tool for comparing perception of synthetic and natural speech. The phoneme detection task was here used in three experiments comparing perception of natural and synthetic speech. In the first, response times to synthetic and natural targets were not significantly different, but in the second and third experiments response times to synthetic targets were significantly slower than to natural targets. A speed-accuracy tradeoff in the third experiment suggests that an important factor in this task is the response criterion adopted by subjects. It is concluded that the phoneme detection task is a useful tool for investigating phonetic processing of synthetic speech input, but subjects must be encouraged to adopt a response criterion which emphasizes rapid responding. When this is the case, significantly longer response times for synthetic targets can indicate a processing disadvantage for synthetic speech at an early level of phonetic analysis.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [wItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipitoparietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Ocklenburg, S., Schmitz, J., Moinfar, Z., Moser, D., Klose, R., Lor, S., Kunz, G., Tegenthoff, M., Faustmann, P., Francks, C., Epplen, J. T., Kumsta, R., & Güntürkün, O. (2017). Epigenetic regulation of lateralized fetal spinal gene expression underlies hemispheric asymmetries. eLife, 6: e22784. doi:10.7554/eLife.22784.001.

    Abstract

    Lateralization is a fundamental principle of nervous system organization but its molecular determinants are mostly unknown. In humans, asymmetric gene expression in the fetal cortex has been suggested as the molecular basis of handedness. However, human fetuses already show considerable asymmetries in arm movements before the motor cortex is functionally linked to the spinal cord, making it more likely that spinal gene expression asymmetries form the molecular basis of handedness. We analyzed genome-wide mRNA expression and DNA methylation in cervical and anterior thoracal spinal cord segments of five human fetuses and show development-dependent gene expression asymmetries. These gene expression asymmetries were epigenetically regulated by miRNA expression asymmetries in the TGF-β signaling pathway and lateralized methylation of CpG islands. Our findings suggest that molecular mechanisms for epigenetic regulation within the spinal cord constitute the starting point for handedness, implying a fundamental shift in our understanding of the ontogenesis of hemispheric asymmetries in humans.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • Ortega, G. (2017). Iconicity and sign lexical acquisition: A review. Frontiers in Psychology, 8: 1280. doi:10.3389/fpsyg.2017.01280.

    Abstract

    The study of iconicity, defined as the direct relationship between a linguistic form and its referent, has gained momentum in recent years across a wide range of disciplines. In the spoken modality, there is abundant evidence showing that iconicity is a key factor that facilitates language acquisition. However, when we look at sign languages, which excel in the prevalence of iconic structures, there is a more mixed picture, with some studies showing a positive effect and others showing a null or negative effect. In an attempt to reconcile the existing evidence, the present review presents a critical overview of the literature on the acquisition of a sign language as first (L1) and second (L2) language and points at some factors that may be the source of disagreement. Regarding sign L1 acquisition, the contradicting findings may relate to iconicity being defined in a very broad sense when a more fine-grained operationalisation might reveal an effect in sign learning. Regarding sign L2 acquisition, evidence shows that there is a clear dissociation in the effect of iconicity in that it facilitates conceptual-semantic aspects of sign learning but hinders the acquisition of the exact phonological form of signs. It will be argued that when we consider the gradient nature of iconicity and that signs consist of a phonological form attached to a meaning we can discern how iconicity impacts sign learning in positive and negative ways.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2017). Type of iconicity matters in the vocabulary development of signing children. Developmental Psychology, 53(1), 89-99. doi:10.1037/dev0000161.

    Abstract

    Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children’s preferences for certain types of sign-referent links during vocabulary development in sign language. Results from a picture description task indicate that lexical signs with 2 possible variants are used in different proportions by deaf signers from different age groups. While preschool and school-age children favored variants representing actions associated with their referent (e.g., a writing hand for the sign PEN), adults preferred variants representing the perceptual features of those objects (e.g., upward index finger representing a thin, elongated object for the sign PEN). Deaf parents interacting with their children, however, used action- and perceptual-based variants in equal proportion and favored action variants more than adults signing to other adults. We propose that when children are confronted with 2 variants for the same concept, they initially prefer action-based variants because they give them the opportunity to link a linguistic label to familiar schemas linked to their action/motor experiences. Our results echo findings showing a bias for action-based depictions in the development of iconic co-speech gestures suggesting a modality bias for such representations during development.
  • Ostarek, M., & Huettig, F. (2017). Spoken words can make the invisible visible – Testing the involvement of low-level visual representations in spoken word processing. Journal of Experimental Psychology: Human Perception and Performance, 43, 499-508. doi:10.1037/xhp0000313.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" -> picture of a bottle) vs. incongruent ("bottle" -> picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400ms after word onset and decays at 600ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e. what we see. More generally our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
  • Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1215-1224. doi:10.1037/xlm0000375.

    Abstract

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete vs. abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.

    Additional information

    XLM-2016-2822_supp.docx
  • Ostarek, M., & Vigliocco, G. (2017). Reading sky and seeing a cloud: On the relevance of events for perceptual simulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 579-590. doi:10.1037/xlm0000318.

    Abstract

    Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent location (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. We propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word in order to provide a general account of the different findings. In three experiments, participants had to discriminate between two target pictures appearing at the top or the bottom of the screen by pressing the left vs. right button. Immediately before the targets appeared, they saw an up/down word belonging to the target's event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target's event facilitated identification of targets at 250ms SOA (experiment 1), but only when presented in the vertical location where they are typically seen, indicating that targets were integrated in the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100ms (experiment 2) or 800ms (experiment 3), only a location non-specific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed.
  • Otake, T., Hatano, G., Cutler, A., & Mehler, J. (1993). Mora or syllable? Speech segmentation in Japanese. Journal of Memory and Language, 32, 258-278. doi:10.1006/jmla.1993.1014.

    Abstract

    Four experiments examined segmentation of spoken Japanese words by native and non-native listeners. Previous studies suggested that language rhythm determines the segmentation unit most natural to native listeners: French has syllabic rhythm, and French listeners use the syllable in segmentation, while English has stress rhythm, and segmentation by English listeners is based on stress. The rhythm of Japanese is based on a subsyllabic unit, the mora. In the present experiments Japanese listeners' response patterns were consistent with moraic segmentation; acoustic artifacts could not have determined the results since nonnative (English and French) listeners showed different response patterns with the same materials. Predictions of a syllabic hypothesis were disconfirmed in the Japanese listeners' results; in contrast, French listeners showed a pattern of responses consistent with the syllabic hypothesis. The results provide further evidence that listeners' segmentation of spoken words relies on procedures determined by the characteristic phonology of their native language.
  • Ozker, M., Schepers, I., Magnotti, J., Yoshor, D., & Beauchamp, M. (2017). A double dissociation between anterior and posterior superior temporal gyrus for processing audiovisual speech demonstrated by electrocorticography. Journal of Cognitive Neuroscience, 29(6), 1044-1060. doi:10.1162/jocn_a_01110.

    Abstract

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Pederson, E., & Roelofs, A. (1994). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.15 1994. Nijmegen: MPI for Psycholinguistics.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Perlman, M. (2017). Debunking two myths against vocal origins of language: Language is iconic and multimodal to the core. Interaction studies, 18(3), 376-401. doi:10.1075/is.18.3.05per.

    Abstract

    Gesture-first theories of language origins often raise two unsubstantiated arguments against vocal origins. First, they argue that great ape vocal behavior is highly constrained, limited to a fixed, species-typical repertoire of reflexive calls. Second, they argue that vocalizations lack any significant potential to ground meaning through iconicity, or resemblance between form and meaning. This paper reviews the considerable evidence that debunks these two “myths”. Accumulating evidence shows that the great apes exercise voluntary control over their vocal behavior, including their breathing, larynx, and supralaryngeal articulators. They are also able to learn new vocal behaviors, and even show some rudimentary ability for vocal imitation. In addition, an abundance of research demonstrates that the vocal modality affords rich potential for iconicity. People can understand iconicity in sound symbolism, and they can produce iconic vocalizations to communicate a diverse range of meanings. Thus, two of the primary arguments against vocal origins theories are not tenable. As an alternative, the paper concludes that the origins of language – going as far back as our last common ancestor with great apes – are rooted in iconicity in both gesture and vocalization.
  • Perlman, M., & Salmi, R. (2017). Gorillas may use their laryngeal air sacs for whinny-type vocalizations and male display. Journal of Language Evolution, 2(2), 126-140. doi:10.1093/jole/lzx012.

    Abstract

    Great apes and siamangs—but not humans—possess laryngeal air sacs, suggesting that they were lost over hominin evolution. The absence of air sacs in humans may hold clues to speech evolution, but little is known about their functions in extant apes. We investigated whether gorillas use their air sacs to produce the staccato ‘growling’ of the silverback chest beat display. This hypothesis was formulated after viewing a nature documentary showing a display by a silverback western gorilla (Kingo). As Kingo growls, the video shows distinctive vibrations in his chest and throat under which the air sacs extend. We also investigated whether other similarly staccato vocalizations—the whinny, sex whinny, and copulation grunt—might also involve the air sacs. To examine these hypotheses, we collected an opportunistic sample of video and audio evidence from research records and another documentary of Kingo’s group, and from videos of other gorillas found on YouTube. Analysis shows that the four vocalizations are each emitted in rapid pulses of a similar frequency (8–16 pulses per second), and limited visual evidence indicates that they may all occur with upper torso vibrations. Future research should determine how consistently the vibrations co-occur with the vocalizations, whether they are synchronized, and their precise location and timing. Our findings fit with the hypothesis that apes—especially, but not exclusively males—use their air sacs for vocalizations and displays related to size exaggeration for sex and territory. Thus changes in social structure, mating, and sexual dimorphism might have led to the obsolescence of the air sacs and their loss in hominin evolution.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left lateralized frontoparietal and anterior temporal activations while surface-based perceptually oriented processing (shallow instruction) yielded right lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.