Publications

  • Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.

    Abstract

    How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thus accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
  • Levelt, W. J. M. (2004). Speech, gesture and the origins of language. European Review, 12(4), 543-549. doi:10.1017/S1062798704000468.

    Abstract

    During the second half of the 19th century, the psychology of language was invented as a discipline for the sole purpose of explaining the evolution of spoken language. These efforts culminated in Wilhelm Wundt’s monumental Die Sprache of 1900, which outlined the psychological mechanisms involved in producing utterances and considered how these mechanisms could have evolved. Wundt assumed that articulatory movements were originally rather arbitrary concomitants of larger, meaningful expressive bodily gestures. The sounds such articulations happened to produce slowly acquired the meaning of the gesture as a whole, ultimately making the gesture superfluous. Over a century later, gestural theories of language origins still abound. I argue that such theories are unlikely and wasteful, given the biological, neurological and genetic evidence.
  • Levelt, W. J. M. (2004). Een huis voor kunst en wetenschap. Boekman: Tijdschrift voor Kunst, Cultuur en Beleid, 16(58/59), 212-215.
  • Levelt, W. J. M. (1965). Binocular brightness averaging and contour information. British Journal of Psychology, 56, 1-13.
  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) adduced any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levinson, S. C. (2006). Parts of the body in Yélî Dnye, the Papuan language of Rossel Island. Language Sciences, 28(2-3), 221-240. doi:10.1016/j.langsci.2005.11.007.

    Abstract

    This paper describes the terminology used to describe parts of the body in Yélî Dnye, the Papuan language of Rossel Island (Papua New Guinea). The terms are nouns, which display complex patterns of suppletion in possessive and locative uses. Many of the terms are compounds, many unanalysable. Semantically, visible body parts divide into four main types: (i) a partonomic subsystem dividing the body into nine major parts: head, neck, two upper limbs, trunk, two upper legs, two lower legs, (ii) designated surfaces (e.g. ‘lower belly’), (iii) collections of surface features (‘face’), (iv) taxonomic subsystems (e.g. ‘big toe’ being a kind of ‘toe’). With regard to (i), the lack of any designation for ‘foot’ or ‘hand’ is notable, as is the absence of a term for ‘leg’ as a whole (although this is a lexical not a conceptual gap, as shown by the alternate taboo vocabulary). Yélî Dnye body part terms do not have major extensions to other domains (e.g. spatial relators). Indeed, a number of the terms are clearly borrowed from outside human biology (e.g. ‘wing butt’ for shoulder).
  • Levinson, S. C. (2006). Cognition at the heart of human interaction. Discourse Studies, 8(1), 85-93. doi:10.1177/1461445606059557.

    Abstract

    Sometimes it is thought that there are serious differences between theories of discourse that turn on the role of cognition in the theory. This is largely a misconception: for example, with its emphasis on participants’ own understandings, its principles of recipient design and projection, Conversation Analysis is hardly anti-cognitive. If there are genuine disagreements they rather concern a preference for ‘lean’ versus ‘rich’ metalanguages and different methodologies. The possession of a multi-levelled model, separating out what the individual brings to interaction from the emergent properties of interaction, would make it easier to resolve some of these issues. Meanwhile, these squabbles on the margins distract us from a much more central and more interesting issue: is there a very special cognition-for-interaction, which underlies and underpins all language and discourse? Prima facie evidence suggests that there is, and different approaches can contribute to our understanding of it.
  • Levinson, S. C. (2006). Matrilineal clans and kin terms on Rossel Island. Anthropological Linguistics, 48, 1-43.

    Abstract

    Yélî Dnye, the language of Rossel Island, Louisiade archipelago, Papua New Guinea, is a non-Austronesian isolate of considerable interest for the prehistory of the area. The kin term, clan, and kinship systems have some superficial similarities with surrounding Austronesian ones, but many underlying differences. The terminology, here properly described for the first time, is highly complex, and seems adapted to a dual descent system, with Crow-type skewing reflecting matrilineal descent, but a system of reciprocals also reflecting the "unity of the patriline." It may be analyzed in three mutually consistent ways: as a system of classificatory reciprocals, as a clan-based sociocentric system, and as collapses and skewings across a genealogical net. It makes an interesting contrast to the Trobriand system, and suggests that the alternative types of account offered by Edmund Leach and Floyd Lounsbury for the Trobriand system both have application to the Rossel system. The Rossel system has features (e.g., patrilineal biases, dual descent, collective [dyadic] kin terms, terms for alternating generations) that may be indicative of pre-Austronesian social systems of the area.
  • Levinson, S. C. (2006). Language in the 21st century. Language, 82, 1-2.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
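
    A note on the beta-power measure mentioned in this abstract: band-limited power in a 13–21 Hz window can be approximated from preprocessed EEG epochs with a standard spectral estimate. The sketch below is not the authors' analysis pipeline; the sampling rate, channel count, and condition arrays are illustrative assumptions.

      # Minimal sketch (not the authors' pipeline): mean beta-band power per condition.
      # Assumes preprocessed epochs as arrays of shape (n_trials, n_channels, n_samples).
      import numpy as np
      from scipy.signal import welch

      FS = 500               # sampling rate in Hz (assumed)
      BAND = (13.0, 21.0)    # beta range reported in the abstract

      def band_power(epochs, fs=FS, band=BAND):
          """Mean power in `band`, averaged over trials, channels, and frequency bins."""
          freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)  # psd: (trials, channels, freqs)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return psd[..., mask].mean()

      # Hypothetical data for the coherent vs. incoherent discourse conditions.
      coherent = np.random.randn(40, 32, 1000)
      incoherent = np.random.randn(40, 32, 1000)
      print("beta power, coherent:", band_power(coherent))
      print("beta power, incoherent:", band_power(incoherent))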
  • Lind, J., Persson, J., Ingvar, M., Larsson, A., Cruts, M., Van Broeckhoven, C., Adolfsson, R., Bäckman, L., Nilsson, L.-G., Petersson, K. M., & Nyberg, L. (2006). Reduced functional brain activity response in cognitively intact apolipoprotein E ε4 carriers. Brain, 129(5), 1240-1248. doi:10.1093/brain/awl054.

    Abstract

    The apolipoprotein E ε4 (APOE ε4) is the main known genetic risk factor for Alzheimer's disease. Genetic assessments in combination with other diagnostic tools, such as neuroimaging, have the potential to facilitate early diagnosis. In this large-scale functional MRI (fMRI) study, we have contrasted 30 APOE ε4 carriers (age range: 49–74 years; 19 females), of which 10 were homozygous for the ε4 allele, and 30 non-carriers with regard to brain activity during a semantic categorization task. Test groups were closely matched for sex, age and education. Critically, both groups were cognitively intact and thus symptom-free of Alzheimer's disease. APOE ε4 carriers showed reduced task-related responses in the left inferior parietal cortex, and bilaterally in the anterior cingulate region. A dose-related response was observed in the parietal area such that diminution was most pronounced in homozygous compared with heterozygous carriers. In addition, contrasts of processing novel versus familiar items revealed an abnormal response in the right hippocampus in the APOE ε4 group, mainly expressed as diminished sensitivity to the relative novelty of stimuli. Collectively, these findings indicate that genetic risk translates into reduced functional brain activity, in regions pertinent to Alzheimer's disease, well before alterations can be detected at the behavioural level.
  • Liszkowski, U., Carpenter, M., Striano, T., & Tomasello, M. (2006). Twelve- and 18-month-olds point to provide information for others. Journal of Cognition and Development, 7, 173-187. doi:10.1207/s15327647jcd0702_2.

    Abstract

    Classically, infants are thought to point for 2 main reasons: (a) They point imperatively when they want an adult to do something for them (e.g., give them something; “Juice!”), and (b) they point declaratively when they want an adult to share attention with them to some interesting event or object (“Look!”). Here we demonstrate the existence of another motive for infants' early pointing gestures: to inform another person of the location of an object that person is searching for. This informative motive for pointing suggests that from very early in ontogeny humans conceive of others as intentional agents with informational states and they have the motivation to provide such information communicatively.
  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Little, H., Eryilmaz, K., & de Boer, B. (2017). Conventionalisation and Discrimination as Competing Pressures on Continuous Speech-like Signals. Interaction Studies, 18(3), 355-378. doi:10.1075/is.18.3.04lit.

    Abstract

    Arbitrary communication systems can emerge from iconic beginnings through processes of conventionalisation via interaction. Here, we explore whether this process of conventionalisation occurs with continuous, auditory signals. We conducted an artificial signalling experiment. Participants either created signals for themselves, or for a partner in a communication game. We found no evidence that the speech-like signals in our experiment became less iconic or simpler through interaction. We hypothesise that the reason for our results is that when it is difficult to be iconic initially because of the constraints of the modality, then iconicity needs to emerge to enable grounding before conventionalisation can occur. Further, pressures for discrimination, caused by the expanding meaning space in our study, may cause more complexity to emerge, again as a result of the restrictive signalling modality. Our findings have possible implications for the processes of conventionalisation possible in signed and spoken languages, as the spoken modality is more restrictive than the manual modality.
  • Little, H., Rasilo, H., van der Ham, S., & Eryılmaz, K. (2017). Empirical approaches for investigating the origins of structure in speech. Interaction Studies, 18(3), 332-354. doi:10.1075/is.18.3.03lit.

    Abstract

    In language evolution research, the use of computational and experimental methods to investigate the emergence of structure in language is exploding. In this review, we look exclusively at work exploring the emergence of structure in speech, on both a categorical level (what drives the emergence of an inventory of individual speech sounds), and a combinatorial level (how these individual speech sounds emerge and are reused as part of larger structures). We show that computational and experimental methods for investigating population-level processes can be effectively used to explore and measure the effects of learning, communication and transmission on the emergence of structure in speech. We also look at work on child language acquisition as a tool for generating and validating hypotheses for the emergence of speech categories. Further, we review the effects of noise, iconicity and production effects.
  • Little, H. (2017). Introduction to the Special Issue on the Emergence of Sound Systems. Journal of Language Evolution, 2(1), 1-3. doi:10.1093/jole/lzx014.

    Abstract

    How did human sound systems get to be the way they are? Collecting contributions implementing a wealth of methods to address this question, this special issue treats language and speech as being the result of a complex adaptive system. The work throughout provides evidence and theory at the levels of phylogeny, glossogeny and ontogeny. In taking a multi-disciplinary approach that considers interactions within and between these levels of selection, the papers collectively provide a valuable, integrated contribution to existing work on the evolution of speech and sound systems.
  • Little, H., Eryılmaz, K., & de Boer, B. (2017). Signal dimensionality and the emergence of combinatorial structure. Cognition, 168, 1-15. doi:10.1016/j.cognition.2017.06.011.

    Abstract

    In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space having similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants' hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we use trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced and found that when topologies do not match, then there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign).

    Additional information

    mmc1.zip
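
    As a rough illustration of the Hidden Markov Model analysis mentioned in the entry above (this is not the authors' implementation; the sensor format, feature dimensionality, and state counts are assumptions), one can fit Gaussian HMMs with different numbers of hidden states to the recorded movement trajectories and compare how well each accounts for the signals:

      # Illustrative sketch only: scoring hand-movement trajectories with Gaussian HMMs.
      # Assumes each signal is an (n_timepoints, n_features) array of sensor coordinates.
      import numpy as np
      from hmmlearn import hmm

      def hmm_log_likelihood(trajectories, n_states):
          """Fit a Gaussian HMM with n_states to all trajectories; return its log-likelihood."""
          X = np.vstack(trajectories)                  # concatenate trajectories
          lengths = [len(t) for t in trajectories]     # per-trajectory lengths
          model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                  n_iter=100, random_state=0)
          model.fit(X, lengths)
          return model.score(X, lengths)

      # Hypothetical data: 20 trajectories of 2-D positions from an infrared sensor.
      rng = np.random.default_rng(0)
      signals = [rng.normal(size=(150, 2)) for _ in range(20)]
      for k in (2, 4, 8):
          print(k, "states:", hmm_log_likelihood(signals, k))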
  • Little, H. (Ed.). (2017). Special Issue on the Emergence of Sound Systems [Special Issue]. The Journal of Language Evolution, 2(1).
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample: one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
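
    For readers unfamiliar with the linkage statistic cited above, a lod score compares the likelihood of the marker data under a hypothesized linkage model with the likelihood under no linkage, and the MLS is that score maximized over the model's free parameters. This is the standard textbook definition, not a formula taken from the paper:

      \[
      \mathrm{LOD}(\theta) \;=\; \log_{10}\frac{L(\text{data}\mid\theta)}{L(\text{data}\mid\theta=\tfrac{1}{2})},
      \qquad
      \mathrm{MLS} \;=\; \max_{\theta}\,\mathrm{LOD}(\theta)
      \]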
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
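
    The surprisal and perplexity measures named in this abstract have standard definitions in terms of the conditional probabilities a language model assigns to each symbol. The sketch below uses a toy add-one-smoothed bigram model, not the authors' stochastic language models, and the corpus and test sentence are made up for illustration:

      # Toy sketch: word-level surprisal and sequence perplexity from a bigram model.
      import math
      from collections import Counter

      def train_bigram(corpus):
          """Return an add-one-smoothed estimate of P(w_t | w_{t-1})."""
          unigrams, bigrams, vocab = Counter(), Counter(), set()
          for sent in corpus:
              tokens = ["<s>"] + sent.split()
              vocab.update(tokens)
              unigrams.update(tokens[:-1])
              bigrams.update(zip(tokens[:-1], tokens[1:]))
          V = len(vocab)
          return lambda prev, w: (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

      def surprisal_and_perplexity(prob, sentence):
          tokens = ["<s>"] + sentence.split()
          surprisals = [-math.log2(prob(p, w)) for p, w in zip(tokens[:-1], tokens[1:])]
          perplexity = 2 ** (sum(surprisals) / len(surprisals))
          return surprisals, perplexity

      model = train_bigram(["the dog barks", "the cat sleeps", "a dog sleeps"])
      print(surprisal_and_perplexity(model, "the dog sleeps"))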
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Book review]. Magyar Pszichológiai Szemle, 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Nyelv és evolúció] (Magyari Lilla); A historical reader on the brain [Charles G. Gross: Agy, látás, emlékezet. Mesék az idegtudomány történetéből] (Garab Edit Anna); Art or science [Margitay Tihamér: Az érvelés mestersége. Érvelések elemzése, értékelése és kritikája] (Zemplén Gábor); Are we really rational? [Herbert Simon: Az ésszerűség szerepe az emberi életben] (Kardos Péter); Sex differences in cognition [Doreen Kimura: Női agy, férfi agy] (Hahn Noémi).
  • Magyari, L., De Ruiter, J. P., & Levinson, S. C. (2017). Temporal preparation for speaking in question-answer sequences. Frontiers in Psychology, 8: 211. doi:10.3389/fpsyg.2017.00211.

    Abstract

    In everyday conversations, the gap between turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In the experiment, it was possible to guess the answer at the beginning of the questions in half of the experimental trials. We also manipulated whether it was possible to predict the length of the last word of the questions. The results suggest that when listeners know the answer early, they start speech production already during the questions. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

    Additional information

    presentation 1.pdf
  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2017). Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Frontiers in Psychology, 8: 1164. doi:10.3389/fpsyg.2017.01164.

    Abstract

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

    Additional information

    Supplementary Material Appendices.pdf
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A., Enfield, N. J., & Van Staden, M. (Eds.). (2006). Parts of the body: Cross-linguistic categorisation [Special Issue]. Language Sciences, 28(2-3).
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2006). Covariation and quantifier polarity: What determines causal attribution in vignettes? Cognition, 99(1), 35-51. doi:10.1016/j.cognition.2004.12.004.

    Abstract

    Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information about discourse focus as well as covariation information. In the attribution literature, quantifiers are used to depict covariation, but they confound quantity and focus. In four experiments, we show that focus explains all (Experiment 1) or some (Experiments 2, 3 and 4) of the impact of covariation information on the attributions made, confirming the importance of the confound. Attribution experiments using vignettes that present covariation information with natural language quantifiers may overestimate the impact of set size information, and ignore the impact of quantifier-induced focus.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A. (2006). Body part categorisation in Punjabi. Language Sciences, 28(2-3), 241-261. doi:10.1016/j.langsci.2005.11.012.

    Abstract

    A key question in categorisation is to what extent people categorise in the same way, or differently. This paper examines categorisation of the body in Punjabi, an Indo-European language spoken in Pakistan and India. First, an inventory of body part terms is presented, illustrating how Punjabi speakers segment and categorise the body. There are some noteworthy terms in the inventory, which illustrate categories in Punjabi that are unusual when compared to other languages presented in this volume. Second, Punjabi speakers’ conceptualisation of the relationship between body parts is explored. While some body part terms are viewed as being partonomically related, others are viewed as being in a locative relationship. It is suggested that there may be key ways in which languages differ in both the categorisation of the body into parts, and in how these parts are related to one another.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies cross-culturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language and Society, 33(3), 429-433.
  • Majid, A., Speed, L., Croijmans, I., & Arshamian, A. (2017). What makes a better smeller? Perception, 46, 406-430. doi:10.1177/0301006616688224.

    Abstract

    Olfaction is often viewed as difficult, yet the empirical evidence suggests a different picture. A closer look shows people around the world differ in their ability to detect, discriminate, and name odors. This gives rise to the question of what influences our ability to smell. Instead of focusing on olfactory deficiencies, this review presents a positive perspective by focusing on factors that make someone a better smeller. We consider three driving forces in improving olfactory ability: one’s biological makeup, one’s experience, and the environment. For each factor, we consider aspects proposed to improve odor perception and critically examine the evidence; as well as introducing lesser discussed areas. In terms of biology, there are cases of neurodiversity, such as olfactory synesthesia, that serve to enhance olfactory ability. Our lifetime experience, be it typical development or unique training experience, can also modify the trajectory of olfaction. Finally, our odor environment, in terms of ambient odor or culinary traditions, can influence odor perception too. Rather than highlighting the weaknesses of olfaction, we emphasize routes to harnessing our olfactory potential.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2006). Animacy in processing relative clauses: The hikers that rocks crush. Journal of Memory and Language, 54(4), 466-490. doi:10.1016/j.jml.2006.01.001.

    Abstract

    For several languages, a preference for subject relative clauses over object relative clauses has been reported. However, Mak, Vonk, and Schriefers (2002) showed that there is no such preference for relative clauses with an animate subject and an inanimate object. A Dutch object relative clause as …de rots, die de wandelaars beklommen hebben… (‘the rock, that the hikers climbed’) did not show longer reading times than its subject relative clause counterpart …de wandelaars, die de rots beklommen hebben… (‘the hikers, who climbed the rock’). In the present paper, we explore the factors that might contribute to this modulation of the usual preference for subject relative clauses. Experiment 1 shows that the animacy of the antecedent per se is not the decisive factor. On the contrary, in relative clauses with an inanimate antecedent and an inanimate relative-clause-internal noun phrase, the usual preference for subject relative clauses is found. In Experiments 2 and 3, subject and object relative clauses were contrasted in which either the subject or the object was inanimate. The results are interpreted in a framework in which the choice for an analysis of the relative clause is based on the interplay of animacy with topichood and verb semantics. This framework accounts for the commonly reported preference for subject relative clauses over object relative clauses as well as for the pattern of data found in the present experiments.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L. L., & Heritage, J. (2006). Ruling out the need for antibiotics: Are we sending the right message? Archives of Pediatrics & Adolescent Medicine, 160(9), 945-952.
  • Mansbridge, M. P., Tamaoka, K., Xiong, K., & Verdonschot, R. G. (2017). Ambiguity in the processing of Mandarin Chinese relative clauses: One factor cannot explain it all. PLoS One, 12(6): e0178369. doi:10.1371/journal.pone.0178369.

    Abstract

    This study addresses the question of whether native Mandarin Chinese speakers process and comprehend subject-extracted relative clauses (SRC) more readily than object-extracted relative clauses (ORC) in Mandarin Chinese. Presently, this has been a hotly debated issue, with various studies producing contrasting results. Using two eye-tracking experiments with ambiguous and unambiguous RCs, this study shows that both ORCs and SRCs have different processing requirements depending on the locus and time course during reading. The results reveal that ORC reading was possibly facilitated by linear/temporal integration and canonicity. On the other hand, similarity-based interference made ORCs more difficult, and expectation-based processing was more prominent for unambiguous ORCs. Overall, RC processing in Mandarin should not be broken down to a single ORC (dis)advantage, but understood as multiple interdependent factors influencing whether ORCs are either more difficult or easier to parse depending on the task and context at hand.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. PLoS Biology, 15(3): e2000663. doi:10.1371/journal.pbio.2000663.

    Abstract

    Biological systems often detect species-specific signals in the environment. In humans, speech and language are species-specific signals of fundamental biological importance. To detect the linguistic signal, human brains must form hierarchical representations from a sequence of perceptual inputs distributed in time. What mechanism underlies this ability? One hypothesis is that the brain repurposed an available neurobiological mechanism when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect the linguistic signal and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a computational model of analogy, built for an entirely different purpose—learning relational reasoning—processes sentences, represents their meaning, and, crucially, exhibits oscillatory activation patterns resembling cortical signals elicited by the same stimuli. Such redundancy in the cortical and machine signals is indicative of formal and mechanistic alignment between representational structure building and “cortical” oscillations. By inductive inference, this synergy suggests that the cortical signal reflects structure generation, just as the machine signal does. A single mechanism—using time to encode information across a layered network—generates the kind of (de)compositional representational hierarchy that is crucial for human language and offers a mechanistic linking hypothesis between linguistic representation and cortical computation.
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Martin, A. E., Monahan, P. J., & Samuel, A. G. (2017). Prediction of agreement and phonetic overlap shape sublexical identification. Language and Speech, 60(3), 356-376. doi:10.1177/0023830916650714.

    Abstract

    The mapping between the physical speech signal and our internal representations is rarely straightforward. When faced with uncertainty, higher-order information is used to parse the signal and because of this, the lexicon and some aspects of sentential context have been shown to modulate the identification of ambiguous phonetic segments. Here, using a phoneme identification task (i.e., participants judged whether they heard [o] or [a] at the end of an adjective in a noun–adjective sequence), we asked whether grammatical gender cues influence phonetic identification and if this influence is shaped by the phonetic properties of the agreeing elements. In three experiments, we show that phrase-level gender agreement in Spanish affects the identification of ambiguous adjective-final vowels. Moreover, this effect is strongest when the phonetic characteristics of the element triggering agreement and the phonetic form of the agreeing element are identical. Our data are consistent with models wherein listeners generate specific predictions based on the interplay of underlying morphosyntactic knowledge and surface phonetic cues.
  • Massaro, D. W., & Perlman, M. (2017). Quantifying iconicity’s contribution during language acquisition: Implications for vocabulary learning. Frontiers in Communication, 2: 4. doi:10.3389/fcomm.2017.00004.

    Abstract

    Previous research found that iconicity—the motivated correspondence between word form and meaning—contributes to expressive vocabulary acquisition. We present two new experiments with two different databases and with novel analyses to give a detailed quantification of how iconicity contributes to vocabulary acquisition across development, including both receptive understanding and production. The results demonstrate that iconicity is more prevalent early in acquisition and diminishes with increasing age and with increasing vocabulary. In the first experiment, we found that the influence of iconicity on children’s production vocabulary decreased gradually with increasing age. These effects were independent of the observed influence of concreteness, difficulty of articulation, and parental input frequency. Importantly, we substantiated the independence of iconicity, concreteness, and systematicity—a statistical regularity between sounds and meanings. In the second experiment, we found that the average iconicity of both a child’s receptive vocabulary and expressive vocabulary diminished dramatically with increases in vocabulary size. These results indicate that iconic words tend to be learned early in the acquisition of both receptive vocabulary and expressive vocabulary. We recommend that iconicity be included as one of the many different influences on a child’s early vocabulary acquisition.
  • McLaughlin, R. L., Schijven, D., Van Rheenen, W., Van Eijk, K. R., O’Brien, M., Project MinE GWAS Consortium, Schizophrenia Working Group of the Psychiatric Genomics Consortium, Kahn, R. S., Ophoff, R. A., Goris, A., Bradley, D. G., Al-Chalabi, A., van den Berg, L. H., Luykx, J. J., Hardiman, O., & Veldink, J. H. (2017). Genetic correlation between amyotrophic lateral sclerosis and schizophrenia. Nature Communications, 8: 14774. doi:10.1038/ncomms14774.

    Abstract

    We have previously shown higher-than-expected rates of schizophrenia in relatives of patients with amyotrophic lateral sclerosis (ALS), suggesting an aetiological relationship between the diseases. Here, we investigate the genetic relationship between ALS and schizophrenia using genome-wide association study data from over 100,000 unique individuals. Using linkage disequilibrium score regression, we estimate the genetic correlation between ALS and schizophrenia to be 14.3% (7.05–21.6; P=1 × 10⁻⁴) with schizophrenia polygenic risk scores explaining up to 0.12% of the variance in ALS (P=8.4 × 10⁻⁷). A modest increase in comorbidity of ALS and schizophrenia is expected given these findings (odds ratio 1.08–1.26) but this would require very large studies to observe epidemiologically. We identify five potential novel ALS-associated loci using conditional false discovery rate analysis. It is likely that shared neurobiological mechanisms between these two disorders will engender novel hypotheses in future preclinical and clinical studies.
  • McQueen, J. M., Cutler, A., & Norris, D. (2006). Phonological abstraction in the mental lexicon. Cognitive Science, 30(6), 1113-1126. doi:10.1207/s15516709cog0000_79.

    Abstract

    A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). The dynamic nature of speech perception. Language and Speech, 49(1), 101-112.

    Abstract

    The speech perception system must be flexible in responding to the variability in speech sounds caused by differences among speakers and by language change over the lifespan of the listener. Indeed, listeners use lexical knowledge to retune perception of novel speech (Norris, McQueen, & Cutler, 2003). In that study, Dutch listeners made lexical decisions to spoken stimuli, including words with an ambiguous fricative (between [f] and [s]), in either [f]- or [s]-biased lexical contexts. In a subsequent categorization test, the former group of listeners identified more sounds on an [εf] - [εs] continuum as [f] than the latter group. In the present experiment, listeners received the same exposure and test stimuli, but did not make lexical decisions to the exposure items. Instead, they counted them. Categorization results were indistinguishable from those obtained earlier. These adjustments in fricative perception therefore do not depend on explicit judgments during exposure. This learning effect thus reflects automatic retuning of the interpretation of acoustic-phonetic information.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). Are there really interactive processes in speech perception? Trends in Cognitive Sciences, 10(12), 533-533. doi:10.1016/j.tics.2006.10.004.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say “quarter to three”) requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target by one or two hours and by five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-minute difference between prime and target, but the difference in hours had no effect. Moreover, the distance in minutes had an effect only for half past the hour and the coming hour, not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
  • Menenti, L. (2006). L2-L1 word association in bilinguals: Direct evidence. Nijmegen CNS, 1, 17-24.

    Abstract

    The Revised Hierarchical Model (Kroll and Stewart, 1994) assumes that words in a bilingual’s languages have separate word form representations but shared conceptual representations. Two routes lead from an L2 word form to its conceptual representation: the word association route, where concepts are accessed through the corresponding L1 word form, and the concept mediation route, with direct access from L2 to concepts. To investigate word association, we presented proficient late German-Dutch bilinguals with L2 non-cognate word pairs in which the L1 translation of the first word rhymed with the second word (e.g. GRAP (joke) – Witz – FIETS (bike)). If the first word in a pair activated its L1 equivalent, then a phonological priming effect on the second word was expected. Priming was observed in lexical decision but not in semantic decision (living/non-living) on L2 words. In a control group of Dutch native speakers, no priming effect was found. This suggests that proficient bilinguals still make use of their L1 word form lexicon to process L2 in lexical decision.
  • Menks, W. M., Furger, R., Lenz, C., Fehlbaum, L. V., Stadler, C., & Raschle, N. M. (2017). Microstructural white matter alterations in the corpus callosum of girls with conduct disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 56, 258-265. doi:10.1016/j.jaac.2016.12.006.

    Abstract

    Objective

    Diffusion tensor imaging (DTI) studies in adolescent conduct disorder (CD) have demonstrated white matter alterations of tracts connecting functionally distinct fronto-limbic regions, but only in boys or mixed-gender samples. So far, no study has investigated white matter integrity in girls with CD on a whole-brain level. Therefore, our aim was to investigate white matter alterations in adolescent girls with CD.
    Method

    We collected high-resolution DTI data from 24 girls with CD and 20 typically developing control girls using a 3T magnetic resonance imaging system. Fractional anisotropy (FA) and mean diffusivity (MD) were analyzed for whole-brain as well as a priori-defined regions of interest, while controlling for age and intelligence, using a voxel-based analysis and an age-appropriate customized template.
    Results

    Whole-brain findings revealed white matter alterations (i.e., increased FA) in girls with CD bilaterally within the body of the corpus callosum, expanding toward the right cingulum and left corona radiata. The FA and MD results in a priori-defined regions of interest were more widespread and included changes in the cingulum, corona radiata, fornix, and uncinate fasciculus. These results were not driven by age, intelligence, or attention-deficit/hyperactivity disorder comorbidity.
    Conclusion

    This report provides the first evidence of white matter alterations in female adolescents with CD as indicated through white matter reductions in callosal tracts. This finding enhances current knowledge about the neuropathological basis of female CD. An increased understanding of gender-specific neuronal characteristics in CD may influence diagnosis, early detection, and successful intervention strategies.
  • Meulenbroek, O., Petersson, K. M., Voermans, N., Weber, B., & Fernández, G. (2004). Age differences in neural correlates of route encoding and route recognition. Neuroimage, 22, 1503-1514. doi:10.1016/j.neuroimage.2004.04.007.

    Abstract

    Spatial memory deficits are core features of aging-related changes in cognitive abilities. The neural correlates of these deficits are largely unknown. In the present study, we investigated the neural underpinnings of age-related differences in spatial memory by functional MRI using a navigational memory task with route encoding and route recognition conditions. We investigated 20 healthy young (18–29 years old) and 20 healthy old adults (53–78 years old) in a random effects analysis. Old subjects showed slightly poorer performance than young subjects. Compared to the control condition, route encoding and route recognition showed activation of the dorsal and ventral visual processing streams and the frontal eye fields in both groups of subjects. Compared to old adults, young subjects showed during route encoding stronger activations in the dorsal and the ventral visual processing stream (supramarginal gyrus and posterior fusiform/parahippocampal areas). In addition, young subjects showed weaker anterior parahippocampal activity during route recognition compared to the old group. In contrast, old compared to young subjects showed less suppressed activity in the left perisylvian region and the anterior cingulate cortex during route encoding. Our findings suggest that age-related navigational memory deficits might be caused by less effective route encoding based on reduced posterior fusiform/parahippocampal and parietal functionality combined with diminished inhibition of perisylvian and anterior cingulate cortices correlated with less effective suppression of task-irrelevant information. In contrast, age differences in neural correlates of route recognition seem to be rather subtle. Old subjects might show a diminished familiarity signal during route recognition in the anterior parahippocampal region.
  • Meyer, A. S., Van der Meulen, F. F., & Brooks, A. (2004). Eye movements during speech planning: Talking about present and remembered objects. Visual Cognition, 11, 553-576. doi:10.1080/13506280344000248.

    Abstract

    Earlier work has shown that speakers naming several objects usually look at each of them before naming them (e.g., Meyer, Sleiderink, & Levelt, 1998). In the present study, participants saw pictures and described them in utterances such as "The chair next to the cross is brown", where the colour of the first object was mentioned after another object had been mentioned. In Experiment 1, we examined whether the speakers would look at the first object (the chair) only once, before naming the object, or twice (before naming the object and before naming its colour). In Experiment 2, we examined whether speakers about to name the colour of the object would look at the object region again when the colour or the entire object had been removed while they were looking elsewhere. We found that speakers usually looked at the target object again before naming its colour, even when the colour was not displayed any more. Speakers were much less likely to fixate upon the target region when the object had been removed from view. We propose that the object contours may serve as a memory cue supporting the retrieval of the associated colour information. The results show that a speaker's eye movements in a picture description task, far from being random, depend on the available visual information and the content and structure of the planned utterance.
  • Meyer, A. S., & Wheeldon, L. (Eds.). (2006). Language production across the life span [Special Issue]. Language and Cognitive Processes, 21(1-3).
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S., & Gerakaki, S. (2017). The art of conversation: Why it’s harder than you might think. Contact Magazine, 43(2), 11-15. Retrieved from http://contact.teslontario.org/the-art-of-conversation-why-its-harder-than-you-might-think/.
  • Meyer, A. S. (2017). Structural priming is not a Royal Road to representations. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e305. doi:10.1017/S0140525X1700053X.

    Abstract

    Branigan & Pickering (B&P) propose that the structural priming paradigm is a Royal Road to linguistic representations of any kind, unobstructed by influences of psychological processes. In my view, however, they are too optimistic about the versatility of the paradigm and, more importantly, its ability to provide direct evidence about the nature of stored linguistic representations.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Mitterer, H. (2006). On the causes of compensation for coarticulation: Evidence for phonological mediation. Perception & Psychophysics, 68(7), 1227-1240.

    Abstract

    This study examined whether compensation for coarticulation in fricative–vowel syllables is phonologically mediated or a consequence of auditory processes. Smits (2001a) had shown that compensation occurs for anticipatory lip rounding in a fricative caused by a following rounded vowel in Dutch. In a first experiment, the possibility that compensation is due to general auditory processing was investigated using nonspeech sounds. These did not cause context effects akin to compensation for coarticulation, although nonspeech sounds influenced speech sound identification in an integrative fashion. In a second experiment, a possible phonological basis for compensation for coarticulation was assessed by using audiovisual speech. Visual displays, which induced the perception of a rounded vowel, also influenced compensation for anticipatory lip rounding in the fricative. These results indicate that compensation for anticipatory lip rounding in fricative–vowel syllables is phonologically mediated. This result is discussed in the light of other compensation-for-coarticulation findings and general theories of speech perception.
  • Mitterer, H., Csépe, V., & Blomert, L. (2006). The role of perceptual integration in the recognition of assimilated word forms. Quarterly Journal of Experimental Psychology, 59(8), 1395-1424. doi:10.1080/17470210500198726.

    Abstract

    We investigated how spoken words are recognized when they have been altered by phonological assimilation. Previous research has shown that there is a process of perceptual compensation for phonological assimilations. Three recently formulated proposals regarding the mechanisms for compensation for assimilation make different predictions with regard to the level at which compensation is supposed to occur as well as regarding the role of specific language experience. In the present study, Hungarian words and nonwords, in which a viable and an unviable liquid assimilation was applied, were presented to Hungarian and Dutch listeners in an identification task and a discrimination task. Results indicate that viably changed forms are difficult to distinguish from canonical forms independent of experience with the assimilation rule applied in the utterances. This reveals that auditory processing contributes to perceptual compensation for assimilation, while language experience has only a minor role to play when identification is required.
  • Mitterer, H., Csépe, V., Honbolygo, F., & Blomert, L. (2006). The recognition of phonologically assimilated words does not depend on specific language experience. Cognitive Science, 30(3), 451-479. doi:10.1207/s15516709cog0000_57.

    Abstract

    In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/→[leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic–phonological theories that assume that speech production is influenced by speech-perception abilities.
  • Mitterer, H. (2006). Is vowel normalization independent of lexical processing? Phonetica, 63(4), 209-229. doi:10.1159/000097306.

    Abstract

    Vowel normalization in speech perception was investigated in three experiments. The range of the second formant in a carrier phrase was manipulated and this affected the perception of a target vowel in a compensatory fashion: A low F2 range in the carrier phrase made it more likely that the target vowel was perceived as a front vowel, that is, with a high F2. Recent experiments indicated that this effect might be moderated by the lexical status of the constituents of the carrier phrase. Manipulation of the lexical status in the present experiments, however, did not affect vowel normalization. In contrast, the range of vowels in the carrier phrase did influence vowel normalization. If the carrier phrase consisted of mid-to-high front vowels only, vowel categories shifted only for mid-to-high front vowels. It is argued that these results are a challenge for episodic models of word recognition.
  • Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers reduce: Evidence from /t/-lenition in Dutch. Journal of Phonetics, 34(1), 73-103. doi:10.1016/j.wocn.2005.03.003.

    Abstract

    In everyday speech, words may be reduced. Little is known about the consequences of such reductions for spoken word comprehension. This study investigated /t/-lenition in Dutch in two corpus studies and three perceptual experiments. The production studies revealed that /t/-lenition is most likely to occur after [s] and before bilabial consonants. The perception experiments showed that listeners take into account phonological context, phonetic detail, and the lexical status of the form in the interpretation of codas that may or may not contain a lenited word-final /t/. These results speak against models of word recognition that make hard decisions on a prelexical level.
  • Moers, C., Meyer, A. S., & Janse, E. (2017). Effects of word frequency and transitional probability on word reading durations of younger and older speakers. Language and Speech, 60(2), 289-317. doi:10.1177/0023830916649215.

    Abstract

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups – younger children (8–12 years), adolescents (12–18 years), and older (62–95 years) Dutch speakers – show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
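    As an illustration of the transitional-probability measure described in this abstract (not part of the original study), forward TP conditions a word on its left neighbour and backward TP conditions it on its right neighbour, both estimated from unigram and bigram counts. The toy corpus below is hypothetical; a corpus study would of course use counts from a large corpus.

        # Minimal sketch of forward and backward transitional probability (TP)
        # from bigram counts; the toy corpus is hypothetical.
        from collections import Counter

        tokens = "the old man read the old book to the child".split()

        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))

        def forward_tp(prev_word, word):
            """P(word | preceding word) = count(prev, word) / count(prev)."""
            return bigrams[(prev_word, word)] / unigrams[prev_word]

        def backward_tp(word, next_word):
            """P(word | following word) = count(word, next) / count(next)."""
            return bigrams[(word, next_word)] / unigrams[next_word]

        print(forward_tp("the", "old"))   # 2/3: "old" follows two of the three "the" tokens
        print(backward_tp("old", "man"))  # 1/1: every "man" token is preceded by "old"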
  • Moisik, S. R., & Dediu, D. (2017). Anatomical biasing and clicks: Evidence from biomechanical modeling. Journal of Language Evolution, 2(1), 37-51. doi:10.1093/jole/lzx004.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modeling and experimental research is required to solidify the claim.

    Additional information

    lzx004_Supp.zip
  • Moisik, S. R., & Gick, B. (2017). The quantal larynx: The stable regions of laryngeal biomechanics and implications for speech production. Journal of Speech, Language, and Hearing Research, 60, 540-560. doi:10.1044/2016_JSLHR-S-16-0019.

    Abstract

    Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal–ventricular stop, and aryepiglotto–epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with “quantal” or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory–biomechanical grounding of numerous phonetic and phonological phenomena.
  • Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21-34. doi:10.1111/tops.12239.

    Abstract

    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning from cross‐situational, multimodal information was constructed and tested. Key to the model's robustness was the presence of multiple, individually unreliable information sources to support learning. This “degeneracy” in the language system has a detrimental effect on learning, compared to a noise‐free environment, but has a critically important effect on acquisition of a canalized system that is resistant to environmental noise in communication.
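    The cross-situational learning setting modelled in this paper can be illustrated with a bare-bones associative sketch: on each trial a word is heard together with several candidate referents, so no single trial disambiguates its meaning, but co-occurrence statistics accumulated across trials do. The toy data are hypothetical, and this is only an illustration of the general paradigm, not the multimodal connectionist model reported here, which additionally learns from several individually unreliable cues.

        # Bare-bones cross-situational word learning: accumulate word-referent
        # co-occurrence counts across individually ambiguous trials (toy data).
        from collections import defaultdict

        trials = [
            ("ball", {"ball", "dog", "cup"}),
            ("ball", {"ball", "shoe", "car"}),
            ("dog",  {"dog", "cup", "car"}),
            ("dog",  {"dog", "ball", "shoe"}),
        ]

        counts = defaultdict(lambda: defaultdict(int))
        for word, visible_referents in trials:
            for referent in visible_referents:
                counts[word][referent] += 1

        for word, referent_counts in counts.items():
            best_guess = max(referent_counts, key=referent_counts.get)
            print(word, "->", best_guess, dict(referent_counts))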
  • Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(S1), 14-39. doi:10.1111/lang.12221.

    Abstract

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language exposure has revolutionized the field. New techniques enable more precise measurements of children's actual language input, and these corpora constrain computational and cognitive theories of language development, which can then generate predictions about learning behavior. We describe several instances where corpus, computational, and experimental work have been productively combined to uncover the first language acquisition process and the richness of multimodal properties of the environment, highlighting how these methods can be extended to address related issues in second language research. Finally, we outline some of the difficulties that can be encountered when applying multimethod approaches and show how these difficulties can be obviated.
  • Monaghan, P., Chang, Y.-N., Welbourne, S., & Brysbaert, M. (2017). Exploring the relations between word frequency, language exposure, and bilingualism in a computational model of reading. Journal of Memory and Language, 93, 1-27. doi:10.1016/j.jml.2016.08.003.

    Abstract

    Individuals show differences in the extent to which psycholinguistic variables predict their responses for lexical processing tasks. A key variable accounting for much variance in lexical processing is frequency, but the size of the frequency effect has been demonstrated to reduce as a consequence of the individual’s vocabulary size. Using a connectionist computational implementation of the triangle model on a large set of English words, where orthographic, phonological, and semantic representations interact during processing, we show that the model demonstrates a reduced frequency effect as a consequence of amount of exposure to the language, a variable that was also a cause of greater vocabulary size in the model. The model was also trained to learn a second language, Dutch, and replicated behavioural observations that increased proficiency in a second language resulted in reduced frequency effects for that language but increased frequency effects in the first language. The model provides a first step to demonstrating causal relations between psycholinguistic variables in a model of individual differences in lexical processing, and the effect of bilingualism on interacting variables within the language processing system.
  • Mongelli, V., Dehaene, S., Vinckier, F., Peretz, I., Bartolomeo, P., & Cohen, L. (2017). Music and words in the visual cortex: The impact of musical expertise. Cortex, 86, 260-274. doi:10.1016/j.cortex.2016.05.016.

    Abstract

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
  • Montero-Melis, G., & Bylund, E. (2017). Getting the ball rolling: the cross-linguistic conceptualization of caused motion. Language and Cognition, 9(3), 446–472. doi:10.1017/langcog.2016.22.

    Abstract

    Does the way we talk about events correspond to how we conceptualize them? Three experiments (N = 135) examined how Spanish and Swedish native speakers judge event similarity in the domain of caused motion (‘He rolled the tyre into the barn’). Spanish and Swedish motion descriptions regularly encode path (‘into’), but differ in how systematically they include manner information (‘roll’). We designed a similarity arrangement task which allowed participants to give varying weights to different dimensions when gauging event similarity. The three experiments progressively reduced the likelihood that speakers were using language to solve the task. We found that, as long as the use of language was possible (Experiments 1 and 2), Swedish speakers were more likely than Spanish speakers to base their similarity arrangements on object manner (rolling/sliding). However, when recruitment of language was hindered through verbal interference, cross-linguistic differences disappeared (Experiment 3). A compound analysis of all experiments further showed that (i) cross-linguistic differences were played out against a backdrop of commonly represented event components, and (ii) describing vs. not describing the events did not augment cross-linguistic differences, but instead had similar effects across languages. We interpret these findings as suggesting a dynamic role of language in event conceptualization.
  • Montero-Melis, G., Eisenbeiss, S., Narasimhan, B., Ibarretxe-Antuñano, I., Kita, S., Kopecka, A., Lüpke, F., Nikitina, T., Tragel, I., Jaeger, T. F., & Bohnemeyer, J. (2017). Satellite- vs. verb-framing underpredicts nonverbal motion categorization: Insights from a large language sample and simulations. Cognitive Semantics, 3(1), 36-61. doi:10.1163/23526416-00301002.

    Abstract

    Is motion cognition influenced by the large-scale typological patterns proposed in Talmy’s (2000) two-way distinction between verb-framed (V) and satellite-framed (S) languages? Previous studies investigating this question have been limited to comparing two or three languages at a time and have come to conflicting results. We present the largest cross-linguistic study on this question to date, drawing on data from nineteen genealogically diverse languages, all investigated in the same behavioral paradigm and using the same stimuli. After controlling for the different dependencies in the data by means of multilevel regression models, we find no evidence that S- vs. V-framing affects nonverbal categorization of motion events. At the same time, statistical simulations suggest that our study and previous work within the same behavioral paradigm suffer from insufficient statistical power. We discuss these findings in the light of the great variability between participants, which suggests flexibility in motion representation. Furthermore, we discuss the importance of accounting for language variability, something which can only be achieved with large cross-linguistic samples.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2006). Age-related effects on speech production: A review. Language and Cognitive Processes, 21, 238-290. doi:10.1080/01690960444000278.

    Abstract

    In discourse, older adults tend to be more verbose and more disfluent than young adults, especially when the task is difficult and when it places few constraints on the content of the utterance. This may be due to (a) language-specific deficits in planning the content and syntactic structure of utterances or in selecting and retrieving words from the mental lexicon, (b) a general deficit in inhibiting irrelevant information, or (c) the selection of a specific speech style. The possibility that older adults have a deficit in lexical retrieval is supported by the results of picture naming studies, in which older adults have been found to name objects less accurately and more slowly than young adults, and by the results of definition naming studies, in which older adults have been found to experience more tip-of-the-tongue (TOT) states than young adults. The available evidence suggests that these age differences are largely due to weakening of the connections linking word lemmas to phonological word forms, though adults above 70 years of age may have an additional deficit in lemma selection.
  • Moscoso del Prado Martín, F., Kostic, A., & Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1), 1-18. doi:10.1016/j.cognition.2003.10.015.

    Abstract

    In this study we introduce an information-theoretical formulation of the emergence of type- and token-based effects in morphological processing. We describe a probabilistic measure of the informational complexity of a word, its information residual, which encompasses the combined influences of the amount of information contained by the target word and the amount of information carried by its nested morphological paradigms. By means of re-analyses of previously published data on Dutch words we show that the information residual outperforms the combination of traditional token- and type-based counts in predicting response latencies in visual lexical decision, and at the same time provides a parsimonious account of inflectional, derivational, and compounding processes.
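    The abstract characterises the information residual only informally, as combining the information carried by the word itself with the information carried by its morphological paradigms. Purely as an illustration of that idea (the paper itself gives the exact definition), one can write the word's information as its negative log relative frequency and a paradigm's contribution as the entropy of its members' frequency distribution:

        % Illustrative formalisation only, not necessarily the authors' exact definition.
        I(w) = -\log_2 \frac{F(w)}{N}
        \qquad \text{information carried by word } w \text{ with frequency } F(w) \text{ in a corpus of } N \text{ tokens}

        H(\mathcal{P}_w) = -\sum_{v \in \mathcal{P}_w} p(v)\,\log_2 p(v),
        \qquad p(v) = \frac{F(v)}{\sum_{u \in \mathcal{P}_w} F(u)}
        \qquad \text{information carried by the morphological paradigm } \mathcal{P}_w

        IR(w) = I(w) - H(\mathcal{P}_w)
        \qquad \text{information residual: word information net of paradigmatic support}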
  • Moscoso del Prado Martín, F., Ernestus, M., & Baayen, R. H. (2004). Do type and token effects reflect different mechanisms? Connectionist modeling of Dutch past-tense formation and final devoicing. Brain and Language, 90(1-3), 287-298. doi:10.1016/j.bandl.2003.12.002.

    Abstract

    In this paper, we show that both token and type-based effects in lexical processing can result from a single, token-based, system, and therefore, do not necessarily reflect different levels of processing. We report three Simple Recurrent Networks modeling Dutch past-tense formation. These networks show token-based frequency effects and type-based analogical effects closely matching the behavior of human participants when producing past-tense forms for both existing verbs and pseudo-verbs. The third network covers the full vocabulary of Dutch, without imposing predefined linguistic structure on the input or output words.
  • Moscoso del Prado Martín, F., Bertram, R., Haikio, T., Schreuder, R., & Baayen, R. H. (2004). Morphological family size in a morphologically rich language: The case of Finnish compared to Dutch and Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 30(6), 1271-1278. doi:10.1037/0278-7393.30.6.1271.

    Abstract

    Finnish has a very productive morphology in which a stem can give rise to several thousand words. This study presents a visual lexical decision experiment addressing the processing consequences of the huge productivity of Finnish morphology. The authors observed that, in Finnish, words with larger morphological families elicited shorter response latencies. However, in contrast to Dutch and Hebrew, it is not the complete morphological family of a complex Finnish word that codetermines response latencies but only the subset of words directly derived from the complex word itself. Comparisons with parallel experiments using translation equivalents in Dutch and Hebrew showed substantial cross-language predictivity of family size between Finnish and Dutch but not between Finnish and Hebrew, reflecting the different ways in which the Hebrew and Finnish morphological systems contribute to the semantic organization of concepts in the mental lexicon.
  • Müller, O., & Hagoort, P. (2006). Access to lexical information in language comprehension: Semantics before syntax. Journal of Cognitive Neuroscience, 18(1), 84-96. doi:10.1162/089892906775249997.

    Abstract

    The recognition of a word makes available its semantic and syntactic properties. Using electrophysiological recordings, we investigated whether one set of these properties is available earlier than the other set. Dutch participants saw nouns on a computer screen and performed push-button responses: In one task, grammatical gender determined response hand (left/right) and semantic category determined response execution (go/no-go). In the other task, response hand depended on semantic category, whereas response execution depended on gender. During the latter task, response preparation occurred on no-go trials, as measured by the lateralized readiness potential: Semantic information was used for response preparation before gender information inhibited this process. Furthermore, an inhibition-related N2 effect occurred earlier for inhibition by semantics than for inhibition by gender. In summary, electrophysiological measures of both response preparation and inhibition indicated that the semantic word property was available earlier than the syntactic word property when participants read single words.
  • Murakami, S., Verdonschot, R. G., Kreiborg, S., Kakimoto, N., & Kawaguchi, A. (2017). Stereoscopy in dental education: An investigation. Journal of Dental Education, 81(4), 450-457. doi:10.21815/JDE.016.002.

    Abstract

    The aim of this study was to investigate whether stereoscopy can play a meaningful role in dental education. The study used an anaglyph technique in which two images were presented separately to the left and right eyes (using red/cyan filters), which, combined in the brain, give enhanced depth perception. A positional judgment task was performed to assess whether the use of stereoscopy would enhance depth perception among dental students at Osaka University in Japan. Subsequently, the optimum angle was evaluated to obtain maximum ability to discriminate among complex anatomical structures. Finally, students completed a questionnaire on a range of matters concerning their experience with stereoscopic images including their views on using stereoscopy in their future careers. The results showed that the students who used stereoscopy were better able than students who did not to appreciate spatial relationships between structures when judging relative positions. The maximum ability to discriminate among complex anatomical structures was between 2 and 6 degrees. The students' overall experience with the technique was positive, and although most did not have a clear vision for stereoscopy in their own practice, they did recognize its merits for education. These results suggest that using stereoscopic images in dental education can be quite valuable as stereoscopy greatly helped these students' understanding of the spatial relationships in complex anatomical structures.
  • Murphy, S. K., Nolan, C. M., Huang, Z., Kucera, K. S., Freking, B. A., Smith, T. P., Leymaster, K. A., Weidman, J. R., & Jirtle, R. L. (2006). Callipyge mutation affects gene expression in cis: A potential role for chromatin structure. Genome Research, 16, 340-346. doi:10.1101/gr.4389306.

    Abstract

    Muscular hypertrophy in callipyge sheep results from a single nucleotide substitution located in the genomic interval between the imprinted Delta, Drosophila, Homolog-like 1 (DLK1) and Maternally Expressed Gene 3 (MEG3). The mechanism linking the mutation to muscle hypertrophy is unclear but involves DLK1 overexpression. The mutation is contained within CLPG1 transcripts produced from this region. Herein we show that CLPG1 is expressed prenatally in the hypertrophy-responsive longissimus dorsi muscle by all four possible genotypes, but postnatal expression is restricted to sheep carrying the mutation. Surprisingly, the mutation results in nonimprinted monoallelic transcription of CLPG1 from only the mutated allele in adult sheep, whereas it is expressed biallelically during prenatal development. We further demonstrate that local CpG methylation is altered by the presence of the mutation in longissimus dorsi of postnatal sheep. For 10 CpG sites flanking the mutation, methylation is similar prenatally across genotypes, but doubles postnatally in normal sheep. This normal postnatal increase in methylation is significantly repressed in sheep carrying one copy of the mutation, and repressed even further in sheep with two mutant alleles. The attenuation in methylation status in the callipyge sheep correlates with the onset of the phenotype, continued CLPG1 transcription, and high-level expression of DLK1. In contrast, normal sheep exhibit hypermethylation of this locus after birth and CLPG1 silencing, which coincides with DLK1 transcriptional repression. These data are consistent with the notion that the callipyge mutation inhibits perinatal nucleation of regional chromatin condensation resulting in continued elevated transcription of prenatal DLK1 levels in adult callipyge sheep. We propose a model incorporating these results that can also account for the enigmatic normal phenotype of homozygous mutant sheep.
  • Narasimhan, B., & Gullberg, M. (2006). Perspective-shifts in event descriptions in Tamil child language. Journal of Child Language, 33(1), 99-124. doi:10.1017/S0305000905007191.

    Abstract

    Children are able to take multiple perspectives in talking about entities and events. But the nature of children's sensitivities to the complex patterns of perspective-taking in adult language is unknown. We examine perspective-taking in four- and six-year-old Tamil-speaking children describing placement events, as reflected in the use of a general placement verb (veyyii ‘put’) versus two fine-grained caused posture expressions specifying orientation, either vertical (nikka veyyii ‘make stand’) or horizontal (paDka veyyii ‘make lie’). We also explore whether animacy systematically promotes shifts to a fine-grained perspective. The results show that four- and six-year-olds switch perspectives as flexibly and systematically as adults do. Animacy influences shifts to a fine-grained perspective similarly across age groups. However, unexpectedly, six-year-olds also display greater overall sensitivity to orientation, preferring the vertical over the horizontal caused posture expression. Despite early flexibility, the factors governing the patterns of perspective-taking on events are undergoing change even in later childhood, reminiscent of U-shaped semantic reorganizations observed in children's lexical knowledge. The present study points to the intriguing possibility that mechanisms that operate at the level of semantics could also influence subtle patterns of lexical choice and perspective-shifts.
  • Narasimhan, B., Sproat, R., & Kiraz, G. (2004). Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4), 319-333. doi:10.1023/B:IJST.0000037075.71599.62.

    Abstract

    We describe the phenomenon of schwa-deletion in Hindi and how it is handled in the pronunciation component of a multilingual concatenative text-to-speech system. Each of the consonants in written Hindi is associated with an “inherent” schwa vowel which is not represented in the orthography. For instance, the Hindi word pronounced as [namak] (’salt’) is represented in the orthography using the consonantal characters for [n], [m], and [k]. Two main factors complicate the issue of schwa pronunciation in Hindi. First, not every schwa following a consonant is pronounced within the word. Second, in multimorphemic words, the presence of a morpheme boundary can block schwa deletion where it might otherwise occur. We propose a model for schwa-deletion which combines a general purpose schwa-deletion rule proposed in the linguistics literature (Ohala, 1983), with additional morphological analysis necessitated by the high frequency of compounds in our database. The system is implemented in the framework of finite-state transducer technology.
  • Negwer, M., & Schubert, D. (2017). Talking convergence: Growing evidence links FOXP2 and retinoic acid in shaping speech-related motor circuitry. Frontiers in Neuroscience, 11: 19. doi:10.3389/fnins.2017.00019.

    Abstract

    A commentary on
    FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways

    by Devanna, P., Middelbeek, J., and Vernes, S. C. (2014). Front. Cell. Neurosci. 8:305. doi: 10.3389/fncel.2014.00305
  • Newbury, D. F., Cleak, J. D., Banfield, E., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Slonims, V., Baird, G., Bolton, P., Everitt, A., Hennessy, E., Main, M., Helms, P., Kindley, A. D., Hodson, A., Watson, J., O’Hare, A., Cohen, W., Cowie, H., Steel, J., MacLean, A., Seckl, J., Bishop, D. V. M., Simkin, Z., Conti-Ramsden, G., & Pickles, A. (2004). Highly significant linkage to the SLI1 locus in an expanded sample of individuals affected by specific language impairment. American Journal of Human Genetics, 74(6), 1225-1238. doi:10.1086/421529.

    Abstract

    Specific language impairment (SLI) is defined as an unexplained failure to acquire normal language skills despite adequate intelligence and opportunity. We have reported elsewhere a full-genome scan in 98 nuclear families affected by this disorder, with the use of three quantitative traits of language ability (the expressive and receptive tests of the Clinical Evaluation of Language Fundamentals and a test of nonsense word repetition). This screen implicated two quantitative trait loci, one on chromosome 16q (SLI1) and a second on chromosome 19q (SLI2). However, a second independent genome screen performed by another group, with the use of parametric linkage analyses in extended pedigrees, found little evidence for the involvement of either of these regions in SLI. To investigate these loci further, we have collected a second sample, consisting of 86 families (367 individuals, 174 independent sib pairs), all with probands whose language skills are ⩾1.5 SD below the mean for their age. Haseman-Elston linkage analysis resulted in a maximum LOD score (MLS) of 2.84 on chromosome 16 and an MLS of 2.31 on chromosome 19, both of which represent significant linkage at the 2% level. Amalgamation of the wave 2 sample with the cohort used for the genome screen generated a total of 184 families (840 individuals, 393 independent sib pairs). Analysis of linkage within this pooled group strengthened the evidence for linkage at SLI1 and yielded a highly significant LOD score (MLS = 7.46, interval empirical P<.0004). Furthermore, linkage at the same locus was also demonstrated to three reading-related measures (basic reading [MLS = 1.49], spelling [MLS = 2.67], and reading comprehension [MLS = 1.99] subtests of the Wechsler Objectives Reading Dimensions).
  • Niccolai, V., Klepp, A., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Semantic discrimination impacts tDCS modulation of verb processing. Scientific Reports, 7: 17162. doi:10.1038/s41598-017-17326-w.

    Abstract

    Motor cortex activation observed during body-related verb processing hints at simulation accompanying linguistic understanding. By exploiting the up- and down-regulation that anodal and cathodal transcranial direct current stimulation (tDCS) exert on motor cortical excitability, we aimed at further characterizing the functional contribution of the motor system to linguistic processing. In a double-blind sham-controlled within-subjects design, online stimulation was applied to the left hemispheric hand-related motor cortex of 20 healthy subjects. A dual, double-dissociation task required participants to semantically discriminate concrete (hand/foot) from abstract verb primes as well as to respond with the hand or with the foot to verb-unrelated geometric targets. Analyses were conducted with linear mixed models. Semantic priming was confirmed by faster and more accurate reactions when the response effector was congruent with the verb’s body part. Cathodal stimulation induced faster responses for hand verb primes thus indicating a somatotopical distribution of cortical activation as induced by body-related verbs. Importantly, this effect depended on performance in semantic discrimination. The current results point to verb processing being selectively modifiable by neuromodulation and at the same time to a dependence of tDCS effects on enhanced simulation. We discuss putative mechanisms operating in this reciprocal dependence of neuromodulation and motor resonance.

    Additional information

    41598_2017_17326_MOESM1_ESM.pdf
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). When peanuts fall in love: N400 evidence for the power of discourse. Journal of Cognitive Neuroscience, 18(7), 1098-1111. doi:10.1162/jocn.2006.18.7.1098.

    Abstract

    In linguistic theories of how sentences encode meaning, a distinction is often made between the context-free rule-based combination of lexical–semantic features of the words within a sentence (“semantics”), and the contributions made by wider context (“pragmatics”). In psycholinguistics, this distinction has led to the view that listeners initially compute a local, context-independent meaning of a phrase or sentence before relating it to the wider context. An important aspect of such a two-step perspective on interpretation is that local semantics cannot initially be overruled by global contextual factors. In two spoken-language event-related potential experiments, we tested the viability of this claim by examining whether discourse context can overrule the impact of the core lexical–semantic feature animacy, considered to be an innate organizing principle of cognition. Two-step models of interpretation predict that verb–object animacy violations, as in “The girl comforted the clock,” will always perturb the unfolding interpretation process, regardless of wider context. When presented in isolation, such anomalies indeed elicit a clear N400 effect, a sign of interpretive problems. However, when the anomalies were embedded in a supportive context (e.g., a girl talking to a clock about his depression), this N400 effect disappeared completely. Moreover, given a suitable discourse context (e.g., a story about an amorous peanut), animacy-violating predicates (“the peanut was in love”) were actually processed more easily than canonical predicates (“the peanut was salted”). Our findings reveal that discourse context can immediately overrule local lexical–semantic violations, and therefore suggest that language comprehension does not involve an initially context-free semantic analysis.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). Individual differences and contextual bias in pronoun resolution: Evidence from ERPs. Brain Research, 1118(1), 155-167. doi:10.1016/j.brainres.2006.08.022.

    Abstract

    Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often ‘formally’ ambiguous. But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., “Jennifer Lopez told Madonna that she had too much money.”). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group-level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias.
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent than for referentially problematic expressions. Beamformer analysis of the high-density EEG data in Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Nivard, M. G., Gage, S. H., Hottenga, J. J., van Beijsterveldt, C. E. M., Abdellaoui, A., Bartels, M., Baselmans, B. M. L., Ligthart, L., St Pourcain, B., Boomsma, D. I., Munafò, M. R., & Middeldorp, C. M. (2017). Genetic overlap between schizophrenia and developmental psychopathology: Longitudinal and multivariate polygenic risk prediction of common psychiatric traits during development. Schizophrenia Bulletin, 43(6), 1197-1207. doi:10.1093/schbul/sbx031.

    Abstract

    Background: Several nonpsychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the etiology of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by correlated genetic risk factors. Methods: Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for 2588 children from the Netherlands Twin Register (NTR) and 6127 from the Avon Longitudinal Study of Parents and Children (ALSPAC). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at ages 7, 10, 12/13, and 15 years in the 2 cohorts. Results were then meta-analyzed, and a meta-regression analysis was performed to test differences in effect sizes over age and disorders. Results: Schizophrenia PRS were associated with childhood and adolescent psychopathology. Meta-regression analysis showed differences in the associations across disorders, with the strongest association with childhood and adolescent depression and a weaker association for ODD/CD at age 7. The associations increased with age, and this increase was steepest for ADHD and ODD/CD. Genetic correlations varied between 0.10 and 0.25. Conclusion: By optimally using longitudinal data across diagnoses in a multivariate meta-analysis, this study sheds light on the development of childhood disorders into severe adult psychiatric disorders. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent-onset psychopathology.
  • Nivard, M. G., Lubke, G. H., Dolan, C. V., Evans, D. M., St Pourcain, B., Munafo, M. R., & Middeldorp, C. M. (2017). Joint developmental trajectories of internalizing and externalizing disorders between childhood and adolescence. Development and Psychopathology, 29(3), 919-928. doi:10.1017/S0954579416000572.

    Abstract

    This study sought to identify trajectories of DSM-IV-based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into their comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent-onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent-onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early-life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start in adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long-term memory. Therefore, memory-based text processing refers both to the bottom-up processing of the text and to the top-down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norris, D., Cutler, A., McQueen, J. M., & Butterfield, S. (2006). Phonological and conceptual activation in speech comprehension. Cognitive Psychology, 53(2), 146-193. doi:10.1016/j.cogpsych.2006.03.001.

    Abstract

    We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate, distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.