Publications

  • Lewis, A. G., Lemhöfer, K., Schoffelen, J.-M., & Schriefers, H. (2016). Gender agreement violations modulate beta oscillatory dynamics during sentence comprehension: A comparison of second language learners and native speakers. Neuropsychologia, 89(1), 254-272. doi:10.1016/j.neuropsychologia.2016.06.031.

    Abstract

    For native speakers, many studies suggest a link between oscillatory neural activity in the beta frequency range and syntactic processing. For late second language (L2) learners on the other hand, the extent to which the neural architecture supporting syntactic processing is similar to or different from that of native speakers is still unclear. In a series of four experiments, we used electroencephalography to investigate the link between beta oscillatory activity and the processing of grammatical gender agreement in Dutch determiner-noun pairs, for Dutch native speakers, and for German L2 learners of Dutch. In Experiment 1 we show that for native speakers, grammatical gender agreement violations are yet another among many syntactic factors that modulate beta oscillatory activity during sentence comprehension. Beta power is higher for grammatically acceptable target words than for those that mismatch in grammatical gender with their preceding determiner. In Experiment 2 we observed no such beta modulations for L2 learners, irrespective of whether trials were sorted according to objective or subjective syntactic correctness. Experiment 3 ruled out that the absence of a beta effect for the L2 learners in Experiment 2 was due to repetition of the target nouns in objectively correct and incorrect determiner-noun pairs. Finally, Experiment 4 showed that when L2 learners are required to explicitly focus on grammatical information, they show modulations of beta oscillatory activity, comparable to those of native speakers, but only when trials are sorted according to participants’ idiosyncratic lexical representations of the grammatical gender of target nouns. Together, these findings suggest that beta power in L2 learners is sensitive to violations of grammatical gender agreement, but only when the importance of grammatical information is highlighted, and only when participants' subjective lexical representations are taken into account.
  • Liljeström, M., Hulten, A., Parkkonen, L., & Salmelin, R. (2009). Comparing MEG and fMRI views to naming actions and objects. Human Brain Mapping, 30, 1845-1856. doi:10.1002/hbm.20785.

    Abstract

    Most neuroimaging studies are performed using one imaging method only, either functional magnetic resonance imaging (fMRI), electroencephalography (EEG), or magnetoencephalography (MEG). Information on both location and timing has been sought by recording fMRI and EEG simultaneously, or MEG and fMRI in separate sessions. Such approaches assume similar active areas whether detected via hemodynamic or electrophysiological signatures. Direct comparisons, after independent analysis of data from each imaging modality, have been conducted primarily on low-level sensory processing. Here, we report MEG (timing and location) and fMRI (location) results in 11 subjects when they named pictures that depicted an action or an object. The experimental design was exactly the same for the two imaging modalities. The MEG data were analyzed with two standard approaches: a set of equivalent current dipoles and a distributed minimum norm estimate. The fMRI blood-oxygen-level-dependent (BOLD) data were subjected to the usual random-effect contrast analysis. At the group level, MEG and fMRI data showed fairly good convergence, with both overall activation patterns and task effects localizing to comparable cortical regions. There were some systematic discrepancies, however, and the correspondence was less compelling in the individual subjects. The present analysis should be helpful in reconciling results of fMRI and MEG studies on high-level cognitive functions.
  • Lind, J., Persson, J., Ingvar, M., Larsson, A., Cruts, M., Van Broeckhoven, C., Adolfsson, R., Bäckman, L., Nilsson, L.-G., Petersson, K. M., & Nyberg, L. (2006). Reduced functional brain activity response in cognitively intact apolipoprotein E ε4 carriers. Brain, 129(5), 1240-1248. doi:10.1093/brain/awl054.

    Abstract

    The apolipoprotein E ε4 (APOE ε4) is the main known genetic risk factor for Alzheimer's disease. Genetic assessments in combination with other diagnostic tools, such as neuroimaging, have the potential to facilitate early diagnosis. In this large-scale functional MRI (fMRI) study, we have contrasted 30 APOE ε4 carriers (age range: 49–74 years; 19 females), of which 10 were homozygous for the ε4 allele, and 30 non-carriers with regard to brain activity during a semantic categorization task. Test groups were closely matched for sex, age and education. Critically, both groups were cognitively intact and thus symptom-free of Alzheimer's disease. APOE ε4 carriers showed reduced task-related responses in the left inferior parietal cortex, and bilaterally in the anterior cingulate region. A dose-related response was observed in the parietal area such that diminution was most pronounced in homozygous compared with heterozygous carriers. In addition, contrasts of processing novel versus familiar items revealed an abnormal response in the right hippocampus in the APOE ε4 group, mainly expressed as diminished sensitivity to the relative novelty of stimuli. Collectively, these findings indicate that genetic risk translates into reduced functional brain activity, in regions pertinent to Alzheimer's disease, well before alterations can be detected at the behavioural level.
  • Liszkowski, U., Schäfer, M., Carpenter, M., & Tomasello, M. (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654-660.

    Abstract

    One of the defining features of human language is displacement, the ability to make reference to absent entities. Here we show that prelinguistic, 12-month-old infants already can use a nonverbal pointing gesture to make reference to absent entities. We also show that chimpanzees—who can point for things they want humans to give them—do not point to refer to absent entities in the same way. These results demonstrate that the ability to communicate about absent but mutually known entities depends not on language, but rather on deeper social-cognitive skills that make acts of linguistic reference possible in the first place. These nonlinguistic skills for displaced reference emerged apparently only after humans' divergence from great apes some 6 million years ago.
  • Liszkowski, U., Carpenter, M., Striano, T., & Tomasello, M. (2006). Twelve- and 18-month-olds point to provide information for others. Journal of Cognition and Development, 7, 173-187. doi:10.1207/s15327647jcd0702_2.

    Abstract

    Classically, infants are thought to point for 2 main reasons: (a) They point imperatively when they want an adult to do something for them (e.g., give them something; “Juice!”), and (b) they point declaratively when they want an adult to share attention with them to some interesting event or object (“Look!”). Here we demonstrate the existence of another motive for infants' early pointing gestures: to inform another person of the location of an object that person is searching for. This informative motive for pointing suggests that from very early in ontogeny humans conceive of others as intentional agents with informational states and they have the motivation to provide such information communicatively.
  • Little, H., Eryilmaz, K., & de Boer, B. (2017). Conventionalisation and discrimination as competing pressures on continuous speech-like signals. Interaction Studies, 18(3), 355-378. doi:10.1075/is.18.3.04lit.

    Abstract

    Arbitrary communication systems can emerge from iconic beginnings through processes of conventionalisation via interaction. Here, we explore whether this process of conventionalisation occurs with continuous, auditory signals. We conducted an artificial signalling experiment. Participants either created signals for themselves, or for a partner in a communication game. We found no evidence that the speech-like signals in our experiment became less iconic or simpler through interaction. We hypothesise that the reason for our results is that when it is difficult to be iconic initially because of the constraints of the modality, then iconicity needs to emerge to enable grounding before conventionalisation can occur. Further, pressures for discrimination, caused by the expanding meaning space in our study, may cause more complexity to emerge, again as a result of the restrictive signalling modality. Our findings have possible implications for the processes of conventionalisation possible in signed and spoken languages, as the spoken modality is more restrictive than the manual modality.
  • Little, H., Rasilo, H., van der Ham, S., & Eryılmaz, K. (2017). Empirical approaches for investigating the origins of structure in speech. Interaction Studies, 18(3), 332-354. doi:10.1075/is.18.3.03lit.

    Abstract

    In language evolution research, the use of computational and experimental methods to investigate the emergence of structure in language is exploding. In this review, we look exclusively at work exploring the emergence of structure in speech, on both a categorical level (what drives the emergence of an inventory of individual speech sounds), and a combinatorial level (how these individual speech sounds emerge and are reused as part of larger structures). We show that computational and experimental methods for investigating population-level processes can be effectively used to explore and measure the effects of learning, communication and transmission on the emergence of structure in speech. We also look at work on child language acquisition as a tool for generating and validating hypotheses for the emergence of speech categories. Further, we review the effects of noise, iconicity and production effects.
  • Little, H. (2017). Introduction to the Special Issue on the Emergence of Sound Systems. Journal of Language Evolution, 2(1), 1-3. doi:10.1093/jole/lzx014.

    Abstract

    How did human sound systems get to be the way they are? Collecting contributions implementing a wealth of methods to address this question, this special issue treats language and speech as being the result of a complex adaptive system. The work throughout provides evidence and theory at the levels of phylogeny, glossogeny and ontogeny. In taking a multi-disciplinary approach that considers interactions within and between these levels of selection, the papers collectively provide a valuable, integrated contribution to existing work on the evolution of speech and sound systems.
  • Little, H., Eryılmaz, K., & de Boer, B. (2017). Signal dimensionality and the emergence of combinatorial structure. Cognition, 168, 1-15. doi:10.1016/j.cognition.2017.06.011.

    Abstract

    In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space having similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants' hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we use trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced and found that when topologies do not match, then there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign).

    Additional information

    mmc1.zip
  • Little, H. (Ed.). (2017). Special Issue on the Emergence of Sound Systems [Special Issue]. Journal of Language Evolution, 2(1).
  • Lockwood, G. (2016). Academic clickbait: Articles with positively-framed titles, interesting phrasing, and no wordplay get more attention online. The Winnower, 3: e146723.36330. doi:10.15200/winn.146723.36330.

    Abstract

    This article is about whether the factors which drive online sharing of non-scholarly content also apply to academic journal titles. It uses Altmetric scores as a measure of online attention to articles from Frontiers in Psychology published in 2013 and 2014. Article titles with result-oriented positive framing and more interesting phrasing receive higher Altmetric scores, i.e., get more online attention. Article titles with wordplay and longer article titles receive lower Altmetric scores. This suggests that the same factors that affect how widely non-scholarly content is shared extend to academia, which has implications for how academics can give their work greater online impact.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). How iconicity helps people learn new words: neural correlates and individual differences in sound-symbolic bootstrapping. Collabra, 2(1): 7. doi:10.1525/collabra.42.

    Abstract

    Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word learning harder, especially for people who are more sensitive to sound symbolism.

    Additional information

    https://osf.io/ema3t/
  • Lockwood, G., Dingemanse, M., & Hagoort, P. (2016). Sound-symbolism boosts novel word learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1274-1281. doi:10.1037/xlm0000235.

    Abstract

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory information, to investigate how sensitive Dutch speakers are to sound-symbolism in Japanese in a learning task. Participants were taught 2 sets of Japanese ideophones; 1 set with the ideophones’ real meanings in Dutch, the other set with their opposite meanings. In Experiment 1, participants learned the ideophones and their real meanings much better than the ideophones with their opposite meanings. Moreover, despite the learning rounds, participants were still able to guess the real meanings of the ideophones in a 2-alternative forced-choice test after they were informed of the manipulation. This shows that natural language sound-symbolism is robust beyond 2-alternative forced-choice paradigms and affects broader language processes such as word learning. In Experiment 2, participants learned regular Japanese adjectives with the same manipulation, and there was no difference between real and opposite conditions. This shows that natural language sound-symbolism is especially strong in ideophones, and that people learn words better when form and meaning match. The highlights of this study are as follows: (a) Dutch speakers learn real meanings of Japanese ideophones better than opposite meanings, (b) Dutch speakers accurately guess meanings of Japanese ideophones, (c) this sensitivity happens despite learning some opposite pairings, (d) no such learning effect exists for regular Japanese adjectives, and (e) this shows the importance of sound-symbolism in scaffolding language learning.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
  • Magyari, L., De Ruiter, J. P., & Levinson, S. C. (2017). Temporal preparation for speaking in question-answer sequences. Frontiers in Psychology, 8: 211. doi:10.3389/fpsyg.2017.00211.

    Abstract

    In everyday conversations, the gap between turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In the experiment, it was possible to guess the answer at the beginning of the questions in half of the experimental trials. We also manipulated whether it was possible to predict the length of the last word of the questions. The results suggest that when listeners know the answer early, they start speech production already during the questions. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

    Additional information

    presentation 1.pdf
  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2017). Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Frontiers in Psychology, 8: 1164. doi:10.3389/fpsyg.2017.01164.

    Abstract

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

    Additional information

    Supplementary Material Appendices.pdf
  • Majid, A., Enfield, N. J., & Van Staden, M. (Eds.). (2006). Parts of the body: Cross-linguistic categorisation [Special Issue]. Language Sciences, 28(2-3).
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2006). Covariation and quantifier polarity: What determines causal attribution in vignettes? Cognition, 99(1), 35-51. doi:10.1016/j.cognition.2004.12.004.

    Abstract

    Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information about discourse focus as well as covariation information. In the attribution literature, quantifiers are used to depict covariation, but they confound quantity and focus. In four experiments, we show that focus explains all (Experiment 1) or some (Experiments 2, 3 and 4) of the impact of covariation information on the attributions made, confirming the importance of the confound. Attribution experiments using vignettes that present covariation information with natural language quantifiers may overestimate the impact of set size information, and ignore the impact of quantifier-induced focus.
  • Majid, A. (2006). Body part categorisation in Punjabi. Language Sciences, 28(2-3), 241-261. doi:10.1016/j.langsci.2005.11.012.

    Abstract

    A key question in categorisation is to what extent people categorise in the same way, or differently. This paper examines categorisation of the body in Punjabi, an Indo-European language spoken in Pakistan and India. First, an inventory of body part terms is presented, illustrating how Punjabi speakers segment and categorise the body. There are some noteworthy terms in the inventory, which illustrate categories in Punjabi that are unusual when compared to other languages presented in this volume. Second, Punjabi speakers’ conceptualisation of the relationship between body parts is explored. While some body part terms are viewed as being partonomically related, others are viewed as being in a locative relationship. It is suggested that there may be key ways in which languages differ in both the categorisation of the body into parts, and in how these parts are related to one another.
  • Majid, A. (2016). The content of minds: Asifa Majid talks to Jon Sutton about language and thought. The Psychologist, 29, 554-556.
  • Majid, A., Speed, L., Croijmans, I., & Arshamian, A. (2017). What makes a better smeller? Perception, 46, 406-430. doi:10.1177/0301006616688224.

    Abstract

    Olfaction is often viewed as difficult, yet the empirical evidence suggests a different picture. A closer look shows people around the world differ in their ability to detect, discriminate, and name odors. This gives rise to the question of what influences our ability to smell. Instead of focusing on olfactory deficiencies, this review presents a positive perspective by focusing on factors that make someone a better smeller. We consider three driving forces in improving olfactory ability: one’s biological makeup, one’s experience, and the environment. For each factor, we consider aspects proposed to improve odor perception and critically examine the evidence; as well as introducing lesser discussed areas. In terms of biology, there are cases of neurodiversity, such as olfactory synesthesia, that serve to enhance olfactory ability. Our lifetime experience, be it typical development or unique training experience, can also modify the trajectory of olfaction. Finally, our odor environment, in terms of ambient odor or culinary traditions, can influence odor perception too. Rather than highlighting the weaknesses of olfaction, we emphasize routes to harnessing our olfactory potential.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2006). Animacy in processing relative clauses: The hikers that rocks crush. Journal of Memory and Language, 54(4), 466-490. doi:10.1016/j.jml.2006.01.001.

    Abstract

    For several languages, a preference for subject relative clauses over object relative clauses has been reported. However, Mak, Vonk, and Schriefers (2002) showed that there is no such preference for relative clauses with an animate subject and an inanimate object. A Dutch object relative clause as …de rots, die de wandelaars beklommen hebben… (‘the rock, that the hikers climbed’) did not show longer reading times than its subject relative clause counterpart …de wandelaars, die de rots beklommen hebben… (‘the hikers, who climbed the rock’). In the present paper, we explore the factors that might contribute to this modulation of the usual preference for subject relative clauses. Experiment 1 shows that the animacy of the antecedent per se is not the decisive factor. On the contrary, in relative clauses with an inanimate antecedent and an inanimate relative-clause-internal noun phrase, the usual preference for subject relative clauses is found. In Experiments 2 and 3, subject and object relative clauses were contrasted in which either the subject or the object was inanimate. The results are interpreted in a framework in which the choice for an analysis of the relative clause is based on the interplay of animacy with topichood and verb semantics. This framework accounts for the commonly reported preference for subject relative clauses over object relative clauses as well as for the pattern of data found in the present experiments.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L. L., & Heritage, J. (2006). Ruling out the need for antibiotics: Are we sending the right message? Archives of Pediatrics & Adolescent Medicine, 160(9), 945-952.
  • Mani, N., Daum, M., & Huettig, F. (2016). “Pro-active” in many ways: Developmental evidence for a dynamic pluralistic approach to prediction. Quarterly Journal of Experimental Psychology, 69(11), 2189-2201. doi:10.1080/17470218.2015.1111395.

    Abstract

    The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners use the production system covertly to anticipate what the other person might be likely to say) are two potential factors, which have been proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence “The boy reads a”, based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children’s prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context-dependent. We conclude that multiple factors simultaneously influence listeners’ anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners’ prowess at predictive language processing.
  • Manrique, E. (2016). Other-initiated repair in Argentine Sign Language. Open Linguistics, 2, 1-34. doi:10.1515/opli-2016-0001.

    Abstract

    Other-initiated repair is an essential interactional practice to secure mutual understanding in everyday interaction. This article presents evidence from a large conversational corpus of a sign language, showing that signers of Argentine Sign Language (Lengua de Señas Argentina or ‘LSA’), like users of spoken languages, use a systematic set of linguistic formats and practices to indicate troubles of signing, seeing and understanding. The general aim of this article is to provide a general overview of the different visual-gestural linguistic patterns of other-initiated repair sequences in LSA. It also describes the quantitative distribution of other-initiated repair formats based on a collection of 213 cases. It describes the multimodal components of open and restricted types of repair initiators, and reports a previously undescribed implicit practice to initiate repair in LSA in comparison to explicitly produced formats. Part of a special issue presenting repair systems across a range of languages, this article contributes to a better understanding of the phenomenon of other-initiated repair in terms of visual and gestural practices in human interaction in both signed and spoken languages.
  • Mansbridge, M. P., Tamaoka, K., Xiong, K., & Verdonschot, R. G. (2017). Ambiguity in the processing of Mandarin Chinese relative clauses: One factor cannot explain it all. PLoS One, 12(6): e0178369. doi:10.1371/journal.pone.0178369.

    Abstract

    This study addresses the question of whether native Mandarin Chinese speakers process and comprehend subject-extracted relative clauses (SRC) more readily than object-extracted relative clauses (ORC) in Mandarin Chinese. This has been a hotly debated issue, with various studies producing contrasting results. Using two eye-tracking experiments with ambiguous and unambiguous RCs, this study shows that both ORCs and SRCs have different processing requirements depending on the locus and time course during reading. The results reveal that ORC reading was possibly facilitated by linear/temporal integration and canonicity. On the other hand, similarity-based interference made ORCs more difficult, and expectation-based processing was more prominent for unambiguous ORCs. Overall, RC processing in Mandarin should not be reduced to a single ORC (dis)advantage, but understood as multiple interdependent factors influencing whether ORCs are either more difficult or easier to parse depending on the task and context at hand.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. PLoS Biology, 15(3): e2000663. doi:10.1371/journal.pbio.2000663.

    Abstract

    Biological systems often detect species-specific signals in the environment. In humans, speech and language are species-specific signals of fundamental biological importance. To detect the linguistic signal, human brains must form hierarchical representations from a sequence of perceptual inputs distributed in time. What mechanism underlies this ability? One hypothesis is that the brain repurposed an available neurobiological mechanism when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect the linguistic signal and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a computational model of analogy, built for an entirely different purpose—learning relational reasoning—processes sentences, represents their meaning, and, crucially, exhibits oscillatory activation patterns resembling cortical signals elicited by the same stimuli. Such redundancy in the cortical and machine signals is indicative of formal and mechanistic alignment between representational structure building and “cortical” oscillations. By inductive inference, this synergy suggests that the cortical signal reflects structure generation, just as the machine signal does. A single mechanism—using time to encode information across a layered network—generates the kind of (de)compositional representational hierarchy that is crucial for human language and offers a mechanistic linking hypothesis between linguistic representation and cortical computation.
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Martin, A. E., & McElree, B. (2009). Memory operations that support language comprehension: Evidence from verb-phrase ellipsis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1231-1239. doi:10.1037/a0016271.

    Abstract

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation.
  • Martin, A. E. (2016). Language processing as cue integration: Grounding the psychology of language in perception and neurophysiology. Frontiers in Psychology, 7: 120. doi:10.3389/fpsyg.2016.00120.

    Abstract

    I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.
  • Martin, A. E., Monahan, P. J., & Samuel, A. G. (2017). Prediction of agreement and phonetic overlap shape sublexical identification. Language and Speech, 60(3), 356-376. doi:10.1177/0023830916650714.

    Abstract

    The mapping between the physical speech signal and our internal representations is rarely straightforward. When faced with uncertainty, higher-order information is used to parse the signal and because of this, the lexicon and some aspects of sentential context have been shown to modulate the identification of ambiguous phonetic segments. Here, using a phoneme identification task (i.e., participants judged whether they heard [o] or [a] at the end of an adjective in a noun–adjective sequence), we asked whether grammatical gender cues influence phonetic identification and if this influence is shaped by the phonetic properties of the agreeing elements. In three experiments, we show that phrase-level gender agreement in Spanish affects the identification of ambiguous adjective-final vowels. Moreover, this effect is strongest when the phonetic characteristics of the element triggering agreement and the phonetic form of the agreeing element are identical. Our data are consistent with models wherein listeners generate specific predictions based on the interplay of underlying morphosyntactic knowledge and surface phonetic cues.
  • Massaro, D. W., & Jesse, A. (2009). Read my lips: Speech distortions in musical lyrics can be overcome (slightly) by facial information. Speech Communication, 51(7), 604-621. doi:10.1016/j.specom.2008.05.013.

    Abstract

    Understanding the lyrics of many contemporary songs is difficult, and an earlier study [Hidalgo-Barnes, M., Massaro, D.W., 2007. Read my lips: an animated face helps communicate musical lyrics. Psychomusicology 19, 3–12] showed a benefit for lyrics recognition when seeing a computer-animated talking head (Baldi) mouthing the lyrics along with hearing the singer. However, the contribution of visual information was relatively small compared to what is usually found for speech. In the current experiments, our goal was to determine why the face appears to contribute less when aligned with sung lyrics than when aligned with normal speech presented in noise. The first experiment compared the contribution of the talking head with the originally sung lyrics versus the case when it was aligned with the Festival text-to-speech synthesis (TtS) spoken at the original duration of the song’s lyrics. A small and similar influence of the face was found in both conditions. In three further experiments, we compared the presence of the face when the durations of the TtS were equated with the duration of the original musical lyrics to the case when the lyrics were read with typical TtS durations and this speech was embedded in noise. The results indicated that the unusual temporally distorted durations of musical lyrics decrease the contribution of the visible speech from the face.
  • Massaro, D. W., & Perlman, M. (2017). Quantifying iconicity’s contribution during language acquisition: Implications for vocabulary learning. Frontiers in Communication, 2: 4. doi:10.3389/fcomm.2017.00004.

    Abstract

    Previous research found that iconicity—the motivated correspondence between word form and meaning—contributes to expressive vocabulary acquisition. We present two new experiments with two different databases and with novel analyses to give a detailed quantification of how iconicity contributes to vocabulary acquisition across development, including both receptive understanding and production. The results demonstrate that iconicity is more prevalent early in acquisition and diminishes with increasing age and with increasing vocabulary. In the first experiment, we found that the influence of iconicity on children’s production vocabulary decreased gradually with increasing age. These effects were independent of the observed influence of concreteness, difficulty of articulation, and parental input frequency. Importantly, we substantiated the independence of iconicity, concreteness, and systematicity—a statistical regularity between sounds and meanings. In the second experiment, we found that the average iconicity of both a child’s receptive vocabulary and expressive vocabulary diminished dramatically with increases in vocabulary size. These results indicate that iconic words tend to be learned early in the acquisition of both receptive vocabulary and expressive vocabulary. We recommend that iconicity be included as one of the many different influences on a child’s early vocabulary acquisition.
  • McLaughlin, R. L., Schijven, D., Van Rheenen, W., Van Eijk, K. R., O’Brien, M., Project MinE GWAS Consortium, Schizophrenia Working Group of the Psychiatric Genomics Consortium, Kahn, R. S., Ophoff, R. A., Goris, A., Bradley, D. G., Al-Chalabi, A., van den Berg, L. H., Luykx, J. J., Hardiman, O., & Veldink, J. H. (2017). Genetic correlation between amyotrophic lateral sclerosis and schizophrenia. Nature Communications, 8: 14774. doi:10.1038/ncomms14774.

    Abstract

    We have previously shown higher-than-expected rates of schizophrenia in relatives of patients with amyotrophic lateral sclerosis (ALS), suggesting an aetiological relationship between the diseases. Here, we investigate the genetic relationship between ALS and schizophrenia using genome-wide association study data from over 100,000 unique individuals. Using linkage disequilibrium score regression, we estimate the genetic correlation between ALS and schizophrenia to be 14.3% (7.05–21.6; P=1 × 10−4) with schizophrenia polygenic risk scores explaining up to 0.12% of the variance in ALS (P=8.4 × 10−7). A modest increase in comorbidity of ALS and schizophrenia is expected given these findings (odds ratio 1.08–1.26) but this would require very large studies to observe epidemiologically. We identify five potential novel ALS-associated loci using conditional false discovery rate analysis. It is likely that shared neurobiological mechanisms between these two disorders will engender novel hypotheses in future preclinical and clinical studies.
  • McQueen, J. M., Cutler, A., & Norris, D. (2006). Phonological abstraction in the mental lexicon. Cognitive Science, 30(6), 1113-1126. doi:10.1207/s15516709cog0000_79.

    Abstract

    A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). The dynamic nature of speech perception. Language and Speech, 49(1), 101-112.

    Abstract

    The speech perception system must be flexible in responding to the variability in speech sounds caused by differences among speakers and by language change over the lifespan of the listener. Indeed, listeners use lexical knowledge to retune perception of novel speech (Norris, McQueen, & Cutler, 2003). In that study, Dutch listeners made lexical decisions to spoken stimuli, including words with an ambiguous fricative (between [f] and [s]), in either [f]- or [s]-biased lexical contexts. In a subsequent categorization test, the former group of listeners identified more sounds on an [εf] - [εs] continuum as [f] than the latter group. In the present experiment, listeners received the same exposure and test stimuli, but did not make lexical decisions to the exposure items. Instead, they counted them. Categorization results were indistinguishable from those obtained earlier. These adjustments in fricative perception therefore do not depend on explicit judgments during exposure. This learning effect thus reflects automatic retuning of the interpretation of acoustic-phonetic information.
  • McQueen, J. M., Eisner, F., & Norris, D. (2016). When brain regions talk to each other during speech processing, what are they talking about? Commentary on Gow and Olson (2015). Language, Cognition and Neuroscience, 31(7), 860-863. doi:10.1080/23273798.2016.1154975.

    Abstract

    This commentary on Gow and Olson [2015. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data. Language, Cognition and Neuroscience. doi:10.1080/23273798.2015.1029498] questions in three ways their conclusion that speech perception is based on interactive processing. First, it is not clear that the data presented by Gow and Olson reflect normal speech recognition. Second, Gow and Olson's conclusion depends on still-debated assumptions about the functions performed by specific brain regions. Third, the results are compatible with feedforward models of speech perception and appear inconsistent with models in which there are online interactions about phonological content. We suggest that progress in the neuroscience of speech perception requires the generation of testable hypotheses about the function(s) performed by inter-regional connections.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). Are there really interactive processes in speech perception? Trends in Cognitive Sciences, 10(12), 533-533. doi:10.1016/j.tics.2006.10.004.
  • McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.

    Abstract

    The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception.
  • Mead, S., Poulter, M., Uphill, J., Beck, J., Whitfield, J., Webb, T. E., Campbell, T., Adamson, G., Deriziotis, P., Tabrizi, S. J., Hummerich, H., Verzilli, C., Alpers, M. P., Whittaker, J. C., & Collinge, J. (2009). Genetic risk factors for variant Creutzfeldt-Jakob disease: A genome-wide association study. Lancet Neurology, 8(1), 57-66. doi:10.1016/S1474-4422(08)70265-5.

    Abstract

    BACKGROUND: Human and animal prion diseases are under genetic control, but apart from PRNP (the gene that encodes the prion protein), we understand little about human susceptibility to bovine spongiform encephalopathy (BSE) prions, the causal agent of variant Creutzfeldt-Jakob disease (vCJD). METHODS: We did a genome-wide association study of the risk of vCJD and tested for replication of our findings in samples from many categories of human prion disease (929 samples) and control samples from the UK and Papua New Guinea (4254 samples), including controls in the UK who were genotyped by the Wellcome Trust Case Control Consortium. We also did follow-up analyses of the genetic control of the clinical phenotype of prion disease and analysed candidate gene expression in a mouse cellular model of prion infection. FINDINGS: The PRNP locus was strongly associated with risk across several markers and all categories of prion disease (best single SNP [single nucleotide polymorphism] association in vCJD p=2.5 x 10(-17); best haplotypic association in vCJD p=1 x 10(-24)). Although the main contribution to disease risk was conferred by PRNP polymorphic codon 129, another nearby SNP conferred increased risk of vCJD. In addition to PRNP, one technically validated SNP association upstream of RARB (the gene that encodes retinoic acid receptor beta) had nominal genome-wide significance (p=1.9 x 10(-7)). A similar association was found in a small sample of patients with iatrogenic CJD (p=0.030) but not in patients with sporadic CJD (sCJD) or kuru. In cultured cells, retinoic acid regulates the expression of the prion protein. We found an association with acquired prion disease, including vCJD (p=5.6 x 10(-5)), kuru incubation time (p=0.017), and resistance to kuru (p=2.5 x 10(-4)), in a region upstream of STMN2 (the gene that encodes SCG10). The risk genotype was not associated with sCJD but conferred an earlier age of onset. Furthermore, expression of Stmn2 was reduced 30-fold post-infection in a mouse cellular model of prion disease. INTERPRETATION: The polymorphic codon 129 of PRNP was the main genetic risk factor for vCJD; however, additional candidate loci have been identified, which justifies functional analyses of these biological pathways in prion disease.
  • Menenti, L. (2006). L2-L1 word association in bilinguals: Direct evidence. Nijmegen CNS, 1, 17-24.

    Abstract

    The Revised Hierarchical Model (Kroll and Stewart, 1994) assumes that words in a bilingual’s languages have separate word form representations but shared conceptual representations. Two routes lead from an L2 word form to its conceptual representation: the word association route, where concepts are accessed through the corresponding L1 word form, and the concept mediation route, with direct access from L2 to concepts. To investigate word association, we presented proficient late German-Dutch bilinguals with L2 non-cognate word pairs in which the L1 translation of the first word rhymed with the second word (e.g. GRAP (joke) – Witz – FIETS (bike)). If the first word in a pair activated its L1 equivalent, then a phonological priming effect on the second word was expected. Priming was observed in lexical decision but not in semantic decision (living/non-living) on L2 words. In a control group of Dutch native speakers, no priming effect was found. This suggests that proficient bilinguals still make use of their L1 word form lexicon to process L2 in lexical decision.
  • Menenti, L., Petersson, K. M., Scheeringa, R., & Hagoort, P. (2009). When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of Cognitive Neuroscience, 21, 2358-2368. doi:10.1162/jocn.2008.21163.

    Abstract

    Both local discourse and world knowledge are known to influence sentence processing. We investigated how these two sources of information conspire in language comprehension. Two types of critical sentences, correct and world knowledge anomalies, were preceded by either a neutral or a local context. The latter made the world knowledge anomalies more acceptable or plausible. We predicted that the effect of world knowledge anomalies would be weaker for the local context. World knowledge effects have previously been observed in the left inferior frontal region (Brodmann's area 45/47). In the current study, an effect of world knowledge was present in this region in the neutral context. We also observed an effect in the right inferior frontal gyrus, which was more sensitive to the discourse manipulation than the left inferior frontal gyrus. In addition, the left angular gyrus reacted strongly to the degree of discourse coherence between the context and critical sentence. Overall, both world knowledge and the discourse context affect the process of meaning unification, but do so by recruiting partly different sets of brain areas.
  • Menks, W. M., Furger, R., Lenz, C., Fehlbaum, L. V., Stadler, C., & Raschle, N. M. (2017). Microstructural white matter alterations in the corpus callosum of girls with conduct disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 56, 258-265. doi:10.1016/j.jaac.2016.12.006.

    Abstract

    Objective

    Diffusion tensor imaging (DTI) studies in adolescent conduct disorder (CD) have demonstrated white matter alterations of tracts connecting functionally distinct fronto-limbic regions, but only in boys or mixed-gender samples. So far, no study has investigated white matter integrity in girls with CD on a whole-brain level. Therefore, our aim was to investigate white matter alterations in adolescent girls with CD.
    Method

    We collected high-resolution DTI data from 24 girls with CD and 20 typically developing control girls using a 3T magnetic resonance imaging system. Fractional anisotropy (FA) and mean diffusivity (MD) were analyzed for whole-brain as well as a priori-defined regions of interest, while controlling for age and intelligence, using a voxel-based analysis and an age-appropriate customized template.
    Results

    Whole-brain findings revealed white matter alterations (i.e., increased FA) in girls with CD bilaterally within the body of the corpus callosum, expanding toward the right cingulum and left corona radiata. The FA and MD results in a priori-defined regions of interest were more widespread and included changes in the cingulum, corona radiata, fornix, and uncinate fasciculus. These results were not driven by age, intelligence, or attention-deficit/hyperactivity disorder comorbidity.
    Conclusion

    This report provides the first evidence of white matter alterations in female adolescents with CD as indicated through white matter reductions in callosal tracts. This finding enhances current knowledge about the neuropathological basis of female CD. An increased understanding of gender-specific neuronal characteristics in CD may influence diagnosis, early detection, and successful intervention strategies.
  • Menon, S., Rosenberg, K., Graham, S. A., Ward, E. M., Taylor, M. E., Drickamer, K., & Leckband, D. E. (2009). Binding-site geometry and flexibility in DC-SIGN demonstrated with surface force measurements. Proceedings of the National Academy of Sciences of the United States of America, 106, 11524-11529. doi:10.1073/pnas.0901783106.

    Abstract

    The dendritic cell receptor DC-SIGN mediates pathogen recognition by binding to glycans characteristic of pathogen surfaces, including those found on HIV. Clustering of carbohydrate-binding sites in the receptor tetramer is believed to be critical for targeting of pathogen glycans, but the arrangement of these sites remains poorly understood. Surface force measurements between apposed lipid bilayers displaying the extracellular domain of DC-SIGN and a neoglycolipid bearing an oligosaccharide ligand provide evidence that the receptor is in an extended conformation and that glycan docking is associated with a conformational change that repositions the carbohydrate-recognition domains during ligand binding. The results further show that the lateral mobility of membrane-bound ligands enhances the engagement of multiple carbohydrate-recognition domains in the receptor oligomer with appropriately spaced ligands. These studies highlight differences between pathogen targeting by DC-SIGN and receptors in which binding sites at fixed spacing bind to simple molecular patterns.

    Additional information

    Menon_2009_Supporting_Information.pdf
  • Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016). Same, different, or closely related: What is the relationship between language production and comprehension? Journal of Memory and Language, 89, 1-7. doi:10.1016/j.jml.2016.03.002.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.
  • Meyer, A. S., & Wheeldon, L. (Eds.). (2006). Language production across the life span [Special Issue]. Language and Cognitive Processes, 21(1-3).
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S., & Gerakaki, S. (2017). The art of conversation: Why it’s harder than you might think. Contact Magazine, 43(2), 11-15. Retrieved from http://contact.teslontario.org/the-art-of-conversation-why-its-harder-than-you-might-think/.
  • Meyer, A. S. (2017). Structural priming is not a Royal Road to representations. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40, e305. doi:10.1017/S0140525X1700053X.

    Abstract

    Branigan & Pickering (B&P) propose that the structural priming paradigm is a Royal Road to linguistic representations of any kind, unobstructed by influences of psychological processes. In my view, however, they are too optimistic about the versatility of the paradigm and, more importantly, its ability to provide direct evidence about the nature of stored linguistic representations.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Michalareas, G., Vezoli, J., Van Pelt, S., Schoffelen, J.-M., Kennedy, H., & Fries, P. (2016). Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron, 89(2), 384-397. doi:10.1016/j.neuron.2015.12.018.

    Abstract

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and we correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral- and dorsal-stream visual areas are differentially affected by inter-areal influences in the alpha-beta band.
  • Middeldorp, C. M., Hammerschlag, A. R., Ouwens, K. G., Groen-Blokhuis, M. M., St Pourcain, B., Greven, C. U., Pappa, I., Tiesler, C. M. T., Ang, W., Nolte, I. M., Vilor-Tejedor, N., Bacelis, J., Ebejer, J. L., Zhao, H., Davies, G. E., Ehli, E. A., Evans, D. M., Fedko, I. O., Guxens, M., Hottenga, J.-J., Hudziak, J. J., Jugessur, A., Kemp, J. P., Krapohl, E., Martin, N. G., Murcia, M., Myhre, R., Ormel, J., Ring, S. M., Standl, M., Stergiakouli, E., Stoltenberg, C., Thiering, E., Timpson, N. J., Trzaskowski, M., van der Most, P. J., Wang, C., EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, Psychiatric Genomics Consortium ADHD Working Group, Nyholt, D. R., Medland, S. E., Neale, B., Jacobsson, B., Sunyer, J., Hartman, C. A., Whitehouse, A. J. O., Pennell, C. E., Heinrich, J., Plomin, R., Smith, G. D., Tiemeier, H., Posthuma, D., & Boomsma, D. I. (2016). A Genome-Wide Association Meta-Analysis of Attention-Deficit/Hyperactivity Disorder Symptoms in Population-Based Paediatric Cohorts. Journal of the American Academy of Child & Adolescent Psychiatry, 55(10), 896-905. doi:10.1016/j.jaac.2016.05.025.

    Abstract

    Objective

    To elucidate the influence of common genetic variants on childhood attention-deficit/hyperactivity disorder (ADHD) symptoms, to identify genetic variants that explain its high heritability, and to investigate the genetic overlap of ADHD symptom scores with ADHD diagnosis.
    Method

    Within the EArly Genetics and Lifecourse Epidemiology (EAGLE) consortium, genome-wide single nucleotide polymorphisms (SNPs) and ADHD symptom scores were available for 17,666 children (< 13 years) from nine population-based cohorts. SNP-based heritability was estimated in data from the three largest cohorts. Meta-analysis based on genome-wide association (GWA) analyses with SNPs was followed by gene-based association tests, and the overlap in results with a meta-analysis in the Psychiatric Genomics Consortium (PGC) case-control ADHD study was investigated.
    Results

    SNP-based heritability ranged from 5% to 34%, indicating that variation in common genetic variants influences ADHD symptom scores. The meta-analysis did not detect genome-wide significant SNPs, but three genes, lying close to each other with SNPs in high linkage disequilibrium (LD), showed a gene-wide significant association (p values between 1.46 x 10(-6) and 2.66 x 10(-6)). One gene, WASL, is involved in neuronal development. Both SNP- and gene-based analyses indicated overlap with the PGC meta-analysis results, with the genetic correlation estimated at 0.96.
    Conclusion

    The SNP-based heritability for ADHD symptom scores indicates a polygenic architecture, and genes involved in neurite outgrowth are possibly involved. Continuous and dichotomous measures of ADHD appear to assess a genetically common phenotype. A next step is to combine data from population-based and case-control cohorts in genetic association studies to increase sample size and improve statistical power for identifying genetic variants.
  • Mitterer, H. (2006). On the causes of compensation for coarticulation: Evidence for phonological mediation. Perception & Psychophysics, 68(7), 1227-1240.

    Abstract

    This study examined whether compensation for coarticulation in fricative–vowel syllables is phonologically mediated or a consequence of auditory processes. Smits (2001a) had shown that compensation occurs for anticipatory lip rounding in a fricative caused by a following rounded vowel in Dutch. In a first experiment, the possibility that compensation is due to general auditory processing was investigated using nonspeech sounds. These did not cause context effects akin to compensation for coarticulation, although nonspeech sounds influenced speech sound identification in an integrative fashion. In a second experiment, a possible phonological basis for compensation for coarticulation was assessed by using audiovisual speech. Visual displays, which induced the perception of a rounded vowel, also influenced compensation for anticipatory lip rounding in the fricative. These results indicate that compensation for anticipatory lip rounding in fricative–vowel syllables is phonologically mediated. This result is discussed in the light of other compensation-for-coarticulation findings and general theories of speech perception.
  • Mitterer, H., Csépe, V., & Blomert, L. (2006). The role of perceptual integration in the recognition of assimilated word forms. Quarterly Journal of Experimental Psychology, 59(8), 1395-1424. doi:10.1080/17470210500198726.

    Abstract

    We investigated how spoken words are recognized when they have been altered by phonological assimilation. Previous research has shown that there is a process of perceptual compensation for phonological assimilations. Three recently formulated proposals regarding the mechanisms for compensation for assimilation make different predictions with regard to the level at which compensation is supposed to occur as well as regarding the role of specific language experience. In the present study, Hungarian words and nonwords, in which a viable and an unviable liquid assimilation was applied, were presented to Hungarian and Dutch listeners in an identification task and a discrimination task. Results indicate that viably changed forms are difficult to distinguish from canonical forms independent of experience with the assimilation rule applied in the utterances. This reveals that auditory processing contributes to perceptual compensation for assimilation, while language experience has only a minor role to play when identification is required.
  • Mitterer, H., Csépe, V., Honbolygo, F., & Blomert, L. (2006). The recognition of phonologically assimilated words does not depend on specific language experience. Cognitive Science, 30(3), 451-479. doi:10.1207/s15516709cog0000_57.

    Abstract

    In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/→[leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic–phonological theories that assume that speech production is influenced by speech-perception abilities.
  • Mitterer, H. (2006). Is vowel normalization independent of lexical processing? Phonetica, 63(4), 209-229. doi:10.1159/000097306.

    Abstract

    Vowel normalization in speech perception was investigated in three experiments. The range of the second formant in a carrier phrase was manipulated and this affected the perception of a target vowel in a compensatory fashion: A low F2 range in the carrier phrase made it more likely that the target vowel was perceived as a front vowel, that is, with a high F2. Recent experiments indicated that this effect might be moderated by the lexical status of the constituents of the carrier phrase. Manipulation of the lexical status in the present experiments, however, did not affect vowel normalization. In contrast, the range of vowels in the carrier phrase did influence vowel normalization. If the carrier phrase consisted of mid-to-high front vowels only, vowel categories shifted only for mid-to-high front vowels. It is argued that these results are a challenge for episodic models of word recognition.
  • Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers reduce: Evidence from /t/-lenition in Dutch. Journal of Phonetics, 34(1), 73-103. doi:10.1016/j.wocn.2005.03.003.

    Abstract

    In everyday speech, words may be reduced. Little is known about the consequences of such reductions for spoken word comprehension. This study investigated /t/-lenition in Dutch in two corpus studies and three perceptual experiments. The production studies revealed that /t/-lenition is most likely to occur after [s] and before bilabial consonants. The perception experiments showed that listeners take into account phonological context, phonetic detail, and the lexical status of the form in the interpretation of codas that may or may not contain a lenited word-final /t/. These results speak against models of word recognition that make hard decisions on a prelexical level.
  • Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), e7785. doi:10.1371/journal.pone.0007785.

    Abstract

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
  • Mitterer, H., & McQueen, J. M. (2009). Processing reduced word-forms in speech perception using probabilistic knowledge about speech production. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 244-263. doi:10.1037/a0012730.

    Abstract

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes.
  • Mitterer, H., Horschig, J. M., Müsseler, J., & Majid, A. (2009). The influence of memory on perception: It's not what things look like, it's what you call them. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(6), 1557-1562. doi:10.1037/a0017019.

    Abstract

    World knowledge influences how we perceive the world. This study shows that this influence is at least partly mediated by declarative memory. Dutch and German participants categorized hues from a yellow-to-orange continuum on stimuli that were prototypically orange or yellow and that were also associated with these color labels. Both groups gave more “yellow” responses if an ambiguous hue occurred on a prototypically yellow stimulus. The language groups were also tested on a stimulus (traffic light) that is associated with the label orange in Dutch and with the label yellow in German, even though the objective color is the same for both populations. Dutch observers categorized this stimulus as orange more often than German observers, in line with the assumption that declarative knowledge mediates the influence of world knowledge on color categorization.
  • Moers, C., Meyer, A. S., & Janse, E. (2017). Effects of word frequency and transitional probability on word reading durations of younger and older speakers. Language and Speech, 60(2), 289-317. doi:10.1177/0023830916649215.

    Abstract

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups–younger children (8–12 years), adolescents (12–18 years) and older (62–95 years) Dutch speakers–show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
  • Moisik, S. R., & Dediu, D. (2017). Anatomical biasing and clicks: Evidence from biomechanical modeling. Journal of Language Evolution, 2(1), 37-51. doi:10.1093/jole/lzx004.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modeling and experimental research is required to solidify the claim.

    Additional information

    lzx004_Supp.zip
  • Moisik, S. R., & Gick, B. (2017). The quantal larynx: The stable regions of laryngeal biomechanics and implications for speech production. Journal of Speech, Language, and Hearing Research, 60, 540-560. doi:10.1044/2016_JSLHR-S-16-0019.

    Abstract

    Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal–ventricular stop, and aryepiglotto–epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with “quantal” or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory–biomechanical grounding of numerous phonetic and phonological phenomena.
  • Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21-34. doi:10.1111/tops.12239.

    Abstract

    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning from cross‐situational, multimodal information was constructed and tested. Key to the model's robustness was the presence of multiple, individually unreliable information sources to support learning. This “degeneracy” in the language system has a detrimental effect on learning, compared to a noise‐free environment, but has a critically important effect on acquisition of a canalized system that is resistant to environmental noise in communication.
  • Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(S1), 14-39. doi:10.1111/lang.12221.

    Abstract

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language exposure has revolutionized the field. New techniques enable more precise measurements of children's actual language input, and these corpora constrain computational and cognitive theories of language development, which can then generate predictions about learning behavior. We describe several instances where corpus, computational, and experimental work have been productively combined to uncover the first language acquisition process and the richness of multimodal properties of the environment, highlighting how these methods can be extended to address related issues in second language research. Finally, we outline some of the difficulties that can be encountered when applying multimethod approaches and show how these difficulties can be obviated.
  • Monaghan, P., Chang, Y.-N., Welbourne, S., & Brysbaert, M. (2017). Exploring the relations between word frequency, language exposure, and bilingualism in a computational model of reading. Journal of Memory and Language, 93, 1-27. doi:10.1016/j.jml.2016.08.003.

    Abstract

    Individuals show differences in the extent to which psycholinguistic variables predict their responses for lexical processing tasks. A key variable accounting for much variance in lexical processing is frequency, but the size of the frequency effect has been demonstrated to reduce as a consequence of the individual’s vocabulary size. Using a connectionist computational implementation of the triangle model on a large set of English words, where orthographic, phonological, and semantic representations interact during processing, we show that the model demonstrates a reduced frequency effect as a consequence of amount of exposure to the language, a variable that was also a cause of greater vocabulary size in the model. The model was also trained to learn a second language, Dutch, and replicated behavioural observations that increased proficiency in a second language resulted in reduced frequency effects for that language but increased frequency effects in the first language. The model provides a first step to demonstrating causal relations between psycholinguistic variables in a model of individual differences in lexical processing, and the effect of bilingualism on interacting variables within the language processing system.
  • Mongelli, V., Dehaene, S., Vinckier, F., Peretz, I., Bartolomeo, P., & Cohen, L. (2017). Music and words in the visual cortex: The impact of musical expertise. Cortex, 86, 260-274. doi:10.1016/j.cortex.2016.05.016.

    Abstract

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
  • Montero-Melis, G., & Bylund, E. (2017). Getting the ball rolling: the cross-linguistic conceptualization of caused motion. Language and Cognition, 9(3), 446–472. doi:10.1017/langcog.2016.22.

    Abstract

    Does the way we talk about events correspond to how we conceptualize them? Three experiments (N = 135) examined how Spanish and Swedish native speakers judge event similarity in the domain of caused motion (‘He rolled the tyre into the barn’). Spanish and Swedish motion descriptions regularly encode path (‘into’), but differ in how systematically they include manner information (‘roll’). We designed a similarity arrangement task which allowed participants to give varying weights to different dimensions when gauging event similarity. The three experiments progressively reduced the likelihood that speakers were using language to solve the task. We found that, as long as the use of language was possible (Experiments 1 and 2), Swedish speakers were more likely than Spanish speakers to base their similarity arrangements on object manner (rolling/sliding). However, when recruitment of language was hindered through verbal interference, cross-linguistic differences disappeared (Experiment 3). A compound analysis of all experiments further showed that (i) cross-linguistic differences were played out against a backdrop of commonly represented event components, and (ii) describing vs. not describing the events did not augment cross-linguistic differences, but instead had similar effects across languages. We interpret these findings as suggesting a dynamic role of language in event conceptualization.
  • Montero-Melis, G., Eisenbeiss, S., Narasimhan, B., Ibarretxe-Antuñano, I., Kita, S., Kopecka, A., Lüpke, F., Nikitina, T., Tragel, I., Jaeger, T. F., & Bohnemeyer, J. (2017). Satellite- vs. Verb-Framing Underpredicts Nonverbal Motion Categorization: Insights from a Large Language Sample and Simulations. Cognitive Semantics, 3(1), 36-61. doi:10.1163/23526416-00301002.

    Abstract

    Is motion cognition influenced by the large-scale typological patterns proposed in Talmy’s (2000) two-way distinction between verb-framed (V) and satellite-framed (S) languages? Previous studies investigating this question have been limited to comparing two or three languages at a time and have come to conflicting results. We present the largest cross-linguistic study on this question to date, drawing on data from nineteen genealogically diverse languages, all investigated in the same behavioral paradigm and using the same stimuli. After controlling for the different dependencies in the data by means of multilevel regression models, we find no evidence that S- vs. V-framing affects nonverbal categorization of motion events. At the same time, statistical simulations suggest that our study and previous work within the same behavioral paradigm suffer from insufficient statistical power. We discuss these findings in the light of the great variability between participants, which suggests flexibility in motion representation. Furthermore, we discuss the importance of accounting for language variability, something which can only be achieved with large cross-linguistic samples.
  • Montero-Melis, G., Jaeger, T. F., & Bylund, E. (2016). Thinking is modulated by recent linguistic experience: Second language priming affects perceived event similarity. Language Learning, 66(3), 636-665. doi:10.1111/lang.12172.

    Abstract

    Can recent second language (L2) exposure affect what we judge to be similar events? Using a priming paradigm, we manipulated whether native Swedish adult learners of L2 Spanish were primed to use path or manner during L2 descriptions of scenes depicting caused motion events (encoding phase). Subsequently, participants engaged in a nonverbal task, arranging events on the screen according to similarity (test phase). Path versus manner priming affected how participants judged event similarity during the test phase. The effects we find support the hypotheses that (a) speakers create or select ad hoc conceptual categories that are based on linguistic knowledge to carry out nonverbal tasks, and that (b) short-term, recent L2 experience can affect this ad hoc process. These findings further suggest that cognition can flexibly draw on linguistic categories that have been implicitly highlighted during recent exposure.
  • Li, S., Morley, M., Lu, M., Zhou, S., Stewart, K., French, C. A., Tucker, H. O., Fisher, S. E., & Morrisey, E. E. (2016). Foxp transcription factors suppress a non-pulmonary gene expression program to permit proper lung development. Developmental Biology, 416(2), 338-346. doi:10.1016/j.ydbio.2016.06.020.

    Abstract

    The inhibitory mechanisms that prevent gene expression programs of one tissue from being expressed in another are poorly understood. Foxp1/2/4 are forkhead transcription factors that repress gene expression and are individually important for endoderm development. We show that combined loss of all three Foxp1/2/4 family members in the developing anterior foregut endoderm leads to a loss of lung endoderm lineage commitment and subsequent development. Foxp1/2/4 deficient lungs express high levels of transcriptional regulators not normally expressed in the developing lung, including Pax2, Pax8, Pax9 and the Hoxa9-13 cluster. Ectopic expression of these transcriptional regulators is accompanied by decreased expression of lung restricted transcription factors including Nkx2-1, Sox2, and Sox9. Foxp1 binds to conserved forkhead DNA binding sites within the Hoxa9-13 cluster, indicating a direct repression mechanism. Thus, Foxp1/2/4 are essential for promoting lung endoderm development by repressing expression of non-pulmonary transcription factors.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2006). Age-related effects on speech production: A review. Language and Cognitive Processes, 21, 238-290. doi:10.1080/01690960444000278.

    Abstract

    In discourse, older adults tend to be more verbose and more disfluent than young adults, especially when the task is difficult and when it places few constraints on the content of the utterance. This may be due to (a) language-specific deficits in planning the content and syntactic structure of utterances or in selecting and retrieving words from the mental lexicon, (b) a general deficit in inhibiting irrelevant information, or (c) the selection of a specific speech style. The possibility that older adults have a deficit in lexical retrieval is supported by the results of picture naming studies, in which older adults have been found to name objects less accurately and more slowly than young adults, and by the results of definition naming studies, in which older adults have been found to experience more tip-of-the-tongue (TOT) states than young adults. The available evidence suggests that these age differences are largely due to weakening of the connections linking word lemmas to phonological word forms, though adults above 70 years of age may have an additional deficit in lemma selection.
  • Müller, O., & Hagoort, P. (2006). Access to lexical information in language comprehension: Semantics before syntax. Journal of Cognitive Neuroscience, 18(1), 84-96. doi:10.1162/089892906775249997.

    Abstract

    The recognition of a word makes available its semantic and syntactic properties. Using electrophysiological recordings, we investigated whether one set of these properties is available earlier than the other set. Dutch participants saw nouns on a computer screen and performed push-button responses: In one task, grammatical gender determined response hand (left/right) and semantic category determined response execution (go/no-go). In the other task, response hand depended on semantic category, whereas response execution depended on gender. During the latter task, response preparation occurred on no-go trials, as measured by the lateralized readiness potential: Semantic information was used for response preparation before gender information inhibited this process. Furthermore, an inhibition-related N2 effect occurred earlier for inhibition by semantics than for inhibition by gender. In summary, electrophysiological measures of both response preparation and inhibition indicated that the semantic word property was available earlier than the syntactic word property when participants read single words.
  • Murakami, S., Verdonschot, R. G., Kataoka, M., Kakimoto, N., Shimamoto, H., & Kreiborg, S. (2016). A standardized evaluation of artefacts from metallic compounds during fast MR imaging. Dentomaxillofacial Radiology, 45(8): 20160094. doi:10.1259/dmfr.20160094.

    Abstract

    Objectives: Metallic compounds present in the oral and maxillofacial regions (OMRs) cause large artefacts during MR scanning. We quantitatively assessed these artefacts embedded within a phantom according to standards set by the American Society for Testing and Materials (ASTM).
    Methods: Seven metallic dental materials (each of which was a 10-mm³ cube embedded within a phantom) were scanned [i.e. aluminium (Al), silver alloy (Ag), type IV gold alloy (Au), gold-palladium-silver alloy (Au-Pd-Ag), titanium (Ti), nickel-chromium alloy (NC) and cobalt-chromium alloy (CC)] and compared with a reference image. Sequences included gradient echo (GRE), fast spin echo (FSE), gradient recalled acquisition in steady state (GRASS), a spoiled GRASS (SPGR), a fast SPGR (FSPGR), fast imaging employing steady state (FIESTA) and echo planar imaging (EPI; axial/sagittal planes). Artefact areas were determined according to the ASTM-F2119 standard, and artefact volumes were assessed using OsiriX MD software (Pixmeo, Geneva, Switzerland).
    Results: Tukey-Kramer post hoc tests were used for statistical comparisons. For most materials, scanning sequences elicited artefact volumes in the following (ascending) order: FSE-T1/FSE-T2 < FSPGR/SPGR < GRASS/GRE < FIESTA < EPI. For all scanning sequences, artefact volumes containing Au, Al, Ag and Au-Pd-Ag were significantly smaller than other materials (in which artefact volume size increased, respectively, from Ti < NC < CC). The artefact-specific shape (elicited by the cubic sample) depended on the scanning plane (i.e. a circular pattern for the axial plane and a "clover-like" pattern for the sagittal plane).
    Conclusions: The availability of standardized information on artefact size and configuration during MRI will enhance diagnosis when faced with metallic compounds in the OMR.
  • Murakami, S., Verdonschot, R. G., Kakimoto, N., Sumida, I., Fujiwara, M., Ogawa, K., & Furukawa, S. (2016). Preventing complications from high-dose rate brachytherapy when treating mobile tongue cancer via the application of a modular lead-lined spacer. PLoS One, 11(4): e0154226. doi:10.1371/journal.pone.0154226.

    Abstract

    Purpose
    To point out the advantages and drawbacks of high-dose rate brachytherapy in the treatment of mobile tongue cancer and indicate the clinical importance of modular lead-lined spacers when applying this technique to patients.
    Methods
    First, all basic steps to construct the modular spacer are shown. Second, we simulate and evaluate the dose rate reduction for a wide range of spacer configurations.
    Results
    With increasing distance to the source, absorbed doses dropped considerably. Significantly more shielding was obtained when lead was added to the spacer, and this effect was most pronounced at shorter (i.e. more clinically relevant) distances to the source.
    Conclusions
    The modular spacer represents an important addition to the planning and treatment stages of mobile tongue cancer using HDR-ISBT.

    Additional information

    tables
  • Murakami, S., Verdonschot, R. G., Kreiborg, S., Kakimoto, N., & Kawaguchi, A. (2017). Stereoscopy in dental education: An investigation. Journal of Dental Education, 81(4), 450-457. doi:10.21815/JDE.016.002.

    Abstract

    The aim of this study was to investigate whether stereoscopy can play a meaningful role in dental education. The study used an anaglyph technique in which two images were presented separately to the left and right eyes (using red/cyan filters), which, combined in the brain, give enhanced depth perception. A positional judgment task was performed to assess whether the use of stereoscopy would enhance depth perception among dental students at Osaka University in Japan. Subsequently, the optimum angle was evaluated to obtain maximum ability to discriminate among complex anatomical structures. Finally, students completed a questionnaire on a range of matters concerning their experience with stereoscopic images including their views on using stereoscopy in their future careers. The results showed that the students who used stereoscopy were better able than students who did not to appreciate spatial relationships between structures when judging relative positions. The maximum ability to discriminate among complex anatomical structures was between 2 and 6 degrees. The students' overall experience with the technique was positive, and although most did not have a clear vision for stereoscopy in their own practice, they did recognize its merits for education. These results suggest that using stereoscopic images in dental education can be quite valuable as stereoscopy greatly helped these students' understanding of the spatial relationships in complex anatomical structures.
  • Murphy, S. K., Nolan, C. M., Huang, Z., Kucera, K. S., Freking, B. A., Smith, T. P., Leymaster, K. A., Weidman, J. R., & Jirtle, R. L. (2006). Callipyge mutation affects gene expression in cis: A potential role for chromatin structure. Genome Research, 16, 340-346. doi:10.1101/gr.4389306.

    Abstract

    Muscular hypertrophy in callipyge sheep results from a single nucleotide substitution located in the genomic interval between the imprinted Delta, Drosophila, Homolog-like 1 (DLK1) and Maternally Expressed Gene 3 (MEG3). The mechanism linking the mutation to muscle hypertrophy is unclear but involves DLK1 overexpression. The mutation is contained within CLPG1 transcripts produced from this region. Herein we show that CLPG1 is expressed prenatally in the hypertrophy-responsive longissimus dorsi muscle by all four possible genotypes, but postnatal expression is restricted to sheep carrying the mutation. Surprisingly, the mutation results in nonimprinted monoallelic transcription of CLPG1 from only the mutated allele in adult sheep, whereas it is expressed biallelically during prenatal development. We further demonstrate that local CpG methylation is altered by the presence of the mutation in longissimus dorsi of postnatal sheep. For 10 CpG sites flanking the mutation, methylation is similar prenatally across genotypes, but doubles postnatally in normal sheep. This normal postnatal increase in methylation is significantly repressed in sheep carrying one copy of the mutation, and repressed even further in sheep with two mutant alleles. The attenuation in methylation status in the callipyge sheep correlates with the onset of the phenotype, continued CLPG1 transcription, and high-level expression of DLK1. In contrast, normal sheep exhibit hypermethylation of this locus after birth and CLPG1 silencing, which coincides with DLK1 transcriptional repression. These data are consistent with the notion that the callipyge mutation inhibits perinatal nucleation of regional chromatin condensation resulting in continued elevated transcription of prenatal DLK1 levels in adult callipyge sheep. We propose a model incorporating these results that can also account for the enigmatic normal phenotype of homozygous mutant sheep.
  • Nakayama, M., Kinoshita, S., & Verdonschot, R. G. (2016). The emergence of a phoneme-sized unit in L2 speech production: Evidence from Japanese-English bilinguals. Frontiers in Psychology, 7: 175. doi:10.3389/fpsyg.2016.00175.

    Abstract

    Recent research has revealed that the way phonology is constructed during word production differs across languages. Dutch and English native speakers are suggested to incrementally insert phonemes into a metrical frame, whereas Mandarin Chinese speakers use syllables and Japanese speakers use a unit called the mora (often a CV cluster such as "ka" or "ki"). The present study is concerned with the question of how bilinguals construct phonology in their L2 when the phonological unit size differs from the unit in their L1. Japanese-English bilinguals of varying proficiency read aloud English words preceded by masked primes that overlapped in just the onset (e.g., bark-BENCH) or the onset plus vowel corresponding to the mora-sized unit (e.g., bell-BENCH). Low-proficient Japanese-English bilinguals showed CV priming but did not show onset priming, indicating that they use their L1 phonological unit when reading L2 English words. In contrast, high-proficient Japanese-English bilinguals showed significant onset priming. The size of the onset priming effect was correlated with the length of time spent in English-speaking countries, which suggests that extensive exposure to L2 phonology may play a key role in the emergence of a language-specific phonological unit in L2 word production.
  • Narasimhan, B., & Gullberg, M. (2006). Perspective-shifts in event descriptions in Tamil child language. Journal of Child Language, 33(1), 99-124. doi:10.1017/S0305000905007191.

    Abstract

    Children are able to take multiple perspectives in talking about entities and events. But the nature of children's sensitivities to the complex patterns of perspective-taking in adult language is unknown. We examine perspective-taking in four- and six-year-old Tamil-speaking children describing placement events, as reflected in the use of a general placement verb (veyyii ‘put’) versus two fine-grained caused posture expressions specifying orientation, either vertical (nikka veyyii ‘make stand’) or horizontal (paDka veyyii ‘make lie’). We also explore whether animacy systematically promotes shifts to a fine-grained perspective. The results show that four- and six-year-olds switch perspectives as flexibly and systematically as adults do. Animacy influences shifts to a fine-grained perspective similarly across age groups. However, unexpectedly, six-year-olds also display greater overall sensitivity to orientation, preferring the vertical over the horizontal caused posture expression. Despite early flexibility, the factors governing the patterns of perspective-taking on events are undergoing change even in later childhood, reminiscent of U-shaped semantic reorganizations observed in children's lexical knowledge. The present study points to the intriguing possibility that mechanisms that operate at the level of semantics could also influence subtle patterns of lexical choice and perspective-shifts.
  • Need, A. C., Ge, D., Weale, M. E., Maia, J., Feng, S., Heinzen, E. L., Shianna, K. V., Yoon, W., Kasperavičiūtė, D., Gennarelli, M., Strittmatter, W. J., Bonvicini, C., Rossi, G., Jayathilake, K., Cola, P. A., McEvoy, J. P., Keefe, R. S. E., Fisher, E. M. C., St. Jean, P. L., Giegling, I., Hartmann, A. M., Möller, H.-J., Ruppert, A., Fraser, G., Crombie, C., Middleton, L. T., St. Clair, D., Roses, A. D., Muglia, P., Francks, C., Rujescu, D., Meltzer, H. Y., & Goldstein, D. B. (2009). A genome-wide investigation of SNPs and CNVs in schizophrenia. PLoS Genetics, 5(2), e1000373. doi:10.1371/journal.pgen.1000373.

    Abstract

    We report a genome-wide assessment of single nucleotide polymorphisms (SNPs) and copy number variants (CNVs) in schizophrenia. We investigated SNPs using 871 patients and 863 controls, following up the top hits in four independent cohorts comprising 1,460 patients and 12,995 controls, all of European origin. We found no genome-wide significant associations, nor could we provide support for any previously reported candidate gene or genome-wide associations. We went on to examine CNVs using a subset of 1,013 cases and 1,084 controls of European ancestry, and a further set of 60 cases and 64 controls of African ancestry. We found that eight cases and zero controls carried deletions greater than 2 Mb, of which two, at 8p22 and 16p13.11-p12.4, are newly reported here. A further evaluation of 1,378 controls identified no deletions greater than 2 Mb, suggesting a high prior probability of disease involvement when such deletions are observed in cases. We also provide further evidence for some smaller, previously reported, schizophrenia-associated CNVs, such as those in NRXN1 and APBA2. We could not provide strong support for the hypothesis that schizophrenia patients have a significantly greater “load” of large (>100 kb), rare CNVs, nor could we find common CNVs that associate with schizophrenia. Finally, we did not provide support for the suggestion that schizophrenia-associated CNVs may preferentially disrupt genes in neurodevelopmental pathways. Collectively, these analyses provide the first integrated study of SNPs and CNVs in schizophrenia and support the emerging view that rare deleterious variants may be more important in schizophrenia predisposition than common polymorphisms. While our analyses do not suggest that implicated CNVs impinge on particular key pathways, we do support the contribution of specific genomic regions in schizophrenia, presumably due to recurrent mutation. On balance, these data suggest that very few schizophrenia patients share identical genomic causation, potentially complicating efforts to personalize treatment regimens.
  • Negwer, M., & Schubert, D. (2017). Talking convergence: Growing evidence links FOXP2 and retinoic acid in shaping speech-related motor circuitry. Frontiers in Neuroscience, 11: 19. doi:10.3389/fnins.2017.00019.

    Abstract

    A commentary on
    FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways

    by Devanna, P., Middelbeek, J., and Vernes, S. C. (2014). Front. Cell. Neurosci. 8:305. doi: 10.3389/fncel.2014.00305
  • Newbury, D. F., Winchester, L., Addis, L., Paracchini, S., Buckingham, L.-L., Clark, A., Cohen, W., Cowie, H., Dworzynski, K., Everitt, A., Goodyer, I. M., Hennessy, E., Kindley, A. D., Miller, L. L., Nasir, J., O'Hare, A., Shaw, D., Simkin, Z., Simonoff, E., Slonims, V., Watson, J., Ragoussis, J., Fisher, S. E., Seckl, J. R., Helms, P. J., Bolton, P. F., Pickles, A., Conti-Ramsden, G., Baird, G., Bishop, D. V., & Monaco, A. P. (2009). CMIP and ATP2C2 modulate phonological short-term memory in language impairment. American Journal of Human Genetics, 85(2), 264-272. doi:10.1016/j.ajhg.2009.07.004.

    Abstract

    Specific language impairment (SLI) is a common developmental disorder characterized by difficulties in language acquisition despite otherwise normal development and in the absence of any obvious explanatory factors. We performed a high-density screen of SLI1, a region of chromosome 16q that shows highly significant and consistent linkage to nonword repetition, a measure of phonological short-term memory that is commonly impaired in SLI. Using two independent language-impaired samples, one family-based (211 families) and another selected from a population cohort on the basis of extreme language measures (490 cases), we detected association to two genes in the SLI1 region: that encoding c-maf-inducing protein (CMIP, minP = 5.5 × 10−7 at rs6564903) and that encoding calcium-transporting ATPase, type2C, member2 (ATP2C2, minP = 2.0 × 10−5 at rs11860694). Regression modeling indicated that each of these loci exerts an independent effect upon nonword repetition ability. Despite the consistent findings in language-impaired samples, investigation in a large unselected cohort (n = 3612) did not detect association. We therefore propose that variants in CMIP and ATP2C2 act to modulate phonological short-term memory primarily in the context of language impairment. As such, this investigation supports the hypothesis that some causes of language impairment are distinct from factors that influence normal language variation. This work therefore implicates CMIP and ATP2C2 in the etiology of SLI and provides molecular evidence for the importance of phonological short-term memory in language acquisition.

    Additional information

    mmc1.pdf
  • Newman-Norlund, S. E., Noordzij, M. L., Newman-Norlund, R. D., Volman, I. A., De Ruiter, J. P., Hagoort, P., & Toni, I. (2009). Recipient design in tacit communication. Cognition, 111, 46-54. doi:10.1016/j.cognition.2008.12.004.

    Abstract

    The ability to design tailored messages for specific listeners is an important aspect of human communication. The present study investigates whether a mere belief about an addressee’s identity influences the generation and production of a communicative message in a novel, non-verbal communication task. Participants were made to believe they were playing a game with a child or an adult partner, while a confederate acted as both child and adult partners with matched performance and response times. The participants’ belief influenced their behavior: they spent longer when interacting with the presumed child addressee, but only during communicative portions of the game, i.e., using time as a tool to place emphasis on target information. This communicative adaptation attenuated with experience, and it was related to personality traits, namely Empathy and Need for Cognition measures. Overall, these findings indicate that novel nonverbal communicative interactions are selected according to a socio-centric perspective, and they are strongly influenced by participants’ traits.
  • Niccolai, V., Klepp, A., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Semantic discrimination impacts tDCS modulation of verb processing. Scientific Reports, 7: 17162. doi:10.1038/s41598-017-17326-w.

    Abstract

    Motor cortex activation observed during body-related verb processing hints at simulation accompanying linguistic understanding. By exploiting the up- and down-regulation that anodal and cathodal transcranial direct current stimulation (tDCS) exert on motor cortical excitability, we aimed at further characterizing the functional contribution of the motor system to linguistic processing. In a double-blind sham-controlled within-subjects design, online stimulation was applied to the left hemispheric hand-related motor cortex of 20 healthy subjects. A dual, double-dissociation task required participants to semantically discriminate concrete (hand/foot) from abstract verb primes as well as to respond with the hand or with the foot to verb-unrelated geometric targets. Analyses were conducted with linear mixed models. Semantic priming was confirmed by faster and more accurate reactions when the response effector was congruent with the verb’s body part. Cathodal stimulation induced faster responses for hand verb primes thus indicating a somatotopical distribution of cortical activation as induced by body-related verbs. Importantly, this effect depended on performance in semantic discrimination. The current results point to verb processing being selectively modifiable by neuromodulation and at the same time to a dependence of tDCS effects on enhanced simulation. We discuss putative mechanisms operating in this reciprocal dependence of neuromodulation and motor resonance.

    Additional information

    41598_2017_17326_MOESM1_ESM.pdf
  • Niemi, J., Laine, M., & Järvikivi, J. (2009). Paradigmatic and extraparadigmatic morphology in the mental lexicon: Experimental evidence for a dissociation. The mental lexicon, 4(1), 26-40. doi:10.1075/ml.4.1.02nie.

    Abstract

    The present study discusses psycholinguistic evidence for a difference between paradigmatic and extraparadigmatic morphology by investigating the processing of Finnish inflected and cliticized words. The data are derived from three sources of Finnish: from single-word reading performance in an agrammatic deep dyslexic speaker, as well as from visual lexical decision and wordness/learnability ratings of cliticized vs. inflected items by normal Finnish speakers. The agrammatic speaker showed awareness of the suffixes in multimorphemic words, including clitics, since he attempted to fill in this slot with morphological material. However, he never produced a clitic — either as the correct response or as an error — in any morphological configuration (simplex, derived, inflected, compound). Moreover, he produced more nominative singular errors for case-inflected nouns than he did for the cliticized words, a pattern that is expected if case-inflected forms were closely associated with their lexical heads, i.e., if they were paradigmatic and cliticized words were not. Furthermore, a visual lexical decision task with normal speakers of Finnish showed an additional processing cost (longer latencies and more errors on cliticized than on case-inflected noun forms). Finally, a rating task indicated no difference in relative wordness between these two types of words. However, the same cliticized words were judged harder to learn as L2 items than the inflected words, most probably due to their conceptual/semantic properties, in other words due to their lack of word-level translation equivalents in Standard Average European (SAE) languages. Taken together, the present results suggest that the distinction between paradigmatic and extraparadigmatic morphology is psychologically real.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). When peanuts fall in love: N400 evidence for the power of discourse. Journal of Cognitive Neuroscience, 18(7), 1098-1111. doi:10.1162/jocn.2006.18.7.1098.

    Abstract

    In linguistic theories of how sentences encode meaning, a distinction is often made between the context-free rule-based combination of lexical–semantic features of the words within a sentence (‘‘semantics’’), and the contributions made by wider context (‘‘pragmatics’’). In psycholinguistics, this distinction has led to the view that listeners initially compute a local, context-independent meaning of a phrase or sentence before relating it to the wider context. An important aspect of such a two-step perspective on interpretation is that local semantics cannot initially be overruled by global contextual factors. In two spoken-language event-related potential experiments, we tested the viability of this claim by examining whether discourse context can overrule the impact of the core lexical–semantic feature animacy, considered to be an innate organizing principle of cognition. Two-step models of interpretation predict that verb–object animacy violations, as in ‘‘The girl comforted the clock,’’ will always perturb the unfolding interpretation process, regardless of wider context. When presented in isolation, such anomalies indeed elicit a clear N400 effect, a sign of interpretive problems. However, when the anomalies were embedded in a supportive context (e.g., a girl talking to a clock about his depression), this N400 effect disappeared completely. Moreover, given a suitable discourse context (e.g., a story about an amorous peanut), animacy-violating predicates (‘‘the peanut was in love’’) were actually processed more easily than canonical predicates (‘‘the peanut was salted’’). Our findings reveal that discourse context can immediately overrule local lexical–semantic violations, and therefore suggest that language comprehension does not involve an initially context-free semantic analysis.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). Individual differences and contextual bias in pronoun resolution: Evidence from ERPs. Brain Research, 1118(1), 155-167. doi:10.1016/j.brainres.2006.08.022.

    Abstract

    Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often ‘formally’ ambiguous. But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event-related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., “Jennifer Lopez told Madonna that she had too much money.”). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias.
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Nieuwland, M. S. (2016). Quantification, prediction, and the online impact of sentence truth-value: Evidence from event-related potentials. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 316-334. doi:10.1037/xlm0000173.

    Abstract

    Do negative quantifiers like “few” reduce people’s ability to rapidly evaluate incoming language with respect to world knowledge? Previous research has addressed this question by examining whether online measures of quantifier comprehension match the “final” interpretation reflected in verification judgments. However, these studies confounded quantifier valence with its impact on the unfolding expectations for upcoming words, yielding mixed results. In the current event-related potentials study, participants read negative and positive quantifier sentences matched on cloze probability and on truth-value (e.g., “Most/Few gardeners plant their flowers during the spring/winter for best results”). Regardless of whether participants explicitly verified the sentences or not, true-positive quantifier sentences elicited reduced N400s compared with false-positive quantifier sentences, reflecting the facilitated semantic retrieval of words that render a sentence true. No such facilitation was seen in negative quantifier sentences. However, mixed-effects model analyses (with cloze value and truth-value as continuous predictors) revealed that decreasing cloze values were associated with an interaction pattern between truth-value and quantifier, whereas increasing cloze values were associated with more similar truth-value effects regardless of quantifier. Quantifier sentences are thus understood neither always in 2 sequential stages, nor always in a partial-incremental fashion, nor always in a maximally incremental fashion. Instead, and in accordance with prediction-based views of sentence comprehension, quantifier sentence comprehension depends on incorporation of quantifier meaning into an online, knowledge-based prediction for upcoming words. Fully incremental quantifier interpretation occurs when quantifiers are incorporated into sufficiently strong online predictions for upcoming words. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
  • Nijland, L., & Janse, E. (Eds.). (2009). Auditory processing in speakers with acquired or developmental language disorders [Special Issue]. Clinical Linguistics and Phonetics, 23(3).
  • Nivard, M. G., Gage, S. H., Hottenga, J. J., van Beijsterveldt, C. E. M., Abdellaoui, A., Bartels, M., Baselmans, B. M. L., Ligthart, L., St Pourcain, B., Boomsma, D. I., Munafò, M. R., & Middeldorp, C. M. (2017). Genetic overlap between schizophrenia and developmental psychopathology: Longitudinal and multivariate polygenic risk prediction of common psychiatric traits during development. Schizophrenia Bulletin, 43(6), 1197-1207. doi:10.1093/schbul/sbx031.

    Abstract

    Background: Several nonpsychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the etiology of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by correlated genetic risk factors. Methods: Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for 2588 children from the Netherlands Twin Register (NTR) and 6127 from the Avon Longitudinal Study of Parents and Children (ALSPAC). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at age 7, 10, 12/13, and 15 years in the 2 cohorts. Results were then meta-analyzed, and a meta-regression analysis was performed to test differences in effect sizes over age and disorders. Results: Schizophrenia PRS were associated with childhood and adolescent psychopathology. Meta-regression analysis showed differences in the associations over disorders, with the strongest association with childhood and adolescent depression and a weaker association for ODD/CD at age 7. The associations increased with age and this increase was steepest for ADHD and ODD/CD. Genetic correlations varied between 0.10 and 0.25. Conclusion: By optimally using longitudinal data across diagnoses in a multivariate meta-analysis this study sheds light on the development of childhood disorders into severe adult psychiatric disorders. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent onset psychopathology.
  • Nivard, M. G., Lubke, G. H., Dolan, C. V., Evans, D. M., St Pourcain, B., Munafo, M. R., & Middeldorp, C. M. (2017). Joint developmental trajectories of internalizing and externalizing disorders between childhood and adolescence. Development and Psychopathology, 29(3), 919-928. doi:10.1017/S0954579416000572.

    Abstract

    This study sought to identify trajectories of DSM-IV based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into the comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start at adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
  • Noordzij, M., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2009). Brain mechanisms underlying human communication. Frontiers in Human Neuroscience, 3:14. doi:10.3389/neuro.09.014.2009.

    Abstract

    Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neurons system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.
  • Norcliffe, E., & Jaeger, T. F. (2016). Predicting head-marking variability in Yucatec Maya relative clause production. Language and Cognition, 8(2), 167-205. doi:10.1017/langcog.2014.39.

    Abstract

    Recent proposals hold that the cognitive systems underlying language production exhibit computational properties that facilitate communicative efficiency, i.e., an efficient trade-off between production ease and robust information transmission. We contribute to the cross-linguistic evaluation of the communicative efficiency hypothesis by investigating speakers’ preferences in the production of a typologically rare head-marking alternation that occurs in relative clause constructions in Yucatec Maya. In a sentence recall study, we find that speakers of Yucatec Maya prefer to use reduced forms of relative clause verbs when the relative clause is more contextually expected. This result is consistent with communicative efficiency and thus supports its typological generalizability. We compare two types of cue to the presence of a relative clause, pragmatic cues previously investigated in other languages and a highly predictive morphosyntactic cue specific to Yucatec. We find that Yucatec speakers’ preferences for a reduced verb form are primarily conditioned on the more informative cue. This demonstrates the role of both general principles of language production and their language-specific realizations.
  • Norris, D., Cutler, A., McQueen, J. M., & Butterfield, S. (2006). Phonological and conceptual activation in speech comprehension. Cognitive Psychology, 53(2), 146-193. doi:10.1016/j.cogpsych.2006.03.001.

    Abstract

    We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.
  • Norris, D., Butterfield, S., McQueen, J. M., & Cutler, A. (2006). Lexically guided retuning of letter perception. Quarterly Journal of Experimental Psychology, 59(9), 1505-1515. doi:10.1080/17470210600739494.

    Abstract

    Participants made visual lexical decisions to upper-case words and nonwords, and then categorized an ambiguous N–H letter continuum. The lexical decision phase included different exposure conditions: Some participants saw an ambiguous letter “?”, midway between N and H, in N-biased lexical contexts (e.g., REIG?), plus words with unambiguous H (e.g., WEIGH); others saw the reverse (e.g., WEIG?, REIGN). The first group categorized more of the test continuum as N than did the second group. Control groups, who saw “?” in nonword contexts (e.g., SMIG?), plus either of the unambiguous word sets (e.g., WEIGH or REIGN), showed no such subsequent effects. Perceptual learning about ambiguous letters therefore appears to be based on lexical knowledge, just as in an analogous speech experiment (Norris, McQueen, & Cutler, 2003) which showed similar lexical influence in learning about ambiguous phonemes. We argue that lexically guided learning is an efficient general strategy available for exploitation by different specific perceptual tasks.
  • Norris, D., McQueen, J. M., & Cutler, A. (2016). Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience, 31(1), 4-18. doi:10.1080/23273798.2015.1081703.

    Abstract

    Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
  • Obleser, J., & Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends in Cognitive Sciences, 13, 14-19. doi:10.1016/j.tics.2008.09.005.

    Abstract

    Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.

  • Ocklenburg, S., Schmitz, J., Moinfar, Z., Moser, D., Klose, R., Lor, S., Kunz, G., Tegenthoff, M., Faustmann, P., Francks, C., Epplen, J. T., Kumsta, R., & Güntürkün, O. (2017). Epigenetic regulation of lateralized fetal spinal gene expression underlies hemispheric asymmetries. eLife, 6: e22784. doi:10.7554/eLife.22784.001.

    Abstract

    Lateralization is a fundamental principle of nervous system organization but its molecular determinants are mostly unknown. In humans, asymmetric gene expression in the fetal cortex has been suggested as the molecular basis of handedness. However, human fetuses already show considerable asymmetries in arm movements before the motor cortex is functionally linked to the spinal cord, making it more likely that spinal gene expression asymmetries form the molecular basis of handedness. We analyzed genome-wide mRNA expression and DNA methylation in cervical and anterior thoracal spinal cord segments of five human fetuses and show development-dependent gene expression asymmetries. These gene expression asymmetries were epigenetically regulated by miRNA expression asymmetries in the TGF-β signaling pathway and lateralized methylation of CpG islands. Our findings suggest that molecular mechanisms for epigenetic regulation within the spinal cord constitute the starting point for handedness, implying a fundamental shift in our understanding of the ontogenesis of hemispheric asymmetries in humans.